sentences | labels
---|---
[
"With the rise of e-commerce, people are accustomed to writing their reviews after receiving the goods.",
"These comments are so important that a bad review can have a direct impact on others buying.",
"Besides, the abundant information within user reviews is very useful for extracting user preferences and item properties.",
"In this paper, we investigate the approach to effectively utilize review information for recommender systems.",
"The proposed model is named LSTM-Topic matrix factorization (LTMF) which integrates both LSTM and Topic Modeling for review understanding.",
"In the experiments on popular review dataset Amazon , our LTMF model outperforms previous proposed HFT model and ConvMF model in rating prediction.",
"Furthermore, LTMF shows the better ability on making topic clustering than traditional topic model based method, which implies integrating the information from deep learning and topic modeling is a meaningful approach to make a better understanding of reviews.",
"Recommender systems (RSs) are widely used in the field of electronic commerce to provide personalized recommendation services for customers.",
"Most popular RSs are based on Collaborative Filtering (CF), which makes use of users' explicit ratings or implicit behaviour for recommendations (Koren, 2008).",
"But CF models suffer from data sparsity, which is also called cold-start problem.",
"Models perform poorly when there is few available data.",
"To alleviate this problem, utilizing user reviews can be a good approach because user reviews can directly reflect users' preferences and items' properties and exactly correspond to the user latent factors and item latent factors in CF models.",
"To understand user reviews, previous approaches are mainly based on topic modeling , a suite of algorithms that aim to discover the thematic information among documents (Blei, 2012).",
"The simplest and commonly used topic model is latent dirichlet allocation (LDA).",
"Recently, as deep learning shows great performance in computer vision (Krizhevsky et al., 2017) and NLP (Kim, 2014), some approaches combining deep learning with CF are proposed to capture latent context features from reviews.",
"However, we find there are some limitations in existing models.",
"First, the LDA algorithm used in previous models like Hidden Factors as Topics (HFT) (McAuley and Leskovec, 2013) ignores contextual information.",
"If a user writes I prefer apple than banana when choosing fruits in a review, we can clearly know the user's preference and recommend items including apple.",
"But LDA ignores the structural information and considers the two words as the same since they both appear once in the sentence.",
"Compared with topic modeling, deep learning methods such as Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) are able to retain more context information.",
"CNN uses sliding windows to capture local context and word order.",
"RNN considers a sentence as a word sequence, and the former word information will be reserved and passed back, which gives RNN the ability to retain the whole sentence information.",
"But these still exist some problems.",
"For CNN, the sizes of sliding windows are often small, which causes CNN model fails to link words in the sentence begin and end.",
"Given the review I prefer apple than google when choosing jobs , CNN can not notice the two words 'apple' and 'jobs' simultaneously if the windows size is small, so it will meet the ambiguity problem that the word 'apple' means fruit or company.",
"For RNN, al-1605 though it performs better than CNN on persisting former information, the information will still decreases with the length of sentences increasing.",
"So when a review is long, the effect of RNN is limited.",
"Faced with these problems, we propose to integrate deep learning and topic modeling to extract more global context information and get a deeper understanding of user reviews.",
"Deep learning methods can reserve context information, while topic modeling can provide word co-occurrence relation to make a supplement for information loss.",
"We use Long Short-Term Memory (LSTM) network for the deep learning part, because it is a special type of RNN which has better performance on gradient vanishing and long term dependence problems than vanilla RNN structure.",
"We use LDA for the topic modeling part.",
"Then the two parts are integrated into a matrix factorization framework.",
"The final model is named LSTM-Topic matrix factorization ( LTMF ).",
"Furthermore, as the topic modeling part and deep learning part are connected in our model, the topic clustering results will be influenced by the deep learning information.",
"In experiments, LTMF shows a better topic clustering ability than traditional LDA based HFT model.",
"This gives us some inspiration on using the integrating methods into other tasks like sentiment classification.",
"In the remainder of the paper, we first review previous work related to our work.",
"Then we address preliminaries and present our models in detail.",
"After that we evaluate our approach by comparing our approach with state-of-the-art algorithms.",
"Finally we conclude the paper with future work.",
"There has been some earlier approaches to extract review information for RSs.",
"Wang and Blei (2011) proposed collaborative topic regression (CTR) that combined topic modeling and collaborative filtering in a probabilistic model.",
"McAuley and Leskovec (2013) developed a statistical model HFT using a transfer function to combine rating and review tightly.",
"Ling et al. (2014) and Bao et al. (2014) proposed models similar to CTR and HFT with some structural differences.",
"presented a Bayesian model collaborative deep learning (CDL) leveraging SDAE neural networks as a text feature learning component.",
"Bansal et al. (2016) trained a gated recurrent units (GRUs) network to encode text sequences into latent vectors.",
"Zhao et al. (2016) trained a deep CNN to discover the abstract representation of movie posters and still frames, and incorporated it into a neighborhood CF model.",
"Kim et al. (2016) utilized CNN to retain contextual information in review, and developed a document context-aware recommendation model (ConvMF).",
"The ConvMF model is a recently proposed model and is shown to outperform PMF and CDL, and we choose it as a baseline in our experiments.",
"Zheng et al. (2017) proposed the Deep Cooperative Neural Networks (DeepCoNN) model which constructed two concurrent CNN to simultaneously model user and item reviews and then combined the features into Factorization Machine.",
"Attention in neural networks has been popular in nearly years, Seo et al. (2017) proposed a model using CNN with dual attention for rating prediction.",
"There are some similarity between the D-attn model with our LTMF model for we both want to extract more global information, where they use attention CNN model and we utilize the information from both topic modeling and deep learning.",
"The D-attn model fail to work if there is not enough reviews, while our LTMF model use review information as a supplementary of rating.",
"So it can still work effectively even there are few reviews.",
"Besides, Diao et al. (2014) proposed a method jointly modeling aspects, sentiments and ratings for movie recommendation.",
"Hu et al. (2015) proposed MR3 model to combine ratings, social relations and reviews together for rating prediction.",
"These hybrid models boost the performance than individual components, which also give us some inspiration on proposing the LTMF framework.",
"We use explicit ratings as the training and test data.",
"Suppose there are M users U = { u 1 , u 2 , ..., u i , ..., u M } and N items V = { v 1 , v 2 , ..., v j , ..., v N } , 1606 where each user and item is represented by a K dimension latent vector, u i RK and v j RK .",
"The rating sparse matrix is denoted as R RM N , where r ij is the rating of user u i on item v j .",
"D is the review (document) corpus where d ij is the review of user u i on item v j .",
"Probabilistic Matrix Factorization (PMF) (Mnih and Salakhutdinov, 2008) is an effective recommendation model that uses matrix factorization (MF) technique to find the latent features of users and items from a probabilistic perspective.",
"In PMF, the predicted rating R ij is expressed as the inner product of user latent vector u i and item latent vector v j : R ij = u Ti v j .",
"To get latent vectors, PMF minimises the following loss function: L = MX i NX j I ij ( R ij u Ti v j ) 2 + u MX i k u i k 2 F + v NX j k v j k 2 F , (1) where R ij is the observed rating.",
"The first part of",
"Eq.(1) is the sum-of-squared-error between predicted and observed ratings and the second part is quadratic regularization terms to avoid over-fitting.",
"u and v are corresponding regularization parameters.",
"I ij is the indicator function which equals 1 if i -th user rated j -th item, and equals 0 otherwise.",
"Hidden Factors as Topics (HFT) (McAuley and Leskovec, 2013) provides an effective approach to integrates topic modeling into traditional CF models.",
"It utilizes LDA, the simplest topic model which assumes there are k topics T = { t 1 , t 2 , ..., t k } in document corpus D .",
"Each document d D has a topic distribution d over T and each topic has a word distribution over a fixed vocabulary.",
"To connect the document-topic distribution and item factors v , HFT proposes a transformation function: j,k = exp( v j,k ) P k exp( v j,k ) , (2) where v j,k is the k -th latent factor in item vector v j and j,k is the k -th topic probability in item document-topic distribution j , is the parameter controlling the peakiness of the transformation.",
"Besides, HFT introduces an additional variable to ensure the word distribution k is a stochastic vector which satisfies P w k,w = 1 , the relation is denoted as follows: k,w = exp( k,w ) P w exp( k,w ) s.t. X w k,w = 1 .",
"(3) The final loss function is : L = MX i NX j I ij ( R ij R ij ) 2 t NX d X n N d log d,z d,n z d,n ,w d,n , (4) where R ij is predicted ratings, and are the topic and word distribution respectively, w d,n is the n -th word in document d and z d,n is the word's corresponding topic, t is a regularization parameters.",
"We propose the LSTM-Topic matrix factorization (LTMF) model, which integrates LSTM and topic modeling for recommendation.",
"The model utilizes both rating and review information.",
"For the rating part, we use probabilistic matrix factorization to extract rating latent vectors.",
"For the review part, we use LDA (following the way of HFT) to extract topic latent vectors and adopt an LSTM architecture to generate document latent vectors .",
"Then we combine the three vectors into a unified model.",
"The overview of LTMF model is shown in Figure 1. 4.1 Parameter Relation The left of Figure 1 is the parameters relations in LTMF model, which can be divided into three parts: = {U , V} is the parameters associated with rating MF, = { , } is the parameters associated with topic model, = { W, l } is the parameters associated with LSTM.",
"The shaded nodes are data (R:rating, D: reviews) where the others are parameters.",
"Single connection lines represent there are constraint relationship between the two nodes.",
"Double connections (e.g. V and ) mean the relationship is bidirectional so they can affect each other's results.",
"The right of Figure 1 is the LSTM architecture used in our models.",
"For the j -th item, we concatenate all of its reviews as one document se-1607 R D l W u v Topic Modeling LSTM PMF Document sequence: D j Embedding layer LSTM LSTM prefer fruits choosing when LSTM apple than banana LSTM LSTM LSTM LSTM LSTMLSTM layer I Full Connect layer LSTM latent vector Document latent vector: l j ... p p p p ...",
"quence D j .",
"Every word in the document sequence D j = ( w 1 , w 2 , ..., w nj ) will firstly be embedded into a p dimension vector.",
"Next, word vectors are sent into LSTM network according to the word order in D j and produces a latent vector.",
"Finally, the latent vector is sent to a full connect layer whose output is the document latent vector l j .",
"The above process can be written as: l j = LST M ( D j , W ) , (5) where D j is the input document sequence, W represents weights and bias variables in LSTM network.",
"Gaussian distribution is the basic prior hypothesis in our model.",
"We place zero-mean spherical Gaussian priors on user latent features u , LSTM weights W and observed ratings R .",
"For item vector v , we place the Gaussian prior on its difference with LSTM output l j : p ( v | l j , 2 v ) = NY j =1 N ( v j l j | 0 , 2 v I ) = NY j =1 N ( v j | l j , 2 v I ) .",
"The function is important for connecting ratings and reviews.",
"Although document vector l j is closed to item feature vector v j for they both reflect item's properties.",
"There still exists some discrepancies.",
"For example, when writing reviews, users usually write more about appearance and only briefly mention price.",
"So in review based document vector l j , the weight of appearance will be larger than rating based latent vector v j .",
"To preserve the discrepancy between v j and l j , we import the Gaussian noise vector v as the offset.",
"Finally, we maximize the log-posterior of the three parts and get the objective function as follows:",
"L = MX i NX j I ij ( R ij u Ti v j ) 2 t NX d X n N d log z d,n z d,n ,w d,n + u MX i k u i k 2 F + v NX j k v j l j k 2 F + W N k k W k k 2 F , (6)",
"X k where N k is the number of weighs in LSTM network, u , v , W are regularization parameters.",
"z is the topic assignment for each word, t is the regularization parameters to control the proportion of topic part.",
"The objective function of LTMF can be considered as an extended PMF model where the information from topic modeling and LSTM is included as regular terms.",
"In the next section, we will explain how LTMF leverages the information from topic modeling and LSTM, and why LTMF can combine the information of the two parts.",
"As shown in Figure 1, item vectors V connect with both topic part and LSTM part, which means the information from the two part will both affect the",
"result of item vectors.",
"If we take partial derivative of",
"Eq.(6) with respect to v j , the constraint relationship can be clearer: L v j = MX i =1 2 I ij ( R ij u Ti v j ) u i + 2 v ( v j l j ) t KX k =1 ( n j,k N j exp( v j,k ) P exp( v j,k )) , (7) In",
"Eq.(7), the optimization direction of v j is subject to two regular terms.",
"The former one is controlled by LSTM vector l j .",
"The latter one is controlled by topic parameters ( , n j,k , N j ).",
"Hence, we can leverages the information from both LSTM and topic modeling for recommendation.",
"Besides, note the double connections between item vector V and topic distribution in Figure 1. They mean the information from topic modeling can affect the result of V , while the change in V can also be passed to topic part and affect the review understanding result of topic modeling by Eq.2.",
"For V and LSTM vector l , the analysis is the same.",
"Indeed, item vectors V plays the role of transporter to connect LSTM part and topic modeling part.",
"This is why LTMF can combine the information of topic modeling and LSTM to make a deeper understanding of user reviews.",
"Furthermore, LTMF provides an effective framework to integrate topic model with deep learning networks for recommendation.",
"In experiments, we replace the LSTM part with CNN to make a comparison model.",
"Experiments show both models boost the rating prediction accuracy.",
"Our objective is to search:",
", ,z,, L",
"(8) Recall that is the parameters associated with ratings MF, is the parameters associated with topic modeling, z is the topic assignment for each word, is the peakiness parameter to control the transformation between item vector v and topic distribution , is the parameters associated with LSTM.",
"For v j is coupled with the parameters of topic modeling and LSTM vector, we cannot optimize these parameters independently.",
"We adopt a procedure that alternates between two steps.",
"In each step we fix some parameters and optimize the others.",
"The optimization process is shown below: 1. solve the objective by fixing z t and t : arg min , , L ( , , z t , , t ) to update t +1 , t +1 , t +1 .",
"2.",
"(a) update t +1 with fixing v jt +1 and document sequence D j .",
"(b) sample z t +1 d,j with probability p ( z t +1 d,j = k ) = t +1 k,w d,j .",
"In the step 1, we fix z and to update remaining terms , , by L-BFGS algorithm.",
"In the step 2, we fix , and to update LSTM parameters and topic model parameters z .",
"Since LSTM part and topic part are independent when item vectors V are certain, we can update the two term respectively.",
"In step",
"2(a), we update by back propagation algorithm.",
"With fixing the other parameters, the objective function of W can be seen as a weighted squared error function ( k v j l j k 2 F ) with L 2 regularized terms ( k W k 2 F ), which means we can use D j as the input and v j is the label to run the back propagation process.",
"In step",
"2(b), we iterates through all documents and each word within to update z d,j via Gibbs Sampling.",
"The reason why we do not divide the process into three steps is that the step",
"2(a) and",
"2(b) are independent with step 1 finished, which means we can parallelize the two steps.",
"Finally, we repeat these two steps until convergence.",
"In practice, we run the step 1 with 5 gradient iterations using LBFGS, then we iterate the LSTM part 5 times.",
"At the same time, we update the topic model part once.",
"The whole process is called a cycle, and it usually takes 30 cycles to reach a local optimum.",
"In addition to the gradient of v j , the gradients of other parameters used in step 1 are listed as follows: L u i = NX j =1 2 I ij ( R ij u Ti v j ) v j + 2 u u i .",
"(9) L = t N w X w =1 KX k =1 (cid:18) n k,w N k exp( k,w ) z w (cid:19) (10) L = t NX j =1 KX k =1 v j,k (cid:18) n jk N j exp( v j,k ) z j (cid:19) .",
"(11) where is used to determine word distribution by",
"Eq.(3); n k,w is the number of times that word 1609 Dataset users items ratings av.",
"w occurs in topic k ; N w is the word vocabulary size of the document corpus; N k is the number of words in topic k ; n j,k is the number of times when topic k occurs in the document of item j ; N j is the total number of words in document j ; z w and z j are the corresponding normalizers: z w = KX k =1 exp( k,w ) , z j = KX k =1 exp( v j,k ) .",
"We use the real-world Amazon dataset 1 (collected by McAuley et al. (2015)) for our experiments.",
"For the original dataset is too large, we choose 10 sub datasets in experiments.",
"To increase data density, we remove users which have less than 3 ratings.",
"For raw review texts, we adopt the same preprocessing methods as ConvMF 2 : set the maximum length of a item document to 300; remove common stop words and document specific words which have document frequency higher than 0.5; choose top 8000 distinct words as the vocabulary; remove all non-vocabulary words to construct input document sequences.",
"After preprocessing, the statistics of datasets are listed in Table 1, where the abbreviations of datasets are shown in parentheses.",
"The baselines used in our experiments are listed as follows:",
"PMF: Probabilistic Matrix Factorization (PMF) (Mnih and Salakhutdinov, 2008) is a standard matrix factorization model for RSs.",
"It only uses rating information.",
"HFT: This is a state-of-art method that combines reviews with ratings (McAuley and Leskovec, 2013).",
"It utilizes LDA to capture unstructured textual information in reviews.",
"ConvMF: Convolutional Matrix Factorization (ConvMF) (Kim et al., 2016) is a recently proposed recommendation model.",
"It utilizes CNN to capture contextual information of item reviews.",
"LMF: LSTM Matrix Factorization (LMF) is a submodel of LTMF without the topic part.",
"We can compare it with ConvMF to show the effectiveness of LSTM than CNN on review understanding.",
"CTMF: We modify the LTMF model by replacing the LSTM part with CNN (following the structure of ConvMF) and construct the comparison model CNN-Topic Matrix Factorization (CTMF).",
"CTMF can be used to evaluate the effectiveness of combining deep learning and topic modeling.",
"In experiments, we randomly split one dataset into training set, test set, validation set under proportions of 80%, 10%, 10%, where each user and item appears at least once in the training set.",
"We use Mean Square Error (MSE) as metric to evaluate various models.",
"For all models, we set the dimension of user and item latent vectors K = 5 , and initialize the vectors randomly between 0 and 1. Topic number and the dimension of document latent vector l are also set to 5.",
"For methods using deep learning, we initialized word latent vectors randomly with the embedding dimension p = 200 .",
"The optimization algorithm used in back propagation is rmsprop and the activation function used in fully connected layer is tanh .",
"In LSTM network, we set the output dimension to 128 and dropout rate 0.2.",
"For CTMF, we adopt the same setting as ConvMF where the sliding window sizes is { 3 , 4 , 5 } and the shared weights per window size is 100.",
"Hyper parameters are set as follows.",
"For PMF, u = v = 0 .",
"1 .",
"For HFT, we select t { 1 , 5 } which gives better result in each experiment.",
"For LMF and ConvMF, we set u = 0 .",
"1 and v = 5 .",
"For LTMF and CTMF, we select t { 0 .",
"05 , 0 .",
"1 , 0 .",
"5 } which gives the lowest validation set error.",
"We evaluate these models and report the lowest test set error on each dataset.",
"The MSE results are shown in Table 2 where the best result of each dataset is highlighted in bold and the standard deviations of corresponding MSE are recorded in parenthesis.",
"We can see that the LTMF model consistently outperform these baselines on all datasets .",
"This clearly confirms the effectiveness of our proposed method.",
"To make a more intuitive comparison, the improvement histograms of these models are 0% 5% 10% 15% 20% AIV AFA BB MI OP PS GGF VG PLG DMI m p r ov e m e n t ov e r PMF HFTConvMFLMF 0% 1% 2% 3% 4% 5% AIV AFA BB MI OP PS GGF VG PLG DMI m p r ov m e n t CTMF vs ConvMF LTMF vs LMF Figure 2: Above: Improvements of HFT, ConvMF and LMF, compared with PMF on different datasets.",
"The figure above are the improvements of HFT, ConvMF and LMF compared with PMF on different datasets, where PMF only uses rating information and the other three use both rating and review information with different approaches.",
"We observe that all three methods make significant improvements over PMF, which indicates review information is helpful to model user and item features as well as improve recommendation results.",
"Compared with HFT, LMF makes over 3% improvement on 9 out of the 10 datasets.",
"ConvMF performs better than HFT while LMF still obtains over 3% improvement than ConvMF on 7 datasets.",
"The differences between HFT, ConvMF and LMF can be attributed to their individual methods for re-1611 1.20 1.30 1.40 1.50 1.60 1.70 1.80 1.90 2.00 1 2 3 4 5 6 7 8 9 10 MSE a PMF b HFT c ConvMFd CTMF e LMF f LTMF 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 1 2 3 4 5 6 7 8 9 10 I n c r ea s e d v a l u e t h a n PMFHFT ConvMF CTMF LMF LTMF Figure 3: Results for recommendation within limited ratings and reviews.",
"views understanding.",
"As mentioned in Section 1, Topic Modeling based HFT only considers the coexistence of words in texts and ignores structural context information.",
"CNN based ConvMF lacks the ability to capture global context information due to the size limitation of sliding windows.",
"This is exactly what LSTM possesses and why LSTM based LMF model outperforms ConvMF.",
"The figure below is the comparison of two integrated models (LTMF and CTMF) that import topic information with two original models that only use deep learning (LMF and ConvMF).",
"We can see that both integrated models outperform the original models, which confirms our conjecture that recommendation results can be improved by combining structural and unstructured information .",
"For CTMF model, it makes over 2% improvement on 5 out of 10 datasets compared with ConvMF.",
"As to LTMF model, it achieves nearly 1% improvements that LMF on 7 out of 10 datasets.",
"The reason why LTMF gains less promotion can be explained from two sides.",
"Numerically, for the comparison model LMF is already a strong baseline proposed by ourselves, it's more difficult to make a significant improvement.",
"Theoretically, since LSTM can persist enough global information when the input sentence is relatively short, the supplements of topic information in LTMF are not so remarkable.",
"As an illustration, we can compare the results on datasets DM and VG.",
"For the dataset DM, as shown in Table 1, it has the fewest words per item (38.79) and the improvement of LTMF is minimum.",
"But for the dataset VG, it has the most words per item (92.55).",
"The global context information obtained by LSTM will still decrease with such long sentences, and the topic information can make an effective supplement.",
"So the improvement of LTMF on VG is greater and comparable with CTMF.",
"Rating data and review data are always sparse in RSs.",
"To compare these models on making recommendation in different data sparsity, especially for new users who only have limited ratings, we choose the dataset Baby and refilter it to make sure every user has at least N ratings ( N varies from 1 to 10).",
"A greater N means the user has rated more items, so the data sparsity problem is weaker.",
"We test all models on the 10 subsets of Baby with the same dataset split ratio and text preprocessing.",
"The final results are shown in Figure 3, where the left one is the MSE values of all models, and the right one is the increase of the other models compared with PMF.",
"We can observe that all models gain better recommendation accuracy with the increment of user rating number N .",
"In other words, user and item latent features can be better extracted with more useful information.",
"When N is small, especially when N = { 2 , 3 } , the models which utilize both review and rating information achieve biggest improvements over PMF.",
"It suggests that review information can provide effective supplement when rating data is scarce .",
"With the increase of N , the improvements of all review used models become smaller.",
"This is because models can extract more features from gradually dense ratings data, and the effectiveness of review data begins to decrease.",
"Same as the previous experiment, our LTMF model achieve the best results in the comparison with other models.",
"In HFT, the result of topic words only depends on the information from Topic Modeling.",
"But in our 1612 Office Product (OP) topic1 topic2 topic3 topic4 topic5 envelope markers pins wallet planner erasers compatible scale notebooks keyboard needs lead huge window tab numbers mail credit notebook remove letters nice document cardboard stickers christmas camera attach plug clips Table 3: Top topic words discovered by HFT Office Product (OP) topic1 topic2 topic3 topic4 topic5 bands bags scale wallet folder drum camera document clock folders remote cabinet magnets coins binder chalk compatible monitors notebooks stickers presentation tray pins shredder remove buttons party fax bookmark head Table 4: Top topic words discovered by LTMF proposed LTMF framework, the information extracted by LSTM and Topic Modeling will both affect the final word clustering results.",
"So, we can compare the topic words discovered by HFT and LTMF to evaluate whether combing LSTM and Topic Modeling is able to make a better understanding of user reviews.",
"We choose the dataset Office Product (OP) and show the top topic words of HFT and LTMF in Table 3 and Table 4.",
"As we can see, there are many words existed in both tables (e.g. wallet, note-books, document).",
"These words are closely related to the category of dataset Office Product, which implies both models can get a good interpretation of user reviews.",
"However, when we carefully compare the two tables, there exists some differences.",
"In Table 3, there are some adjectives and verbs which have little help for topic clustering (e.g. nice, huge, attach), but they still get large weights and appear in the front of topic words list.",
"Obviously, HFT misinterprets these words for they usually appear together with the real topic words.",
"In Table 4, we are not able to find them in top words list, because extra information from LSTM makes a timely supplement.",
"Besides, similar situations also occur on words document and compati-ble.",
"The word document is an apparent topic word, so LTMF gives it a larger weight in topic words list.",
"For the word compatible, as an adjectives, it can provide less topic information than nouns, so LTMF decreases its weight and put camera in the second place.",
"In this paper, we investigate the approach to effectively utilize review information for RSs.",
"We propose the LTMF model which integrates both LSTM and Topic modeling in context aware recommendation.",
"In the experiments, our LTMF model outperforms HFT and ConvMF in rating prediction especially when the data is sparse.",
"Furthermore, LTMF shows better ability on making topic clustering than traditional topic model based method HFT, which implies integrating the information from deep learning and topic modeling is a meaningful approach to make a better understanding of reviews.",
"In the future, we plan to evaluate more complex networks for recommendation tasks under the framework proposed by LTMF.",
"Besides, we are interested to apply the method of combing topic model and deep learning into some traditional NLP tasks.",
"We thank the National Key Research and Development Program of China (2016YFB0201900), National Natural Science Foundation of China (U1611262), Guangdong Natural Science Funds for Distinguished Young Scholar (2017A030306028), Pearl River Science and Technology New Star of Guangzhou, and Guangdong Province Key Laboratory of Big Data Analysis and Processing for the support of this research."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"objective",
"method",
"other"
] |
[
"Parody is a figurative device used to imitate an entity for comedic or critical purposes and represents a widespread phenomenon in social media through many popular parody accounts.",
"In this paper, we present the first computational study of parody.",
"We introduce a new publicly available data set of tweets from real politicians and their corresponding parody accounts.",
"We run a battery of supervised machine learning models for automatically detecting parody tweets with an emphasis on robustness by testing on tweets from accounts unseen in training, across different genders and across countries.",
"Our results show that political parody tweets can be predicted with an accuracy up to 90%.",
"Finally, we identify the markers of parody through a linguistic analysis.",
"Beyond research in linguistics and political communication, accurately and automatically detecting parody is important to improving fact checking for journalists and analytics such as sentiment analysis through filtering out parodical utterances.",
"1 1 Introduction Parody is a figurative device which is used to imitate and ridicule a particular target (Rose, 1993) and has been studied in linguistics as a figurative trope distinct to irony and satire (Kreuz and Roberts, 1993; Rossen-Knill and Henry, 1997).",
"Traditional forms of parody include editorial cartoons, sketches or articles pretending to have been authored by the parodied person.",
"2 A new form Equal contribution.",
"2 The Kapou Opa' column by K. Maniatis parodying Greek popular persons was a source of inspiration for this work https://www.oneman.gr/originals/to -imerologio-karantinas-tou-dimitri-koutsoumpa/",
"of parody recently emerged in social media, and Twitter in particular, through accounts that impersonate public figures.",
"Highfield (2016) defines parody accounts acting as: a known, real person, for obviously comedic purposes.",
"There should be no risk of mistaking their tweets for their subject's actual views; these accounts play with stereotypes of these figures or juxtapose their public image with a very different, behind-closed-doors persona .",
"A very popular type of parody is political parody which plays an important role in public speech by offering irreverent interpretations of political personas (Hariman, 2008).",
"Table 1 shows examples of very popular (over 50k followers) and active (thousands of tweets sent) political parody accounts on Twitter.",
"Sample tweets show how the style and topic of parody tweets are similar to those from the real accounts, which may pose issues to automatic classification.",
"While closely related figurative devices such as irony and sarcasm have been extensively studied in computational linguistics (Wallace, 2015; Joshi et al., 2017), parody yet to be explored using computational methods.",
"In this paper, we aim to bridge this gap and conduct, for the first time, a systematic study of political parody as a figurative device in social media.",
"To this end, we make the following contributions: 1. A novel classification task where we seek to automatically classify real and parody tweets.",
"For this task, we create a new large-scale publicly available data set containing a total of 131,666 English tweets from 184 parody accounts and corresponding real accounts of politicians from the US, UK and other countries (Section 3); 2. Experiments with featureand neural-based machine learning models for parody detection, which achieve high predictive accuracy of up to 89.7% F1.",
"These are focused on the robustness of classification, with test data from:",
"a) users;",
"b) genders;",
"c) locations; unseen in training (Section 5); 3. Linguistic analysis of the markers of parody tweets and of the model errors (Section 6).",
"We argue that understanding the expression and use of parody in natural language and automatically identifying it are important to applications in computational social science and beyond.",
"Parody tweets can often be misinterpreted as facts even though Twitter only allows parody accounts if they are explicitly marked as parody 3 and the poster does not have the intention to mislead.",
"For example, the Speaker of the US House of Representatives, Nancy Pelosi, falsely cited a Michael Flynn parody tweet; 4 and many users were fooled by a Donald Trump parody tweet about Dow Joans'.",
"5 Thus, accurate parody classification methods can be useful in downstream NLP applications such as automatic fact checking (Vlachos and Riedel, 2014) and rumour verifi-cation (Karmakharm et al., 2019), sentiment analysis (Pang et al., 2008) or nowcasting voting intention (Tumasjan et al., 2010; Lampos et al., 2013; Tsakalidis et al., 2018).",
"Beyond NLP, parody detection can be used in:",
"(i) political communication, to study and understand the effects of political parody in the public speech on a large scale (Hariman, 2008; Highfield, 2016);",
"(ii) linguistics, to identify characteristics of figurative language (Rose, 1993; Kreuz and Roberts, 1993; Rossen-Knill and Henry, 1997);",
"(iii) network science, to identify the adoption and diffusion mechanisms of parody (Vosoughi et al., 2018).",
"Parody in Linguistics Parody is an artistic form and literary genre that dates back to Aristophanes in ancient Greece who parodied argumentation styles in Frogs .",
"Verbal parody was studied in linguistics as a figurative trope distinct to irony and satire (Kreuz and Roberts, 1993; Rossen-Knill and Henry, 1997) and researchers long debated its definition and theoretic distinctions to other types of humor (Grice et al., 1975; Sperber, 1984; Wilson, 2006; Dynel, 2014).",
"In general, verbal parody 3 Both the profile description and account name need to mention this https://help.twitter.com/en/ru les-and-policies/parody-account-policy 4 https://tinyurl.com/ybbrh74g 5 https://tinyurl.com/s34dwgm involves a highly situated, intentional, and conventional speech act (Rossen-Knill and Henry, 1997) composed of both a negative evaluation and a form of pretense or echoic mention (Sperber, 1984; Wilson, 2006; Dynel, 2014) through which an entity is mimicked or imitated with the goal of criticizing it to a comedic effect.",
"Thus, imitative composition for amusing purpose is an an inherent characteristic of parody (Franke, 1971).",
"The parodist intentionally re-presents the object of the parody and flaunts this re-presentation (Rossen-Knill and Henry, 1997).",
"Parody on Social Media Parody is considered an integral part of Twitter (Vis, 2013) and previous studies on parody in social media focused on analysing how these accounts contribute to topical discussions (Highfield, 2016) and the relationship between identity, impersonation and authenticity (Page, 2014).",
"Public relation studies showed that parody accounts impact organisations during crises while they can become a threat to their reputation (Wan et al., 2015).",
"Satire Most related to parody, satire has been tangentially studied as one of several prediction targets in NLP in the context of identifying disinformation (McHardy et al., 2019; de Morais et al., 2019).",
"(Rashkin et al., 2017) compare the language of real news with that of satire, hoaxes, and propaganda to identify linguistic features of unreliable text.",
"They demonstrate how stylistic characteristics can help to decide the text's veracity.",
"The study of parody is therefore relevant to this topic, as satire and parodies are classified by some as a type of disinformation with no intention to cause harm but has potential to fool' (Wardle and Der-akhshan, 2018).",
"Irony and Sarcasm There is a rich body of work in NLP on identifying irony and sarcasm as a classification task (Wallace, 2015; Joshi et al., 2017).",
"Van Hee et al. (2018) organized two open shared tasks.",
"The first aims to automatically classify tweets as ironic or not, and the second is on identifying the type of irony expressed in tweets.",
"However, the definition of irony is usually a trope whose actual meaning differs from what is literally enunciated' (Van Hee et al., 2018), following the Gricean belief that the hallmark of irony is to communicate the opposite of the literal meaning (Wilson, 2006), violating the first maxim of Quality (Grice et al., 1975).",
"In this Account type Twitter Handle Sample tweet Real @realDonaldTrump The Republican Party, and me, had a GREAT day yesterday with respect to the phony Impeachment Hoax, & yet, when I got home to the White House & checked out the news coverage on much of television, you would have no idea they were reporting on the same event.",
"sense, irony is treated in NLP in a similar way as sarcasm (Gonzalez-Ibanez et al., 2011; Khattri et al., 2015; Joshi et al., 2017).",
"In addition to the words in the utterance, further using the user and pragmatic context is known to be informative for irony or sarcasm detection in NLP (Bamman and Smith, 2015; Wallace, 2015).",
"For instance, Oprea and Magdy (2019) make use of user embeddings for textual sarcasm detection.",
"In the design of our data splits, we aim to limit the contribution of this aspects from the results.",
"Relation to other NLP Tasks The pretense aspect of parody relates our task to a few other NLP tasks.",
"In authorship attribution, the goal is to predict the author of a given text (Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009).",
"However, there is no intent for the authors to imitate the style of others and most differences between authors are in the topics they write about, which we aim to limit by focusing on political parody.",
"Further, in our setups, no tweets from an author are in both training and testing to limit the impact of terms specific to a particular person.",
"Pastiche detection (Dinu et al., 2012) aims to distinguish between an original text and a text written by someone aiming to imitate the style of the original author with the goal of impersonating.",
"Most similar in experimental setup to our task, Preotiuc-Pietro and Devlin Marier (2019) aim to distinguish between tweets published from the same account by different types of users: politicians or their staff.",
"While both pastiches and staff writers aim to present similar content with similar style to the original authors, the texts lack the humorous component specific of parodies.",
"explored the inference of user characteristics.",
"Past research studied predicting the type of a Twitter account, most frequently between individual or organizational, using linguistic features (De Choud-hury et al., 2012; McCorriston et al., 2015; Mac Kim et al., 2017).",
"A broad literature has been devoted to predicting personal traits from language use on Twitter, such as gender (Burger et al., 2011), age (Nguyen et al., 2011), ge-olocation (Cheng et al., 2010), political preference (Volkova et al., 2014; Preotiuc-Pietro et al., 2017), income (Preotiuc-Pietro et al., 2015; Aletras and Chamberlain, 2018), impact (Lampos et al., 2014), socio-economic status (Lampos et al., 2016), race (Preotiuc-Pietro and Ungar, 2018) or personality (Schwartz et al., 2013; Preotiuc-Pietro et al., 2016).",
"We define parody detection in social media as a binary classification task performed at the social media post level.",
"Given a post T , defined as a sequence of tokens T = { t 1 , ..., t n } , the aim is to label T either as parody or genuine.",
"Note that one could use social network information but this is out of the paper's scope as we only focus on parody as a linguistic device.",
"We create a new publicly available data set to study this task, as no other data set is available.",
"We perform our analysis on a set of users from the same domain (politics) to limit variations caused by topic.",
"We first identify real and parody accounts of politicians on Twitter posting in English from the United States of America (US), the United Kingdom (UK) and other accounts posting in English from the rest of the world.",
"We opted to use Twitter because it is arguably the most popular platform for politicians to interact with the public or with other politicians (Parmelee and Bichard, 2011).",
"For example, 67% of prospective parliamentary candidates for the 2019 UK general election have an active Twitter account.",
"6 Twitter also allows to maintain parody accounts, subject to adding explicit markers in both the user bio and handle such as parody , fake .",
"7 Finally, we label tweets as parody or real, depending on the type of account they were posted from.",
"We highlight that we are not using user description or handle names in prediction, as this would make the task trivial.",
"We first query the public Twitter API using the following terms: { parody, #parody, parody account, fake, #fake, fake account, not real } to retrieve candidate parody accounts according to Twitter's policy.",
"From that set, we exclude any accounts matching fan or commentary in their bio or account name since these are likely to be not posting parodical content.",
"We also exclude private and deactivated accounts and accounts with a majority of non-English tweets.",
"After collecting this initial set of parody candidates, the authors of the paper manually inspected up to the first ten original tweets from each candidate to identify whether an account is a parody or not following the definition of a public figure parody account from Highfield (2016) (see Section 1), further filtering out non-parody accounts.",
"We keep a single parody account in case of multiple parody accounts about the same person.",
"Finally, for each remaining account, the authors manually identified the corresponding real politician account to collect pairs of real and parody.",
"Following the process above, we were able to identify parody accounts of 103 unique people, with 81 having a corresponding real account.",
"The authors also identified the binary gender and location (country) of the accounts using publicly available records.",
"This resulted in 21.6% female accounts (women parliamentarians percentages as of 2017: 19% US, 30% UK, 28.8% OECD average).",
"8 6 https://www.mpsontwitter.co.uk/ 7 https://help.twitter.com/en/rules-an d-policies/parody-account-policy 8 https://data.oecd.org/inequality/wom en-in-politics.htm Person Train Dev Test Total Avg.",
"The majority of the politicians are located in the US (44.5%) followed by the UK (26.7%) while 28.8% are from the rest of the world (e.g. Germany, Canada, India, Russia).",
"We collect all of the available original tweets, excluding retweets and quoted tweets, from all the parody and real politician accounts.",
"9 We further balance the number of tweets in a real parody account pair in order for our experiments and linguistic analysis not to be driven by a few prolific users or by imbalances in the tweet ratio for a specific pair.",
"We keep a ratio of maximum 20% between the real and parody tweets per pair by keeping all tweets from the less prolific account and randomly down-sampling from the more prolific one.",
"Subsequently, for the parody accounts with no corresponding real account, we sample a number of tweets equal to the median number of tweets for the real accounts.",
"Finally, we label tweets as parody or real, depending on the type of account they come from.",
"In total, the data set contains 131,666 tweets, with 65,710 real and 65,956 parody.",
"To test that automatically predicting political parody is robust and generalizes to held-out situations not included in the training data, we create the following three data splits for running experiments:",
"Person Split We first split the data by adding all tweets from each real parody account pair to a single split, either train, development or test.",
"To obtain a fairly balanced data set without pairs of accounts with a large number of tweets dominating any splits, we compute the mean between real and parody tweets for each account, and stratify them, with pairs of proportionally distributed means across the train, development, and test sets (see Table 2).",
"9 Up to maximum 3200 tweets/account according to Twitter API restrictions.",
"Gender Split We also split the data by the gender of the politicians into training, development and test, obtaining two versions of the data:",
"(i) one with female accounts in train/dev and male in test; and",
"(ii) male accounts in train/dev and female in test (see Table 3).",
"Location split Finally, we split the data based on the location of the politicians.",
"We group the accounts in three groups of locations: US, UK and the rest of the world ( RoW ).",
"We obtain three different splits, where each group makes up the test set and the other two groups make up the train and development set (see Table 4).",
"We preprocess text by lower-casing, replacing all URLs and anonymizing all mentions of usernames with placeholder token.",
"We preserve emoticons and punctuation marks and replace tokens that appear in less than five tweets with a special un-known' token.",
"We tokenize text using DLATK (Schwartz et al., 2017), a Twitter-aware tokenizer.",
"We experiment with a series of approaches to classification of parody tweets, ranging from linear models, neural network architectures and pretrained contextual embedding models.",
"Hyperpa-rameter selection is included in the Appendix.",
"LR-BOW As a first baseline, we use a logistic regression with standard bag-of-words (LR-BOW) representation of the tweets.",
"LR-BOW+POS We extend LR-BOW using syntactic information from Part-Of-Speech (POS) tags.",
"We first tag all tweets in our data using the NLTK tagger and then we extract bag-of-words features where each unigram consists of a token with its associated POS tag.",
"The first neural model is a bidirectional Long-Short Term Memory (LSTM) network (Hochre-iter and Schmidhuber, 1997) with a self-attention mechanism (BiLSTM-Att; Zhou et al. (2016)).",
"Tokens t i in a given tweet T = { t 1 , ..., t n } are mapped to embeddings and passed through a bidirectional LSTM.",
"A single tweet representation ( h ) is computed as the sum of the resulting contex-tualized vector representations ( (cid:80) i a i h i ) where a i is the self-attention score in timestep i .",
"The tweet representation ( h ) is subsequently passed to the output layer using a sigmoid activation function.",
"The Universal Language Model Fine-tuning (ULMFit) is a method for efficient transfer learning (Howard and Ruder, 2018).",
"The key intuition is to train a text encoder on a language modelling task (i.e. predicting the next token in a sequence) where data is abundant, then fine-tune it on a target task where data is more limited.",
"During fine-tuning, ULMFit uses gradual layer unfreezing to avoid catastrophic forgetting.",
"We experiment with using AWD-LSTM (Merity et al., 2018) as the base text encoder pretrained on the Wiki-text 103 data set and we fine-tune it on our own parody classification task.",
"For this purpose, after the AWS-LSTM layers, we add a fully-connected layer with a ReLU activation function followed by an output layer with a sigmoid activation function.",
"Before each of these two additional layers, we perform batch normalization.",
"Bidirectional Encoder Representations from Transformers (BERT) is a language model based on transformer networks (Vaswani et al., 2017) pre-trained on large corpora (Devlin et al., 2019).",
"The model makes use of multiple multi-head attention layers to learn bidirectional embeddings for input tokens.",
"It is trained for masked language modelling, where a fraction of the input tokens in a given sequence are masked and the task is to predict a masked word given its context.",
"BERT uses wordpieces which are passed through an embedding layer and get summed together with positional and segment embeddings.",
"The former introduce positional information to the attention layers, while the latter contain information about the location of a segment.",
"Similar to ULMFit, we fine-tune the BERT-base model for predicting parody tweets by adding an output dense layer for binary classification and feeding it with the classification' token.",
"We further experiment with RoBERTa (Liu et al., 2019), which is an extenstion of BERT trained on more data and different hyperparameters.",
"RoBERTa has been showed to improve performance in various benchmarks compared to the original BERT (Liu et al., 2019).",
"XLNet is another pre-trained neural language model based on transformer networks (Yang et al., 2019).",
"XLNet is similar to BERT in its structure, but is trained on a permutated (instead of masked) language modelling task.",
"During training, sentence words are permuted and the model predicts a word given the shuffled context.",
"We also adapt XLNet for predicting parody, similar to BERT and ULMFit.",
"Linear models For the LR-BOW, we use n-grams with n = (1, 2), n { (1, 1), (1, 2), (1, 3) weighted by TF.IDF.",
"For the LR-BOW+POS, we use TF with POS n-grams where n = (1, 3).",
"For both baselines we use L2 regularization.",
"BiLSTM-Att We use 200-dimensional GloVe embeddings (Pennington et al., 2014) pre-trained on Twitter data.",
"The maximum sequence length is set to 50 covering 95% of the tweets in the training set.",
"The LSTM size is h = 300 where h { 50 , 100 , 300 } with dropout d = 0.5 where d { .",
"2 , .",
"5 } .",
"We use Adam (Kingma and Ba, 2014) with default learning rate, minimizing the binary cross-entropy using a batch size of 64 over 10 epochs with early stopping.",
"ULMFit We first update only the AWD-LSTM weights with a learning rate l = 2e-3 for one epoch where l { 1e-3, 2e-3, 4e-3 } for language modeling.",
"Then, we update both the AWD-LSTM and embedding weights for one more epoch, using a learning rate of l = 2e-5 where l { 1e-4, 2e-5, 5e-5 } .",
"The size of the intermediate fully-connected layer (after AWD-LSTM and before the output) is set by default to 50 .",
"Both in the intermediate and output layers we use default dropout of 0 .",
"08 and 0 .",
"1 respectively from Howard and Ruder (2018).",
"BERT and RoBERTa For BERT, we used the base model (12 layers and 110M total parameters) trained on lowercase English.",
"We fine-tune it for 1 epoch with a learning rate l = 5e-5 where l { 2e-5, 3e-5, 5e-5 } as recommended in Devlin et al. (2019) with a batch size of 128 .",
"For RoBERTa, we use the same fine-tuning parameters as BERT.",
"XLNet We use the same parameters as BERT except for the learning rate, which we set at l = 4e-5 where l { 2e-5, 4e-5, 5e-5 } .",
"This section contains the experimental results obtained on all three different data splits proposed in Section 3. We evaluate our methods (Section 4) using several metrics, including accuracy, precision, recall, macro F1 score, and Area under the ROC (AUC).",
"We report results over three runs using different random seeds and we report the average and standard deviation.",
"Table 5 presents the results for the parody prediction models with the data split by person.",
"We observe the architectures using pre-trained text encoders (i.e. ULMFit, BERT, RoBERTa and XLNet) outperform both neural (BiLSTM-Att) and feature-based (LR-BOW and LR-BOW+POS) by a large margin across metrics with transformer architectures (BERT, RoBERTa and XLNet) performing best.",
"The highest scoring model, Person Model Acc P R F1 AUC LR-BOW 73.95 0.00 70.08 0.01 83.53 0.02 76.19 0.00 73.96 0.00 LR-BOW+POS 74.33 0.00 71.34 0.00 81.19 0.00 75.95 0.00 74.34 0.00 BiLSTM-Att 79.92 0.01 81.63 0.01 77.11 0.03 79.29 0.02 79.91 0.01 ULMFit 81.11 0.38 75.57 2.03 84.97 0.87 81.05 0.42 81.10 0.38 BERT 87.65 0.29 87.63 0.58 87.67 0.40 87.65 0.18 87.65 0.32 RoBERTa 90.01 0.35 90.90 0.55 88.45 0.22 89.66 0.33 90.05 0.29 XLNet 86.45 0.41 88.24 0.52 85.18 0.40 86.68 0.37 86.45 0.36 Table 5: Accuracy (Acc), Precision (P), Recall (R), F1-Score (F1) and ROC-AUC for parody prediction splitting by person ( std. dev.).",
"RoBERTa, classifies accounts (parody and real) with an accuracy of 90, which is more than 8% greater than the best non-transformer model (the ULMFit method).",
"RoBERTa also outperforms the Logistic Regression baselines (LR-BOW and LR-BOW+POS) by more than 16 in accuracy and 13 in F1 score.",
"Furthermore, it is the only model to score higher than 90 on precision.",
"Table 6 shows the F1-scores obtained when training on the gender splits, i.e. training on male and testing on female accounts and vice versa.",
"We first observe that models trained on the male set are in general more accurate than models trained on the female set, with the sole exception of ULMFit.",
"This is probably due to the fact that the data set is imbalanced towards men as shown in Table 3 (see also Section 3).",
"We also do not observe a dramatic performance drop compared to the mixed-gender model on the person split (see Table 5).",
"Again, RoBERTa is the most accurate model when trained in both splits, obtaining an F1-score of 87.11 and 84.87 for the male and female data respectively.",
"The transformer-based architectures are again the best performing models overall, but the difference between them and the feature-based methods is smaller than it was on the person split.",
"Table 7 shows the F1-scores obtained training our models on the location splits:",
"(i) train/dev on UK and RoW, test on US;",
"(ii) train/dev on US and RoW, test on UK; and",
"(iii) train/dev on US and UK, test on RoW.",
"In general, the best results are obtained by training on the US & UK split, while results of the models trained on the RoW & US, Gender Model M F F M LR-BOW 78.89 76.63 LR-BOW+POS 78.74 76.74 BiLSTM-Att 77.00 77.11 ULMFit 81.20 82.53 BERT 85.85 84.40 RoBERTa 87.11 84.87 XLNet 85.69 84.16 Table 6: F1-scores for parody prediction splitting by gender (Male-M, Female-F).",
"and RoW & UK splits are similar.",
"The model with the best performance trained on US & UK, and RoW & UK splits is RoBERTa with F1 scores of 87.70 and 85.99 respectively.",
"XLNet performs slightly better than RoBERTa when trained on RoW & US data split.",
"Through experiments over three different data splits, we show that all models predict parody tweets consistently above random, even if tested",
"on people unseen in training.",
"In general, we observe that the pre-trained contextual embedding based models perform best, with an average of around 10 F1 better than the linear methods.",
"From these methods, we find that RoBERTa outperforms the other methods by a small, but consistent margin, similar to past research (Liu et al., 2019).",
"Further, we see that the predictions are robust to any location or gender specific differences, as the performance on held-out locations and genders are close to when splitting by person with a maximum of < 5 F1 drop, also impacted by training on less data (e.g. female users).",
"This highlights the fact that our models capture information beyond topics or features specific to any person, gender or location and can potentially identify stylistic differences between parody and real tweets.",
"We finally perform an analysis based on our novel data set to uncover the peculiarities of political parody and understand the limits of the predictive models.",
"We first analyse the linguistic features specific of real and parody tweets.",
"For this purpose, we use the method introduced in (Schwartz et al., 2013) and used in several other analyses of user traits (Preotiuc-Pietro et al., 2017) or speech acts (Preotiuc-Pietro et al., 2019).",
"We thus rank the feature sets described in Section 4 using univariate Pearson correlation (note that for the analysis we use POS tags instead of POS n-grams).",
"Features are normalized to sum up to unit for each tweet.",
"Then, for each feature, we compute correlations independently between its distribution across posts and the label of the post (parody or not).",
"Table 8 presents the top unigrams and part-of-speech features correlated with real and parody tweets.",
"We first note that the top features related to either parody or genuine tweets are function words or related to style, as opposed to the topic.",
"This enforces that the make-up of the data set or any of its categories are not impacted by topic choice and parody detection is mostly a stylistic difference.",
"The only exception are a few hashtags related to parody accounts (e.g. #imwithme), but on a closer inspection, all of these are related to tweets from a single parody account and are thus not useful in prediction by any setup, as tweets containing these Real Parody Feature r Feature r Unigrams our 0.140 i 0.181 in 0.131 ?",
"The top features related to either category of tweets are pronouns (our' for genuine tweets, i' for parody tweets).",
"In general, we observe that parody tweets are much more personal and include possessives (me', my', i', i'm, PRP) or second person pronouns (you').",
"This indicates that parodies are more personal and direct, which is also supported by use of more @-mentions and quotation marks.",
"The real politician tweets are more impersonal and the use of our' indicates a desire to include the reader in the conversation.",
"The real politician tweets include more stop-words (e.g. prepositions, conjunctions, determin-ers), which indicate that these tweets are more well formed.",
"Conversely, the parody tweets include more contractions (don't, i'm), hinting to a less formal style (dude').",
"Politician tweets frequently use their account to promote events they participate in or are relevant to the day-to-day schedule of a politician, as hinted by several prepositions (at', on') and words (meeting', to-day') (Preotiuc-Pietro and Devlin Marier, 2019).",
"For example, this is a tweet of the U.S. Senator from Connecticut, Chris Murphy: Rudy Giuliani is in Ukraine today , meeting with Ukranian leaders on behalf of the President of the United States, representing the President's re-election",
"campaign.[...] Through part-of-speech patterns, we observe that parody accounts are more likely to use verbs in the present singular (VBZ, VBP).",
"This hints that parody tweets explicitly try to mimic direct quotes from the parodied politician in first person and using present tense verbs, while actual politician tweets are more impersonal.",
"Adverbs (RB) are used predominantly in parodies and a common sequence in parody tweets is adverbs followed by verbs (RB VB) which can be used to emphasize actions or relevant events.",
"For example, the following is a tweet of a parody account ( @ Queen Europe) of Angela Merkel: I mean, the Brexit Express literally appears to be going backwards but OK < url > 6.2 Error Analysis Finally, we perform an error analysis to examine the behavior of our best performing model (RoBERTa) and identify potential limitations of the current approaches.",
"The first example is a tweet by the former US president Barack Obama which was classified as parody while it is in fact a real tweet: Summer's almost over, Senate Leaders.",
"Similarly, the next tweet was posted by the real account of the Virginia governor, Ralph Northam:",
"Both of the tweets above contain humoristic elements and come off as confrontational, aimed at someone else which is more prevalent in parody.",
"We hypothesize that the model picked up this information to classify these tweets as parody.",
"From the previous analyses, we noticed that tweets by real politicians often convey information in a more neutral or impersonal way.",
"On the other hand, the following tweet was posted by a Mitt Romney parody account and was classified as real: It's up to you, America: do you want a repeat of the last four years, or four years staggeringly worse than the last four years?",
"This parody tweet, even though it is more opinionated, is more similar in style to a slogan or campaign speech and is therefore missclassified.",
"Lastly, the following is a tweet from former President Obama that was misclassified as parody: It's the # GimmeFive challenge, presidential style.",
"The reason behind is that there are politicians, such as Barack Obama, who often write in an informal manner and this may cause the models to misclassify this kind of tweets.",
"We presented the first study of parody using methods from computational linguistics and machine learning, a related but distinct linguistic phenomenon to irony and sarcasm.",
"Focusing on political parody in social media, we introduced a freely available large-scale data set containing a total of 131,666 English tweets from 184 real and corresponding parody accounts.",
"We defined parody prediction as a new binary classification task at a tweet level and evaluated a battery of feature-based and neural models achieving high predictive accuracy of up to 89.7% F1 on tweets from people unseen in training.",
"In the future, we plan to study more in depth the stylistic and figurative devices used for parody, extend the data set beyond the political case study and explore human behavior regarding parody, including how this is detected and diffused through social media.",
"We thank Bekah Hampson for providing early input and helping with the data annotation.",
"NA is supported by ESRC grant ES/T012714/1 and an Amazon AWS Cloud Credits for Research Award."
] | [
"abstain",
"objective",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"objective",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"Knowledge distillation has proven to be effective in model acceleration and compression.",
"It transfers knowledge from a large neural network to a small one by using the large neural network predictions as targets of the small neural network.",
"But this way ignores the knowledge inside the large neural networks, e.g., parameters.",
"Our preliminary study as well as the recent success in pre-training suggests that transferring parameters are more effective in distilling knowledge.",
"In this paper, we propose Weight Distillation to transfer the knowledge in parameters of a large neural network to a small neural network through a parameter generator.",
"On the WMT16 En-Ro, NIST12 Zh-En, and WMT14 En-De machine translation tasks, our experiments show that weight distillation learns a small network that is 1.88 2.94 faster than the large network but with competitive BLEU performance.",
"When fixing the size of the small networks, weight distillation outperforms knowledge distillation by 0.51 1.82 BLEU points.",
"Knowledge Distillation (KD) is a popular model acceleration and compression approach (Hinton et al., 2015).",
"It assumes that a lightweight network (i.e., student network, or student for short) can learn to generalize in the same way as a large network (i.e., teacher network, or teacher for short).",
"To this end, a simple method is to train the student network with predicted probabilities of the teacher network as its targets.",
"But KD has its limitation: the student network can only access the knowledge in the predictions of the teacher network.",
"It does not consider the knowledge in the teacher network parameters.",
"These parameters contain billions of entries for the teacher Authors contributed equally.",
"network to make predictions.",
"Yet in KD the student only learns from those predictions with at most thousands of categories.",
"This way results in an inferior student network, since it learns from the limited training signals.",
"Our analysis in Section 5.1 shows that KD performs better if we simply cut off parts of parameters from the teacher to initialize the student.",
"This fact implies that the knowledge in parameters is complementary to KD but missed.",
"It also agrees with the recent success in pre-training (Yang et al., 2019; Liu et al., 2019; Devlin et al., 2019), where parameters reusing plays the main role.",
"Based on this observation, a superior student is expected if all parameters in the teacher network could be exploited.",
"However, this imposes a great challenge as the student network is too small to fit in the whole teacher network.",
"To fully utilize the teacher network, we propose Weight Distillation (WD) to transfer all the parameters of the teacher network to the student network, even if they have different numbers of weight matrices and (or) these weight matrices are of different shapes.",
"We first use a parameter generator to predict the student network parameters from the teacher network parameters.",
"After that, a fine-tuning process is performed to improve the quality of the transferred parameters.",
"See Fig. 1 for a comparison of KD and WD.",
"We test the WD method in a well-tuned Transformer-based machine translation system.",
"The experiments are run on three machine translation benchmarks, including the WMT16 English-Roman (En-Ro), NIST12 Chinese-English (Zh-En), and WMT14 English-German (En-De) tasks.",
"With a similar speedup, the student network trained by WD achieves BLEU improvements of 0.51 1.82 points over KD.",
"With similar BLEU performance, the student network trained by WD is 1.11 1.39 faster than KD.",
"More interestingly, it is found that WD is very effective in improving the student net-Teacher Network Prediction Student Network Prediction GroundTruth T 1 T 2 ...",
"work when its model size is close to the teacher network.",
"On the WMT14 En-De test data, our WD-based system achieves a strong result (a BLEU score of 30.77) but is 1.88 faster than the big teacher network.",
"In this work, we choose Transformer (Vaswani et al., 2017) for study because it is one of the state-of-the-art neural models in natural language processing.",
"Transformer is a Seq2Seq model, which consists of an encoder and a decoder.",
"The encoder maps an input sequence to a sequence of continuous representations and the decoder maps these representations to an output sequence.",
"Both the encoder and the decoder are composed of an embedding layer and multiple hidden layers.",
"The decoder has an additional output layer at the end.",
"The hidden layer in the encoder consists of a self-attention sub-layer and a feed-forward network (FFN) sub-layer.",
"The decoder has an additional encoder-decoder attention sub-layer between the self-attention and the FFN sub-layers.",
"For more details, we refer the reader to (Vaswani et al., 2017).",
"KD encourages the student network to produce outputs close to the outputs of the teacher network.",
"where L is the cross-entropy loss, y T is the teacher prediction, T is the teacher parameters, y S is the student prediction and S is the student parameters.",
"In practice, Eq.",
"1 serves as a regularization term.",
"A more effective KD variant for Seq2Seq models is proposed by Kim and Rush (2016).",
"They replace the predicted distributions y T by the generated sequences from the teacher network.",
"The proposed parameter generator transforms the teacher parameters T to the student parameters",
"It is applied to the encoder and decoder separately.",
"The process is simple: it first groups weight matrices in the teacher network into different subsets, and then each subset is used to generate a weight matrix in the student network.",
"Though using all teacher weights to predict student weights is possible, its efficiency becomes an issue.",
"For instance, the number of parameters in a simple linear transformation will be the product of the numbers of entries in its input and output, where in our case these input and output contain billions of entries (from the teacher and student weights), making it intractable to keep this simple linear transformation in the memory.",
"Grouping is an effective way L t = 6 O t = 2048 I t = 512 T TeacherNetwork G r oup 1 G r oup2 O s = 2048 I s = 512 L t / L s = 3 O s = 2048 I s = 512 L t / L s = 3 L t / L s = 3 1 O t = 2048 O s = 1024 I s = 256 I t = 512 WL WTOWIE q .",
"Before the discussion, we define the weight class as a weight matrix from the network formulation, and the weight instance as the instantiation of a weight class.",
"Take the FFN for an example.",
"Its formulation is defined as: FFN( x ) = max( xW 1 + b 1 , 0) W 2 + b 2 (2) where W 1 , b 1 , W 2 and b 2 are learnable weight matrices.",
"In this case, W 1 in Eq.",
"2 defines a weight class.",
"Then all the corresponding weight matrices from FFNs in different layers of the network are the instantiations of this W 1 weight class.",
"From this sense, a weight class determines the role of its instantiations in design, e.g., extracting features for W 1 in Eq.",
"2.",
"This means that when transferring parameters, different weight classes will contribute little to each other as they have different roles.",
"Therefore, when predicting a student weight matrix, it is sufficient to consider the teacher weight matrices with the same weight class only, which makes the prediction efficient.",
"So our parameter generator groups the teacher weight matrices by the weight class they belong to, i.e., different weight classes clusters all their instantiations to form their own groups.",
"In the previous example, the W 1 weight class will form a group [ T 1 , T 2 , , TL t ] , where each T i is the W 1 weight instance in the i -th FFN and L t is the number of layers in the teacher network.",
"These weight matrices are then used to generate the W 1 weight instances in the student network.",
"The parameter generator further divides each group into smaller subsets with weight matrices from adjacent layers, because the adjacent layers function similarly (Jawahar et al., 2019) and so as their weights.",
"This way additionally makes the later transformation more light-weighted.",
"Namely, given a group of L t weight matrices, the parameter generator splits it into L s subsets, where L s is the number of layers in the student network.",
"For example, the i -th subset of the group of W 1 weight class in the previous example will be (cid:2) T ( i 1) L t /L s +1 , T ( i 1) L t /L s +2 , , T i L t /L s (cid:3) .",
"This subset is used to generate the weight matrix S i , which corresponds to W 1 weight instance in the i -th FFN of the student network.",
"Given a subset of teacher weight matrices, the parameter generator then transforms them to the desired student weight matrix, as shown in the right of Fig. 2.",
"Let us see the process of generating the weight matrix S RI s O s from the subset (cid:2) T 1 , T 2 , , TL t /L s (cid:3) with each T i RI t O t , where I s and O s are the input and output dimensions of the student weight matrix, I t and O t are the input and output dimensions of the teacher weight matrix.",
"The parameter generator first stacks all weight matrices in this subset into a tensor T RI t O t L t /L s .",
"Then it uses three learnable weight matrices, WI RI t I s , WO RO t O s , WL RL t /L s 1 , to transform T to the shape I s O s 1 sequentially: T jk T jk WI , j [1 , O t ] , k [1 , L (cid:48) ] (3) T j k T j k WO , j [1 , I s ] , k [1 , L (cid:48) ] (4) T jk T jk WL , j [1 , I s ] , k [1 , O s ] (5) where L (cid:48) = L t /L s .",
"Finally we transform T (with 1 in its shape get eliminated) to produce S , as follows: S = tanh ( T ) (cid:12) W + B (6) where W and B are learnable weight matrices of the parameter generator and have the same shape as T .",
"(cid:12) denotes the Hadamard product.",
"The tanh function provides non-linearity.",
"W and B are used to scale and shift the tanh output to any desirable value.",
"Note that we do not share WI , WO , WL , W and B when generating different S .",
"If the encoder is of the same size in both the teacher and student networks, only Eq.",
"6 is needed to map each weight matrix from the teacher network to the student network.",
"There are two training phases in WD: In the first phase (Phase 1), we train the parameter generator = { WI , WO , WL , W, B } to predict the student network S ; In the second phase (Phase 2), we fine-tune the generated student network S to obtain better results.",
"Phase 2 is necessary because the parameter generator is simply a feed-forward network with one hidden layer and thus has no enough capacity to produce a good enough student network at once.",
"A more sophisticated parameter generator is an alternative, but it is expensive due to its large input and output spaces.",
"The task of Phase 1 is to minimize the loss of the student network with parameters S predicted by the parameter generator from the teacher parameters T .",
"= arg min [(1 ) L ( y T , y ) + L ( y, y )] (7)",
"where L is the cross-entropy loss, y T is the teacher prediction, y is the prediction of the student network generated by the parameter generator , y is the ground truth, and is a hyper-parameter that balances two losses and is set to 0.5 by default.",
"The first term of Eq.",
"7 is the KD loss as in Eq.",
"1, and the second term is the standard loss.",
"S = arg min S [(1 ) L ( y T , y S ) + L ( y, y S )]",
"We evaluate our methods on the WMT16 English-Roman (En-Ro), NIST12 Chinese-English (Zh-En), and WMT14 English-German (En-De) tasks.",
"For the En-Ro task, we use the WMT16 English-Roman dataset (610K pairs).",
"We choose newsdev-2016 as the validation set and newstest-2016 as the test set.",
"For the Zh-En task, we use 1.8M sentence Chinese-English bitext provided within NIST12 OpenMT 1 .",
"We choose the evaluation data of mt06 as the validation set, and mt08 as the test set.",
"For the En-De task, we use the WMT14 English-German dataset (4.5M pairs).",
"We share the source and target vocabularies.",
"We choose newstest-2013 as the validation set and newstest-2014 as the test set.",
"For all datasets, we tokenize every sentence using the script in the Moses toolkit and segment every word into subword units using Byte-Pair Encoding (Sennrich et al., 2016).",
"The number of the BPE merge operations is set to 32K.",
"We remove sentences with more than 250 subword units (Xiao et al., 2012).",
"In addition, we evaluate the results using multi-bleu.perl .",
"LDC2000T47, LDC2000T50, LDC2003E14, LDC2005T10, LDC2002E18, LDC2007T09, LDC2004T08",
"Our baseline system is based on the open-source implementation of the Transformer model presented in Ott et al. (2019)'s work.",
"For all machine translation tasks, we experiment with the Transformer-base (base) setting.",
"We additionally run the Transformer-big (big) (Vaswani et al., 2017) and Transformer-deep (deep) (Wang et al., 2019; Zhang et al., 2020) settings on the large En-De dataset.",
"All systems consist of a 6-layer encoder and a 6-layer decoder, except that the Transformer-deep encoder has 48 layers (depth) (Li et al., 2020).",
"The embedding size (width) is set to 512 for Transformer-base/deep and 1,024 for Transformer-big.",
"The FFN hidden size equals to 4 embedding size in all settings.",
"We stop training until the model stops improving on the validation set.",
"All experiments are done on 8 NVIDIA TITIAN V GPUs with mixed-precision training (Micikevicius et al., 2018).",
"At test time, the model is decoded with a beam of width 4/6/4, a length normalization weight of 1.0/1.0/0.6 and a batch size of 64 for the En-Ro/Zh-En/En-De tasks with half-precision.",
"Note that our method can also be seen as an advanced version of Tucker Decomposition (Tucker, 1966).",
"So we also implement a baseline based on Tucker Decomposition.",
"Unfortunately, this model does not converge to a good optima and performs extremely poor.",
"For the KD baseline, we adopt Kim and Rush (2016)'s method, which has proven to be the most effective for Seq2Seq models (Kim et al., 2019).",
"It generates the pseudo data from the source side of the bilingual corpus.",
"The choices of student networks are based on the observation that the encoder has a greater impact on performance and the decoder dominates the decoding time (Kasai et al., 2020).",
"Therefore we vary the depth and width of the decoder.",
"We test two student network configu-rations: TINY halves the decoder width and uses a 1-layer decoder (the fastest WD student network with the performance close to the teacher network); SMALL uses a 2-layer decoder whose width is the same as the teacher network (the fastest KD student network with the performance close to the teacher network).",
"All hyper-parameters of WD are identical to the baseline system, except that WD uses 1/4 warmup steps in Phase 2.",
"For the parameter generator initialization, we use Glorot and Bengio (2010)'s method to initialize WI , WO , WL in Eqs.",
"3 5.",
"W and B in Eq.",
"6 are initialized to constants 1 and 0 respec-System Depth Width Test BLEU Valid Params Speed Speedup b i g Teacher 6 1024 29.11 -27.66 281M 123.92",
"tively.",
"All results are the average of three identical runs with different random seeds.",
"Table 1 shows the results of different approaches on different student networks with Transformer-base as the teacher network.",
"In all three tasks and different sized student networks, WD outperforms KD by 0.77, 1.57, and 0.66 BLEU points on En-Ro, Zh-En, and En-De on average.",
"Our method (TINY ) can obtain similar performance to the teacher network with only half of its parameters and is 2.57 2.80 faster, while KD (SMALL ) uses more parameters and has only a 1.94 2.26 speedup in the same case.",
"We attribute the success of WD to that the parameter generator uses parameters of the teacher network to provide a good initialization for the student network, as Phase 1 behaves like the initialization, and the effectiveness of a good initialization has been widely proven (Erhan et al., 2010; Mishkin and Matas, 2016).",
"Interestingly, both KD and WD surpass the teacher network when the student network size is close to the teacher network (SMALL ).",
"This is due to that KD has a form similar to data augmentation (Gordon and Duh, 2019).",
"Table 2 shows the results of larger networks, i.e., Transformer-big/deep.",
"The phenomenon here is similar to that in Table 1.",
"The acceleration on Transformer-big is more obvious than on Transformer-base (2.94 vs. 2.57 for TINY and 2.10 vs. 1.95 for SMALL in WD).",
"This is because the decoder in Transformer-big occupies a larger portion of the decoding time than in Transformer-base.",
"But the acceleration on Transformer-deep is less obvious than on Transformer-base (2.13 vs. 2.57 for TINY and 1.88 vs. 1.95 for SMALL in WD), as a deeper encoder consumes more inference time.",
"Moreover, compared with such a strong Transformer-deep teacher, WD (SMALL ) can still outperform it by 1.34 BLEU points with a 1.88 speedup, achieving the state-of-the-art.",
"To test whether KD misses knowledge in parameters, we initialize the student network with the teacher parameters.",
"If the teacher and student networks have different depths, we initialize the student network with the bottom layers of the teacher network (Sanh et al., 2019).",
"If they have different Teacher Student KD WD 120 170 220 50 51 52 53 Speed (sentences/s) BLEU 1 3 5 7 9 45 50 Learning rate ( 10 4 ) BLEU 1 2 3 4 5 50 51 52 53 #Warmup ( 10 3 ) BLEU Figure 3: Sensitivity analysis on SMALL .",
"widths, we slice the teacher weight matrices to fit the student network (Wang et al., 2020).",
"Table 3 shows that initializing the student networks with the teacher parameters improves KD, supporting our claim that knowledge in parameters is complementary to KD but missed.",
"We also see that WD outperforms this simple initialization, which implies that using all teacher parameters helps to obtain a better student.",
"The left part of Fig. 3 studies how sensitive the performance (BLEU) of different methods are to various levels of inference speedup (obtained by varying decoder depth and width).",
"It shows that WD distributes on the upper right of the figure, which means that WD produces student networks that are consistently faster and better.",
"We also investigate how sensitive different methods are to the training hyper-parameters, i.e., the learning rate and warmup steps.",
"Here we focus on Phase 2 of WD, as it directly impacts the final performance.",
"The middle part of Fig. 3 shows that WD can endure learning rates in a wide range, because its performance does not vary much.",
"However, a very large learning rate still negatively impacts the performance.",
"The right part of Fig. 3 is the opposite, where WD is more sensitive to the warmup steps than the learning rate.",
"This is because more warmup steps will run the network with a high learning rate in a longer period.",
"A high learning rate has been proven to be harmful as shown in the middle part of Fig. 3.",
"Table 4 studies which weight matrices in the teacher network are the most effective.",
"It is achieved by training the parameter generator with only the intended weight matrices and without the KD loss term in Eq.",
"7.",
"We see that using any weight matrix brings a significant improvement over the baseline.",
"This observation shows that weight matrices in the teacher network do contain abundant knowledge.",
"Among these, the encoder weight matrices produce the most significant result, which agrees with the previous study claiming that the encoder is more important than the decoder (Wang et al., 2019; Bapna et al., 2018).",
"As the previous experiments focus on a lightweight decoder for acceleration, the compression is limited as the encoder remains large.",
"To examine the effectiveness of WD on model compression, we shrink the depth and width of the encoder and decoder simultaneously.",
"As shown in Table 5, WD consistently outperforms KD by about 1 BLEU 1 2 3 4 5 6 7 8 9 10 12345678910 #Epoch (Phase",
"point under various compression ratios (ranging from 1.00 to 3.40 ).",
"Note that decreasing the width brings more significant compression.",
"This is because a large portion of the parameters is from the embedding matrices and the output projection.",
"The sizes of these matrices are determined by the width and a fixed vocabulary size.",
"Fig. 4 studies the training efficiency of WD by comparing the final BLEU scores when two training phases end in different epochs.",
"As shown in Fig. 4, Phase 1 has little impact on Phase 2, because Phase 2 converges to optimums with similar BLEU scores once Phase 1 runs for a few epochs (say, 3 epochs).",
"If we run Phase 1 longer, then Phase 2 converges faster.",
"This phenomenon suggests that Phase 1 already transfers the knowledge in the teacher parameters within the first few epochs, and the remaining epochs merely do the fine-tuning (Phase",
"2) job.",
"This implies that the training of WD is efficient, since we can just train the parameter generator for several epochs first, then fine-tune the generated network as in KD, and finally obtain a much better result than KD.",
"Though we could train the parameter generator for just a few epochs as suggested, Phase 1 is still time-consuming.",
"The reasons are two folds:",
"1) the parameter generator consumes a lot of memory and we have to resort to gradient accumulation;",
"2) the parameter generator involves many large matrix multiplications.",
"For the experiments in Table 1 and Table 2, it takes us 0.66 days for WD to finish training on average, whereas 0.55 days for the teacher network baseline and 0.31 days for both the student network baseline and KD.",
"Knowledge distillation (Hinton et al., 2015; Freitag et al., 2017) is a widely used model acceleration and compression technique (Jiao et al., 2019; Sanh et al., 2019; Liu et al., 2020).",
"It treats the network predictions as the knowledge learned by the teacher network, since these predicted distributions contain the ranking information on similarities among categories.",
"It then transfers this knowledge to the student network by enforcing the student network to have similar predictions.",
"The followed work extends this idea by providing more knowledge from different sources to the student network.",
"FitNets (Romero et al., 2015) uses not only the predictions but also the intermediate representations learned by the teacher network to supervise the student network.",
"For the Seq2Seq model, Kim and Rush (2016) proposes to use the generated sequences as the sequence-level knowledge to guide the student network training.",
"Moreover, self-knowledge distillation (Hahn and Choi, 2019) even shows that knowledge (representations) from the student network itself can improve the performance.",
"Our weight distillation, on the other hand, explores a new source of knowledge and a new way to leverage this knowledge.",
"It transfers the knowledge in parameters of the teacher network to the student network via a parameter generator.",
"Therefore, it is orthogonal to other knowledge distillation variants.",
"Transfer learning aims at transferring knowledge from a source domain to a target domain.",
"Based on what knowledge is transferred to the model in the target domain, transfer learning methods can be classified into three categories (Pan and Yang, 2010): instance-based methods reuse certain parts of the data in the source domain (Jiang and Zhai, 2007; Dai et al., 2007); feature-based methods use the representation from the model learned in the source domain as the input (Peters et al., 2018; Gao et al., 2008); parameter-based methods directly fine-tune the model learned in the source domain with the target domain data (Yang et al., 2019; Liu et al., 2019; Devlin et al., 2019).",
"Perhaps the most related work is Platanios et al. (2018)'s work.",
"Their method falls into the parameter-based category.",
"They use a universal parameter generator to share the knowledge among translation tasks.",
"This parameter generator produces a translation model from a given language-specific embedding.",
"Though we similarly employ the idea of a parameter generator, our weight distillation aims at transferring knowledge from one model to another rather than from one translation task to another.",
"Therefore our parameter generator takes a model instead of a language-specific embedding as its input and is only used once.",
"In this work, we propose weight distillation to transfer knowledge in the parameters of the teacher network to the student network.",
"It generates the student network from the teacher network via a parameter generator.",
"Our experiments on three machine translation tasks show that weight distillation consistently outperforms knowledge distillation by producing a faster and better student network.",
"This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No. 2019QY1801), and the Ministry of Science and Technology of the PRC (Nos. 2019YFF0303002 and 2020AAA0107900).",
"The authors would like to thank anonymous reviewers for their comments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"other",
"other"
] |
[
"Conditional Random Field (CRF) based neural models are among the most performant methods for solving sequence labeling problems.",
"Despite its great success, CRF has the shortcoming of occasionally generating illegal sequences of tags, e.g. sequences containing an Itag immediately after an O tag, which is forbidden by the underlying BIO tagging scheme.",
"In this work, we propose Masked Conditional Random Field (MCRF), an easy to implement variant of CRF that impose restrictions on candidate paths during both training and decoding phases.",
"We show that the proposed method thoroughly resolves this issue and brings consistent improvement over existing CRF-based models with near zero additional cost.",
"Sequence labeling problems such as named entity recognition (NER), part of speech (POS) tagging and chunking have long been considered as fundamental NLP tasks and drawn researcher's attention for many years.",
"Traditional work is based on statistical approaches such as Hidden Markov Models (Baum and Petrie, 1966) and Conditional Random Fields (Lafferty et al., 2001), where handcrafted features and task-specific resources are used.",
"With advances in deep learning, neural network based models have achieved dominance in sequence labeling tasks in an end-to-end manner.",
"Those models typically consist of a neural encoder that maps the input tokens to embeddings capturing global sequence information, and a CRF layer that models dependencies between neighboring labels.",
"Popular choices of neural encoder have been convolutional neural network (Collobert et al., 2011), and bidirectional LSTM (Huang et al., 2015).",
"Recently, pretrained language models such as ELMo (Peters et al., 2018) or BERT Corresponding author.",
"(Devlin et al., 2019) have been proven far superior as a sequence encoder, achieving state-of-the-art results on a broad range of sequence labeling tasks.",
"Most sequence labeling models adopt a BIO or BIOES tag encoding scheme (Ratinov and Roth, 2009), which forbids certain tag transitions by design.",
"Occasionally, a model may yield sequence of predicted tags that violates the rules of the scheme.",
"Such predictions, subsequently referred to as illegal paths , are erroneous and must be dealt with.",
"Existing methods rely on hand-crafted post-processing procedure to resolve this problem, typically by retaining the illegal segments and re-tagging them.",
"But as we shall show in this work, such treatment is arbitrary and leads to suboptimal performance.",
"The main contribution of this paper is to give a principled solution to the illegal path problem.",
"More precisely:",
"1. We show that in the neural-CRF framework the illegal path problem is intrinsic and may accounts for non-negligible proportion (up to 40%) of total errors.",
"To the best of our knowledge we are the first to conduct this kind of study.",
"2. We propose Masked Conditional Random Field (MCRF), a constrained version of the CRF that is by design immune to the illegal paths problem.",
"We also devise an algorithm for MCRF that incurs almost zero overhead and requires only a few lines of code to implement.",
"Further, we provide a theoretical justification of the proposed method.",
"3. We show in comprehensive experiments that MCRF performs significantly better than its CRF counterpart, and that its performance is on par with and sometimes better than more sophisticated models.",
"We achieve new State-of-the-Arts in two Chinese NER datasets.",
"and existing strategies that resolve it.",
"In Section 3 we propose MCRF, its motivation and an approximate implementation.",
"Section 4 is devoted to numerical experiments.",
"We conclude the current work in Section",
"5. 2 The illegal path problem 2.1 Problem Statement As a common practice, most sequence labeling models utilize a certain tag encoding scheme to distinguish the boundary and the type of the text segments of interest.",
"An encoding scheme makes it possible by introducing a set of tag prefixes and a set of tag transition rules.",
"For instance, the popular BIO scheme distinguishes the B eginning, the I nside and the O utside of the chunks of interest, imposing that any I tag must be preceded by a B tag or another I tag of the same type.",
"Thus O O O I-LOC I-LOC O is a forbidden sequence of tags because the transition O I-LOC directly violates the BIO scheme design.",
"Hereafter we shall refer to a sequence of tags that contains at least one illegal transition an illegal path .",
"As another example, the BIOES scheme further identifies the E nding of the text segments and the S ingleton segments, thereby introducing more transition restrictions than BIO.",
"e.g. an I tag must always be followed by an E tag of the same type, and an S tag can only be preceded by an O , an E or another S tag, etc.",
"For a comparison of the performance of the encoding schemes, we refer to (Ratinov and Roth, 2009) and references therein.",
"When training a sequence labeling model with an encoding scheme, generally it is our hope that the model should be able to learn the semantics and the transition rules of the tags from the training data.",
"However, even if the dataset is noiseless, a properly trained model may still occasionally make predictions that contains illegal transitions.",
"This is especially the case for the CRF-based models, as there is no hard mechanism built-in to enforce those rules.",
"The CRF ingredient by itself is only a soft mechanism that encourages legal transitions and penalizes illegal ones.",
"The hard transition rules might be violated when the model deems it necessary.",
"To see this, let us consider a toy corpus where every occurrence of the token America is within the context of North America, thus the token is always labeled as I-LOC .",
"Then, during training, the model may well establish the rule America I-LOC (Rule 1), among many other rules such as an I-LOC tag does not follow an O tag (Rule 2), etc.",
"Now consider the test sample Nathan left America last month, which contains a stand-alone America labeled as B-LOC .",
"During inference, as the model never saw a stand-alone America before, it must generalize.",
"If the model is more confident on Rule 1 than Rule 2, then it may yield an illegal output O O I-LOC O O .",
"The phenomenon of illegal path has already been noticed, but somehow regarded as trivial matters.",
"The output of a chunk recognizer may contain inconsistencies in the chunk tags in case a word tagged I-X follows a word tagged O or I-Y , with X and Y being different.",
"These inconsistencies can be resolved by assuming that such I-X tags starts a new chunk.",
"This simple strategy has been adopted by CoNLL2000 as a standard post-processing procedure 1 for the evaluation of the models' performance, and gain its popularity ever since.",
"We argue that such treatment is not only arbitrary, but also suboptimal.",
"In preliminary experiments we have studied the impact of the illegal path problem using the BERT-CRF model for a number of tasks and datasets.",
"Our findings (see Table 1) suggest that although the illegal segments only account for a small fraction (typically around 1%) of total predicted segments, they constitute approximately a quarter of the false positives.",
"Moreover, we found that only a few illegal segments are actually true positives.",
"This raises the question of whether retaining the illegal segments is beneficial.",
"As a matter of fact, as we will subsequently show, a much higher macro F1-score can be obtained if we simply discard every illegal segments.",
"Although the strategy of discarding the illegal segments may be superior to that of (Sang et al., 2000), it is nonetheless a hand-crafted, crude rule that lacks some flexibility.",
"To see this, let us take the example in Fig.",
"1. The prediction for text segment World Boxing Council is ( B-MISC , I-ORG , I-ORG ), which contains an illegal transition B-MISC I-ORG .",
"Clearly, neither of the post-processing strategies discussed above is capable of resolving the problem.",
"Ideally, an optimal solution should convert the predicted tags to either ( B-MISC , I-MISC , I-MISC ) or ( B-ORG , I-ORG , I-ORG ), whichever is more likely.",
"This is exactly the starting point of MCRF, which we introduce in the next section.",
"In this section we introduce the motivation and implementation of MCRF.",
"We first go over the 1 We are referring to the conlleval script, available from https://www.clips.uantwerpen.be/ conll2000/chunking/ .",
"conventional neural-based CRF models in Section 3.1.",
"We then introduce MCRF in Section 3.2.",
"Its implementation will be given in Section 3.3.",
"Conventional neural CRF models typically consist of a neural network and a CRF layer.",
"The neural network component serves as an encoder that usually first maps the input sequence of tokens to a sequence of token encodings, which is then transformed (e.g. via a linear layer) into a sequence of token logits .",
"Each logit therein models the emission scores of the underlying token.",
"The CRF component introduces a transition matrix that models the transition score from tag i to tag j for any two consecutive tokens.",
"By aggregating the emission scores and the transition scores, deep CRF models assign a score for each possible sequence of tags.",
"Before going any further, let us introduce some notations first.",
"In the sequel, we denote by x = { x 1 , x 2 , . . . , x T } a sequence of input tokens, by y = { y 1 , . . . , y T } their ground truth tags and by l = { l 1 , . . . , l T } the logits generated by the encoder network of the model.",
"Let d be the number of distinct tags and denote by [ d ] := { 1 , . . . , d } the set of tag indices.",
"Then y i [ d ] and l i R d for 1 i T .",
"We denote by W the set of all trainable weights in the encoder network, and by A = ( a ij ) R d d the transition matrix introduced by the CRF, where a ij is the transition score from tag i to tag j .",
"For convenience we call a sequence of tags a path .",
"For given input x , encoder weights W and transition matrix A , we define the score of a path p = { n 1 , . . . , n T } as s ( p, x, W, A ) = T (cid:88) i =1 l i,n i + T 1 (cid:88) i =1 a n i ,n i +1 , (1) where l i,j denotes the j -th entry of l i .",
"Let S be the set of all training samples, and P be the set of all possible paths.",
"Then the loss function of neural CRF model is the average of negative log-likelihood over S : L ( W, A ) = 1 |S| (cid:88) ( x,y ) S log exp s ( y, x ) (cid:80) p P exp s ( p, x ) (2) where we have omitted the dependence of s ( , ) on ( W, A ) for conciseness.",
"One can easily minimize L ( W, A ) using any popular first-order methods such as SGD or Adam.",
"x test is the path having the highest score, i.e. y opt = argmax p P s ( p, x test , W opt , A opt ) .",
"The decoding problem can be efficiently solved by the Viterbi algorithm.",
"Our major concern on conventional neural CRF models is that no hard mechanism exists to enforce the transition rule, resulting in occasional occurrence of illegal predictions.",
"Our solution to this problem is very simple.",
"Denote by I the set of all illegal paths.",
"We propose to constrain the path space in the CRF model to the space of all legal paths P / I , instead of the entire space of all possible paths P .",
"To this end,",
"1. during training, the normalization term in (2) should be the sum of the exponential scores of the legal paths;",
"2. during decoding, the optimal path should be searched over the space of all legal paths.",
"which is obtained by replacing the P in (2) by P / I",
"Similarly, the second modification leads to y (cid:48) opt = argmax p P / I s ( p, x test , W (cid:48) opt , A (cid:48) opt ) (5) obtained by replacing the P in (3) by P / I , where ( W (cid:48) opt , A (cid:48) opt ) is a minimizer of (4).",
"Note that the decoding objective (5) alone is enough to guarantee the complete elimination of illegal paths.",
"However, this would create a mismatch between the training and the inference, as the model would attribute non-zero probability mass to the ensemble of the illegal paths.",
"In Section 4.1, we will see that a naive solution based on (5) alone leads to suboptimal performance compared to a proper solution based on both (4) and (5).",
"Although in principle it is possible to directly minimize (4), thanks to the following proposition we can also achieve this via reusing the existing tools originally designed for minimizing (2), thereby saving us from making extra engineering efforts.",
"Proposition",
"1. Denote by [ d ] [ d ] the set of all illegal transitions.",
"For a given transition matrix A , we denote by A ( c ) = (cid:0) a ij ( c ) (cid:1) the masked transition matrix of A defined as (see Fig. 2) a ij ( c ) = (cid:26) c if ( i, j ) , a ij otherwise , (6) where c (cid:28) 0 is the transition mask .",
"Then for arbitrary model weights ( W 0 , A 0 ) , we have lim c L ( W 0 , A 0 ( c )) = L (cid:48) ( W 0 , A 0 ) (7) lim c WL ( W 0 , A 0 ( c )) = WL (cid:48) ( W 0 , A 0 ) (8) and for all ( i, j ) lim c a ij L ( W 0 , A 0 ( c )) = a ij L (cid:48) ( W 0 , A 0 ) .",
"(9) Moreover, for negatively large enough c we have argmax p P s ( p, x test , W, A ) = argmax p P / I s ( p, x test , W, A ) Proof.",
"See Appendix.",
"Proposition 1 states that for any given model state ( W, A ) , if we mask the entries of A that correspond to illegal transitions (see Figure 2) by a negatively large enough constant c , then the two objectives (2) and (4), as well as their gradients, can be arbitrarily close.",
"This suggests that the task of minimizing (4) can be achieved via minimizing (2) combined with keeping A masked (i.e. making a ij = c constant for all ( i, j ) ) throughout the optimization process.",
"Intuitively, the purpose of transition masking is to penalize the illegal transitions in such a way that they will never be selected during the Viterbi decoding, and the illegal paths as a whole only constitutes negligible probability mass during training.",
"In this section, we run a series of experiments 2 to evaluate the performance of MCRF.",
"The datasets used in our experiments are listed as follows: 2 Our code is available on https://github.com/ DandyQi/MaskedCRF .",
"Algorithm 1 (MCRF) 1: Input: Library for computing the gradients of conventional CRF loss (2), training dataset S , stopping criterion C , set of illegal transitions , masking constant c (cid:28) 0 .",
"2: Initialize: model weight W and tag transition matrix A = ( a ij ) .",
"3: while C is not met do 4: Sample a mini-batch from S 5: Update W and A based on batch gradient 6: for ( i, j ) do 7: a ij c (cid:46) maintain the mask 8: end for 9: end while 10: Output: Optimized W and A .",
"Chinese NER: OntoNotes 4.0 (Weischedel et al., 2011), MSRA (Levow, 2006), Weibo (Peng and Dredze, 2015) and Resume (Zhang and Yang, 2018).",
"English NER: CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) Slot Filling: ATIS (Hemphill et al., 1990) and SNIPS (Coucke et al., 2018) Chunking: CoNLL2000 (Sang et al., 2000) The statistics of these datasets are summarized in Table",
"2. dataset task lan.",
"For Chinese NER tasks, we use the public-available 3 BERTBASE as the pretrained model.",
"For English NER and Chunking tasks, we use the cased version of BERTBASE model.",
"We use uncased BERTBASE for English slot filling tasks.",
"In preliminary experiments, we found out that the discriminative fine-tuning approach (Howard and Ruder, 2018) yields slightly better results than 3 https://github.com/google-research/ bert Resume MSRA Ontonotes Weibo Lattice (Zhang and Yang, 2018) 94.5 93.2 73.9 58.8 Glyce (Meng et al., 2019) 96.5 95.5 81.6 67.6 SoftLexicon (Ma et al., 2020) 96.1 95.4 82.8 70.5 FLAT (Li et al., 2020a) 95.9 96.1 81.8 68.6 MRC (Li et al., 2020b) -95.7 82.1 DSC (Li et al., 2020c) -96.7 84.5 -BERT-tagger-retain 95.7 (94.7) 94.0 (92.7) 78.1 (76.8) 67.7 (65.3) BERT-tagger-discard 96.2 (95.5) 94.6 (93.6) 80.7 (79.2) 69.7 (67.5) BERT-CRF-retain 95.9 (94.8) 94.2 (93.7) 81.8 (81.2) 70.8 (64.5) BERT-CRF-discard 97.2 (96.6) 95.5 (94.9) 83.1 (82.4) 71.9 (65.7) BERT-MCRF-decoding 97.3 (96.6) 95.6 (95.0) 83.2 (82.5) 72.2 (65.8) BERT-MCRF-training 97.6 (96.9 ) 95.9 (95.3) 83.7 (82.7) 72.4 (66.5) Table 3: Results on Chinese NER datasets.",
"the standard fine-tuning as recommended by (De-vlin et al., 2019).",
"In discriminative fine-tuning, one uses different learning rates for each layer.",
"Let r L be the learning rate for the last ( L -th) layer and be the decay factor.",
"Then the learning rate for the ( L n ) -th layer is given by r L n = r L n .",
"In our experiments, we use r L { 1 e 4 , 5 e 5 } and { 1 / 2 , 2 / 3 } depending on the dataset.",
"The standard Adam optimizer is used throughout, and the mini-batch size is fixed to be 32.",
"We always fine-tune for 5 epochs or 10000 iterations, whichever is longer.",
"In this section we present the MCRF results on sequence labeling datasets.",
"The baseline models are the following: BERT-tagger: The output of the final hidden representation for to each token is fed into a classification layer over the label set without using CRF.",
"This is the approach recommended in (Devlin et al., 2019).",
"BERT-CRF: BERT followed by a CRF layer, as is described in Section 3.1.",
"We use the following strategies to handle the illegal segments (See Table 4 for an example): retain: Keep and retag the illegal segments.",
"This strategy agrees with (Sang et al., 2000).",
"discard: Discard the illegal segments completely.",
"MCRF-decoding: A naive version of MCRF that does masking only in decoding.",
"The training process is the same as that in conventional CRF.",
"MCRF-training: The proper MCRF approach proposed in this work.",
"The masking is maintained in the training, as is described in Section 3.3.",
"We also refer to it as the MCRF for simplicity.",
"For each dataset and each model we ran the training 10 times with different random initializations and selected the model that performed best on the dev set for each run.",
"We report the best and the average test F1-scores as the final results.",
"If the dataset does not provide an official development set, we randomly split the training set and use 10% of the samples as the dev set.",
"The results on Chinese NER tasks are presented in Table",
"3. It can be seen that the MCRF-training approach significantly outperforms all baseline models and establishes new State-of-the-Arts for Resume and Weibo datasets.",
"From these results we can assert that the improvement brought by MCRF is mainly due to the effect of masking in training, not in decoding.",
"Besides, we notice that the discard strategy substantially outperforms the retain strategy, which agrees with the statistics presented in Table",
"1. We also plotted in Fig. 3 the loss curves of CRF and MCRF on the development set of MSRA.",
"It can be clearly seen that MCRF incurs a much lower loss during training.",
"This confirms our hypothesis that the CRF model attributes non-zero probability mass to the ensemble of the illegal paths, as otherwise the denominators in (4) and in (2) would have been equal, and in that case the loss curves of CRF and MCRF would have converged to the same level.",
"Note that some of the results listed in Table 3 are based on models that utilize additional resources.",
"Zhang and Yang (2018) and Ma et al. (2020) utilized Chinese lexicon features to enrich the token representations.",
"Meng et al. (2019) combined Chinese glyph information with BERT pre-training.",
"In contrast, the proposed MCRF approach is simple yet performant.",
"It achieves comparable or better results without relying on additional resources.",
"One of the main features of the AITS and SNIPS datasets is the large number of slot labels (79 and 39 respectively) with relatively small training set (4.5k and 13k respectively).",
"This requires the sequence labeling model learn the transition rules in a sample-efficient manner.",
"Both ATIS and SNIPS provide an intent label for each utterance in the datasets, but in our experiments we did not use this information and rely solely on the slot labels.",
"The results are reported in Table",
"5. It can be seen that MCRF-training outperforms the baseline models and achieves competitive results compared to previous published results.",
"The results on CoNLL2000 chunking task are reported in Table.",
"6. The proposed MCRF-training outperforms the CRF baseline by 0.4 in F1-score.",
"In this section, we investigate the influence of various factors that may impact the performance of MCRF.",
"In particular, we are interested in the quantity MCRF gain , which we denote by , defined simple as the difference of F1-score of MCRF-training and that of the conventional CRF (with either retain or discard strategy).",
"In the previous experiments we have always used the BIO scheme.",
"It is of interest to explore the performance of MCRF under other tagging schemes such as BIOES.",
"The BIOES scheme is considered retain discard MCRF 93 94 95 96 97 98 resume BIO BIOES retain discard MCRF 64 65 66 67 68 69 70 71 72 weibo retain discard MCRF 77 78 79 80 81 82 ontonotes retain discard MCRF 93.0 93.5 94.0 94.5 95.0 95.5 96.0 96.5 msra Figure 4: Ablation over the tagging scheme (BIO vs. BIOES).",
"more expressive than BIO as it introduces more labels and more transition restrictions.",
"We have re-run the experiments in Section 4.1.1 using the BIOES scheme.",
"Our results are reported in Fig. 4 and Table",
"7. It is clearly seen that under the BIOES scheme MCRF still always outperforms the CRF baselines.",
"Note that compared to the case under BIO scheme, the MCRF gain is less significant against the CRF-retain baseline, but larger against CRF-discard.",
"One may hypothesize that the occurrence of illegal paths might be due to the scarcity of training data, i.e. a model should be less prone to illegal paths if the training dataset is larger.",
"To test this hypothesis, we randomly sample 10% of the training data from MSRA and Ontonotes, creating a smaller version of the respective dataset.",
"We compare the proportion of the illegal segments produced by BERT-CRF trained on the original dataset with the one trained on the smaller dataset.",
"We also report the performance gain brought by MCRF in these two scenarios.",
"Our findings are summarized in Table",
"8. As can be seen from the table, the models trained with fewer data do yield slightly more illegal segments, but the MCRF gains under the two scenarios are close.",
"So far we have experimented with BERT-based models.",
"Now we explore effect of neural architecture.",
"We trained a number of models on CoNLL2003 with varying encoder architectures.",
"The key components are listed as follows: ELMo: pretrained language model 4 that serves as an sequence encoder.",
"CNN: CNN-based character embedding layer, with weights extracted from pretrained ELMo.",
"It is used to generate word embeddings for arbitrary input tokens.",
"LSTMn : n -layer bidirectional LSTM with hidden dimension h = 200 .",
"The results of our experiments are given in Table",
"9. We observe that the encoder architecture has a large impact on the occurrence of illegal paths, and the BERT-based models appear to generate much more illegal paths than ELMo-based ones.",
"This is probably due to the fact that transformer-encoders are not sequential in nature.",
"A further study is needed to investigate this phenomenon, but it is beyond the scope of the current work.",
"We also notice that 4 Model downloaded from https://github.com/ allenai/bilm-tf the MCRF gain seems to be positively correlated with the proportion of the illegal paths generated by the underlying model.",
"This is expected, since the transition-blocking mechanism of MCRF will (almost) not take effect if the most probable path estimated by the underlying CRF model is already legal.",
"Some models are able to solve sequence labeling tasks without relying on BIO/BIOES type of tagging scheme to distinguish the boundary and the type of the text segments of interest, thus do not suffer from the illegal path problems.",
"For instance, Semi-Markov CRF (Sarawagi and Cohen, 2005) uses an additional loop to search for the segment spans, and directly yields a sequence of segments along with their type.",
"The downside of Semi-Markov CRF is that it incurs a higher time complexity compared to the conventional CRF approach.",
"Recently, Li et al. (2020b) proposed a Machine Learning Comprehension (MRC) framework to solve NER tasks.",
"Their model uses two separate binary classifiers to predict whether each token is the start or end of an entity.",
"They introduced an additional module to determine which start and end tokens should be matched.",
"We notice that the CRF implemented in PyTorch-Struct (Rush, 2020) has a different interface than usual CRF libraries in that it takes not two tensors for emission and transition scores, but rather one score tensor of the shape (batch size, sentence length, number of tags, number of tags).",
"This allows one to incorporate even more prior knowledge in the structured prediction by setting a constraint mask as a function of not only a pair of tags, but also words on which the tags are assigned.",
"Such feature may be exploited in future work.",
"Finally, we acknowledge that the naive version of MCRF that does constrained decoding has already been implemented in AllenNLP 5 (Gardner et al., 2018).",
"As shown in Section 4.1, such approach is suboptimal compared to the proposed MCRF-training method.",
"Our major contribution is the proposal of MCRF, a constrained variant of CRF that masks illegal transitions during CRF training, eliminating illegal outcomes in a principled way.",
"We have justified MCRF from a theoretical perspective, and shown empirically in a number of datasets that MCRF consistently outperforms the conventional CRF.",
"As MCRF is easy to implement and incurs zero additional overhead, we advocate always using MCRF instead of CRF when applicable.",
"We thank all anonymous reviewers for their valuable comments.",
"We also thank Qin Bin and Wang Gang for their support.",
"This work is also supported by the National Natural Science Foundation of China (NSFC No. 61701547)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"objective",
"objective",
"method",
"objective",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"objective",
"result",
"method",
"other",
"other",
"other"
] |
[
"Neural machine translation (NMT) has proven to be facilitated by curriculum learning which presents examples in an easy-to-hard order at different training stages.",
"The keys lie in the assessment of data difficulty and model competence.",
"We propose uncertainty-aware curriculum learning , which is motivated by the intuition that: 1) the higher the uncertainty in a translation pair, the more complex and rarer the information it contains; and 2) the end of the decline in model uncertainty indicates the completeness of current training stage.",
"Specifically, we serve cross-entropy of an example as its data difficulty and exploit the variance of distributions over the weights of the network to present the model uncertainty.",
"Extensive experiments on various translation tasks reveal that our approach outperforms the strong baseline and related methods on both translation quality and convergence speed.",
"Quantitative analyses reveal that the proposed strategy offers NMT the ability to automatically govern its learning schedule.",
"Neural machine translation (NMT) has advanced the state-of-the-art on various translation tasks (Hassan et al., 2018; Chen et al., 2018).",
"A well-performed NMT is trained using an end-to-end framework (Sutskever et al., 2014) that profits from large-scale training corpus and various optimization tricks (Ott et al., 2018; Xu et al., 2019; Li et al., 2020).",
"These techniques boost the translation quality, in the meanwhile, leading to massive hyper-parameters to be tuned and expensive development costs (Popel and Bojar, 2018).",
"Recent studies (Zhang et al., 2018, 2019; Platanios et al., 2019; Liu et al., 2020) have proven that feeding training examples in a meaningful order rather than considering them randomly can accelerate the model Corresponding author Figure 1: The change of confidence in an area during the learning.",
"convergence thus reducing the computational cost.",
"Such methods refer to curriculum learning (CL, Bengio et al., 2009), in which a model is taught as a human from simple concepts to complex ones.",
"There exists two open problems in the integration of CL with NMT, i.e. the assessment of data difficulty and the programme of learning schedule.",
"Considering the former, prior studies (Kocmi and Bojar, 2017; Platanios et al., 2019) intuitively treat human linguistic knowledge, e.g. either sentence length or word rarity, as the measure of difficulty.",
"Nevertheless, each linguistic feature merely considers an aspect of sentences which fails to fully cope with the data difficulty for a model (Jiang et al., 2015).",
"For the latter, existing methods pre-define the duration of curriculum based on an assumption that the model confidence monotonically increases with the training (Zhang et al., 2018, 2019).",
"We argue that this assumption does not conform to human behavior, i.e. Dunning-Kruger Curve (Fig-ure 1, Kruger and Dunning, 1999), and limits the adaptability and flexibility of curriculum learning.",
"In response to these problems, we propose to strengthen CL for NMT through determining the data difficulty and scheduling the curriculum according to model ability rather than human intuitions.",
"We introduce a novel uncertainty-aware curriculum learning framework, which serves uncertainty as its principle to order the input examples and control the duration of each training stage.",
"Specifically, we measure the data uncertainty of a sentence pair according to its joint distribution that is estimated by a language model pre-trained on the training corpus.",
"The intuition behind is that the higher the cross-entropy and uncertainty have in an example, the harder it is to learn and translate (Brown et al., 1990).",
"Besides, we calculate the model uncertainty using the variance of the distribution over the network presented by Bayesian neural networks (Buntine and Weigend, 1991).",
"Accordingly, the model uncertainty reflects whether our model can best describe the data distribution (Xiao and Wang, 2019), and the stop of its decline indicates the completeness of the current training stage.",
"One principle in our work is to maintain the simplicity and efficiency in CL.",
"Several researchers may doubt that the use of Bayesian inference over the training corpus may significantly raise the computational cost.",
"To this end, we apply Monte Carlo Dropout (Gal and Ghahramani, 2016) to approximate Bayesian inference.",
"Besides, we categorize examples into subsets according to their difficulty, which is then be progressively added into the training set at different training stages, namely baby step (Cirik et al., 2016).",
"The model uncertainty can be calculated after each epoch using the samples randomly selected from the current training set, thus avoiding affect training efficiency.",
"We evaluate the effectiveness of our methods on WMT16 English-to-German, IWSLT15 English-to-Vietnamese, and WMT17 Chinese-to-English translation tasks.",
"The experimental results demonstrate that the proposed model consistently improves translation performance over the strong TRANSFORMER (Vaswani et al., 2017) baseline and related methods that exploit CL into NMT.",
"Extensive analyses confirm that: 1) our approach significantly speeds up the model convergence; 2) using data uncertainty to present the translation difficulty surpasses its sentence length and word rarity counterparts, and this superiority can be further expanded by exploiting a language model that is trained on large-scale external data, i.e. BERT (Devlin et al., 2019); 3) the model uncertainty performs a self-adaptive manner to assess the model competence regardless the pre-defined patterns.",
"NMT uses a single, large neural network to build translation model, aiming to maximize the conditional distribution of sentence pairs using parallel corpus (Sutskever et al., 2014; Bahdanau et al., 2015; Yang et al., 2019; Wan et al., 2020).",
"Formally, the learning objective is to minimize the following loss function over the training corpus D = { ( x n , y n ) } Nn =1 , with the size being N : L = E ( x n ,y n ) D [ log P ( y n | x n ; )] (1) where x n and y n indicate the source and target sides of the n -th example in training data.",
"denotes the trainable parameters of NMT model.",
"During the training, the examples randomly feed to vanilla model, regardless of their order, making the development of a well-performed NMT system time-consuming (Sennrich et al., 2016a; Popel and Bojar, 2018; Yang et al., 2020).",
"An alternative way to speed up the training process and boost the performance of a neural network is to exploit CL (Elman, 1993; Krueger and Dayan, 2009; Bengio et al., 2009).",
"Related Work on Exploring CL Several studies have shown the effectiveness of CL in the field of computer vision (Sarafianos et al., 2017; Wang et al., 2019c; Guo et al., 2018), as well as a range of NLP tasks, including math word problem (Zaremba and Sutskever, 2014), sentiment analysis (Cirik et al., 2016), and natural answer generation (Liu et al., 2018).",
"They point out that CL can solve the problem in some tasks that is hard to train through presenting training data in an easy-to-hard order.",
"Kocmi and Bojar (2017) first apply CL into NMT and suggest two sticking points, i.e. data difficulty and learning schedule.",
"Partially inspired by their findings, Thompson et al. (2018), Zhang et al. (2019), Wang et al. (2019b) and Kumar et al. succeed on handling the problem in domain translation.",
"Concerning the general translation tasks, Zhang et al. (2018) investigate a variety of difficulty criteria based on human intuition, e.g. sentence length and word rarity, which show distinct performance across language pairs and model settings.",
"While Platanios et al. (2019) pay attention to the schedule that determines the duration of each curriculum.",
"They introduce monotonically increased curves, e.g. either linear or square root, to represent the changes of the model ability across the training process.",
"These early successes presuppose the limited heuristic knowledge on both the data difficulty and the tendency of model competence.",
"Motivation As mentioned above, one of the main challenges in CL is the identification of easy and hard samples which is onerous and conceptually difficult in translation community.",
"For example, neither the sentence length or word rarity can fully express the complexity of a translation.",
"Another problem in CL is the programme of learning schedule, in which the patterns pre-defined by humans lack in adaptability and lead to massive additional hyper-parameters that have to be tuned.",
"Even if these artificial supervisions are feasible, what is intuitively easy and competent for a human may not match that for neural networks (Ku-mar et al., 2010; Jiang et al., 2015).",
"To this end, we approach these problems from the model perspective.",
"In this section, we first introduce data uncertainty to quantify the translation difficulty for each training example (Section 3.1).",
"Then, we propose to predict the model uncertainty at the training time which is a self-adaptive manner to govern curriculum by the model itself (Sec-tion 3.2).",
"Finally, we describe how to exploit the proposed two factors in NMT training (Section 3.3).",
"The proposed framework is illustrated in Figure",
"2. 3.1 Data Uncertainty In order to estimate the data uncertainty, we propose to pre-train a language model (LM) over the monolingual sentences from the parallel training corpus D to account the cross-entropy of each sentence.",
"The intuition behind this is that the higher cross-entropy and perplexity represents an uncertain sentence, since it is hard to be generated and determined by the LM (Brown et al., 1990).",
"This provides an explainable and comprehensive way to evaluate the difficulty of an example.",
"Accordingly, we assign several types of data uncertainty, which can be used individually or combined together: Source Difficulty The difficulty of a source sentence affects the language understanding of NMT model.",
"Inspired by Zhang et al. (2018) and Platanios et al. (2019), an interpretable way is to use the source difficulty to approximate the complexity of a sentence pair.",
"Given the source sentence x n , we can calculate the source uncertainty u data ( x n ) by Figure 2: Illustration of the proposed uncertainty-aware curriculum learning framework.",
"Target Difficulty Since the complex and rare target sentence directly makes NMT have a harder time in generating the sentence (Kocmi and Bojar, 2017), another natural choice is to apply the target uncertainty to present the data difficulty.",
"Analogous to the source side, the target uncertainty u data ( y n ) is: u data ( y n ) = 1 JJ (cid:88) j =1 log P ( y ni | y n<i ) (3) where J denotes the length of target sentence y n .",
"Joint Difficulty Intuitively, the complexity of a translation pair should be contributed by two sides, thus reflecting the difficulty of both understanding and generating processes in NMT.",
"We can combine the concepts of source and target uncertainty: u data ( x n , y n ) = u data ( x n ) + u data ( y n ) (4) To our best knowledge, due to the lack of interpretability on scoring the joint difficulty in a sentence pair, all the existing methods that exploit CL into NMT merely measure data difficulty on either source or target.",
"Our method provides an alternative way to tackle this problem with the concept of joint probability distribution.",
"We expect the joint uncertainty to further improve the performance.",
"In this paper, we examine three widely used LMs to appraise the data uncertainty: a statistical n gram LM KENLM (Heafield, 2011), a neural LM RNNLM (Mikolov et al., 2010), and a multilingual neural LM that trained on billions of external sentences BERT (Devlin et al., 2019).",
"Note that, the modeling of data uncertainty is not limited to our approach.",
"It can be also quantified by other manners, e.g. estimating the data likelihood with Monte Carlo approximation (Der Kiureghian and Ditlevsen, 2009) or validating the translation distribution using a well-trained NMT model (Zhang et al., 2018).",
"In contrast to these time-consuming techniques, LM marginally increases the computational cost and easy to be applied, conforming to the original motivation of CL. 3.2 Model Uncertainty Moreover, we propose to regulate the duration of each curriculum by quantifying the model uncertainty rather than presetting before the training.",
"Model uncertainty, which is also known as epistemic uncertainty (Kendall and Gal, 2017), can be used to measure whether the model parameters are able to best describe the data distribution (Dong et al., 2018; Xiao and Wang, 2019).",
"In our work, a small score of model uncertainty indicates the model is confident that the current training data has been well learned (Wang et al., 2019a), and the termination of the decline in scores represents the signal to shift to the next curriculum stage.",
"The model uncertainty can be quantified by Bayesian neural networks (Buntine and Weigend, 1991; Neal, 1996), which place a probabilistic distribution over the model parameters on constant input and output data, and serve its variance as the uncertainty.",
"For reasons of computational efficiency, we adopt widely used Monte Carlo Dropout (Gal and Ghahramani, 2016) to approximate Bayesian inference.",
"Given a dataset used to examine the model uncertainty DU = { ( x m , y m ) } Mm =1 which consists of M sentence pairs, we perform K passes of forward propagation through the NMT model.",
"1 In each pass, part of neurons in network are randomly deactivated.",
"Eventually, we yield K samples on model parameters { 1 , , K } and corresponding translation probabilities.",
"The model 1 K is empirically set to 10 in our work.",
"uncertainty on DU can be formally expressed as: u mod ( ) = 1 MM (cid:88) m =1 Var (cid:104) P ( y m | x m , k ) (cid:105) K k =1 (5) Here, Var [ ] denotes the variance of a distribution which calculated following the common setting in Dong et al. (2018) and Xiao and Wang (2019).",
"In this way, the model is offered the ability to determine its model competence by itself.",
"In this work, we adopt a widely used CL strategy called baby step (Cirik et al., 2016; Zhang et al., 2018) to arrange training data and organize the training process.",
"Specifically, the whole training set D is divided into different buckets, i.e. steps {D 1 , , DT } , in which those examples with similar data uncertainty scores u data are categorized into the same bucket.",
"The training starts from the step that consists of examples with the lowest uncertainty.",
"After that, data in the next step is aggregated to the current training dataset C when the model uncertainty ceases its reduction.",
"Following existing studies (Platanios et al., 2019; Kocmi and Bojar, 2017) that the model should be trained from easy samples to hard ones, we schedule the curriculum with the order of increasing uncertainty.",
"2 To avoid overfitting and useless training, partially inspired by early stopping, we treat the third time when current model uncertainty is higher than the score 2 Our preliminary experiments show that the model with a reverse order does not gain any performance improvement to the baseline model.",
"evaluated last time as the sign that the model is at the level of expert for the current curriculum.",
"The hyperparameter of stopping criterion is important.",
"A small value makes the training to easily enter the next baby step, and the current baby step fails to be fully trained, while a large value reduces training efficiency and cause over-fitting.",
"Considering that performing Monte Carlo Dropout over the NMT model on all the examples in C is time-consuming, while the superiority of CL lies in its ability to accelerate the model convergence.",
"In order to maintain this advantage, we propose to estimate the model uncertainty after each epoch rather than every model updating steps.",
"Furthermore, we randomly extract M = 1 k samples from current training dataset C as DU .",
"Then, the evaluation of model uncertainty is conducted on DU to mirror the confidence over the current curriculum.",
"Therefore, our approach reserves the efficiency in CL, in the meanwhile, guiding the duration of each curriculum in a self-adaptive fashion.",
"The overall procedure is described in Algorithm",
"1. 4 Experiments We examine our method upon advanced TRANSFORMER (Vaswani et al., 2017) and conduct experiments on widely used translation tasks: IWSLT15 English-to-Vietnamese (En Vi), WMT16 English-to-German (En De) and WMT17 Chinese-to-English (Zh En).",
"3 4.1 Experimental Setting Dataset To compare with the results reported by previous work (Platanios et al., 2019), we evaluate our methods on IWSLT15 En Vi and WMT16 En De translation tasks.",
"Our models are trained using all of the available parallel corpora from the IWSLT15 and WMT16 datasets, consisting of 133k and 4.5M sentence pairs.",
"In order to verify the universality of the proposed method, we also conduct experiments on the large-scale training corpus, i.e. WMT17 Zh En, in which, 20M examples are extracted as the training set.",
"We use the standard validation and test sets provided in each translation task.",
"The Chinese sentences are segmented by the word segmentation toolkit Jieba, 4 while the sentences in other languages are tokenized using the scripts provided in Moses.",
"5 All the data are 3 Our code is available at https://github.com/ NLP2CT/ua-cl-nmt 4 https://github.com/fxshy/jieba 5 https://github.com/mosesdecoder processed by byte-pair encoding to alleviate the Out-of-Vocabulary problem (Sennrich et al., 2016b) with 32K merge operations for both language pairs.",
"The case-sensitive 4-gram NIST BLEU score (Pap-ineni et al., 2002) is used as the evaluation metric.",
"Model Our experiments are based on TRANSFORMER (Vaswani et al., 2017) and the compared methods are re-implemented on top of our in-house codes.",
"Considering the small-scale translation task En Vi, we use the setting same as Platanios et al. (2019) in which the dropout ratio is set to 0.3 and each iteration batch consists of 4,096 tokens.",
"For translation models on En De and Zh En, we follow the common Base setting in Vaswani et al. (2017) except that we set dropout ratio to 0.1 and train models with a total batch of 32,768 tokens.",
"As to LMs, we train 4-gram KENLM (Heafield, 2011) 6 and 2 layers RNNLM (Mikolov et al., 2010) with dimensionality being 200 on monolingual side of each training corpus.",
"Besides, we also score sentences using multilingual BERT (Devlin et al., 2019) that pre-trained on external data with Base setting for comparison.",
"We investigate the following methods: LENGTH measures data difficulty with sentence length (Kocmi and Bojar, 2017).",
"RARITY measures data difficulty with word rarity (Zhang et al., 2018).",
"DATA-U represents the proposed method which measures difficulty with data uncertainty on source sentence (src), target sentence (trg), and both sides (joint).",
"SQRT governs curriculum with the square root model competence (Platanios et al., 2019).",
"MOD-U governs curriculum with the proposed model uncertainty.",
"In our experiments, we set baby steps to 4 as default.",
"In this section, we evaluate the effectiveness of different components in CL on the En De task.",
"In the first two series of experiments, we investigate the effects of different measures of data difficulty and model competence.",
"Then, we check how the baby steps applied in our training strategy affect the performance.",
"The results are concluded in Table",
"1. 6 https://github.com/kpu/kenlm Model SQRTMOD-U TRANSFORMER 32.76 LENGTH 32.80 33.23 RARITY 32.84 33.39 KENLM (src) 33.03 33.64 DATA-UKENLM (trg) 33.09 33.69 KENLM (joint) 33.15 33.85 RNNLM (joint) 33.17 33.73 BERT (joint) 33.35 33.93 Table 1: Ablation study of various measures with respect to data difficulty and model competence for CL in NMT.",
"Effectiveness of Data Uncertainty We first compare different difficulty measures in CL.",
"Considering the existing methods, both the LENGTH and RARITY yield improvements over the baseline model, which is consistent with prior findings in Kocmi and Bojar (2017), Zhang et al. (2018) and Platanios et al. (2019).",
"The proposed data uncertainty strategies outperform the baseline and existing measures.",
"This verifies our hypothesis that data uncertainty is of higher relevance in respect to the difficulty of an example for a NMT model than its sentence length and word rarity counterparts.",
"Specifically, the results show the utility of estimating the uncertainty on either the source or target side of a translation pair.",
"Among the two strategies, the target one performs better.",
"We attribute this to the fact that the target uncertainty brings a more direct reflex of the sentence generation difficulty, thus playing a crucial role in CL.",
"Moreover, joint, which provides a more comprehensive way to model data uncertainty, achieves the best results.",
"This success indicates that the two strategies are complementary to each other and the complexity of a translation pair is contributed by both sides.",
"We attempt three kinds of LMs to quantify data uncertainty.",
"As seen, all the models contribute to the model performance.",
"Concerning LMs trained on the monolingual side of a parallel corpus, KENLM and RNNLM get comparable translation qualities.",
"Besides, as a state-of-the-art LM, BERT has recently attracted a lot of interests since it learns from billions of external sentences.",
"As expected, it outperforms all the LMs trained on internal data.",
"Although this comparison is unfair, the results suggest that the performance of LM significantly affects the evaluation of data uncertainty.",
"Since the statistical approach can be faster developed and it does not rely on external data, we choose KENLM as the default in the subsequent experiments.",
"Effectiveness of Model Uncertainty In this experiment, we evaluate the impacts of different assessments on model competence.",
"Obviously, our approach M OD -U consistently gains improvements over the vanilla method S QRT with the same setting.",
"These results reveal that applying model uncertainty to determine the duration of each curriculum by the model itself is conductive to CL in NMT.",
"Moreover, the combination of data uncertainty and model uncertainty can progressively boost the model performance, confirming that the two methods are complementary to each other.",
"Different Baby Steps We further explore the effects of the number of baby steps on En De translation task.",
"The experiments are conducted on the proposed uncertainty-aware CL model as plotted in Figure 3.",
"The vanilla NMT system without using any curriculum strategy could be considered as the model that sets the total number of steps to",
"1. As seen, dividing training corpus into 4 baby steps is superior to other settings.",
"Before that, the translation performance increases with progressively subdividing baby steps, since the model with fine-grained steps can benefit more from CL.",
"When the total number of subsets is greater than 4, the tendency of translation qualities decreases.",
"A plausible explanation is that to train the model on an over-small subset leads to the problem of overfitting.",
"In this section, we evaluate the proposed approach on both IWSLT15 En Vi, WMT16 En De, as well as WMT17 Zh En tasks, as listed in Table",
"2. Our baseline TRANSFORMER and re-implemented existing methods outperform the reported results in Platanios et al. (2019), which we believe makes the evaluation convincing.",
"As seen, the proposed uncertainty-aware curriculum learning strategy consistently outperforms strong baselines and recent studies that exploit CL into NMT across language pairs.",
"These results demonstrate the universality and effectiveness of the proposed approach.",
"It is encouraging to see that the improvement does not diminish but enlarges with the increase of training data, indicating that the model is conducive to the large scale translation tasks.",
"Interestingly, our model with BERT is superior to that with KENLM trained on small scale data, while the gap becomes minor when KENLM learns from a larger training corpus (e.g. 20M Zh En task).",
"We attribute this to the fact that, with the use of the large-scale training examples, KENLM can describe its data distribution well, and the superiority of BERT tends to marginal in these tasks.",
"We conduct extensive analyses on En De task to better understand our model.",
"We investigate three problems: 1) whether the proposed model indeed speeds up the model convergence; 2) how different are between difficulty measures; and 3) how the model uncertainty exactly changes during training.",
"As aforementioned, one intuition of CL is to speed up the model convergence.",
"Figure 4 shows the learning curves of different models on En De validation set.",
"As seen, the conventional NMT model reaches the highest BLEU at 140k steps, while related CL method SQRT +R ARITY obtains the same performance at step 98k, which achieves 30% accelerate rate.",
"The acceleration effect is slightly asthenic than that reported in Platanios et al. (2019).",
"This could be explained by the fact that their examined models are trained with a batch of 5,120 tokens, which is much smaller than 32,768 used in our experiments.",
"The large batch facilitates the training (Popel and Bojar, 2018), thus weakening the acceleration effect.",
"In spite of that, our model converges 53.6% faster than the baseline to get the same BLEU score (step 65k), showing the action of the proposed method on speeding up the training.",
"It is interesting to investigate the discrepancy among data difficulty measures.",
"Accordingly, we compare the composition of the corresponding baby steps sorted by different difficulty methods.",
"Figure 5 shows the percentage of distinct sentence contained in each subset of our method (KENLM) to that in others (LENGTH , RARITY , and BERT ).",
"As seen, there exists considerable diversity among associated baby steps produced by our method and existing approaches.",
"Moreover, the difference in the middle period of curriculums, i.e. step 2 and step 3, is greater than that in step 1 and step 4.",
"This phenomenon reveals that the most simple and complex sentences quantified by different measures are relatively similar, and the main diversity lies in those sentences of which the difficulties hardly to be distinguished.",
"Therefore, we argue that the improvements of the proposed method may mainly contribute by the differences in these two steps.",
"Besides, the subsets divided by KENLM and BERT have big gaps, which suggest again that the performance of LM plays a crucial role in our approach.",
"In this section, we discuss the training process from the model uncertainty perspective.",
"For better illustration, we define the model confidence as the reciprocal of model uncertainty ( 1 /u mod ), since the two features are negative correlation (Dong et al., 2018; Wang et al., 2019a).",
"Figure 6 visualizes the curves concerning the average of model confidence on En De validation set during the curriculum learning.",
"We analyze those models trained on two baby steps divided by different data difficulty measures, i.e. KENLM, BERT , and RARITY , for comparison.",
"Obviously, different models draw similar changing trends of model confidence during training, that is, the model confidence first increases sharply, then drops and rises, eventually balances.",
"Surprisingly, the tendency highly accords with the psychology of human students when they getting into a new area, i.e. Dunning Kruger Curve (Figure 1, Kruger and Dunning, 1999).",
"That is, starting from scratch, peoples rapidly grow their knowledge, they therefore have a large amount of confidence.",
"Then, peoples begin to have awareness about how lit-tle they really know and are discouraged by their inability.",
"Over time, humans gradually improve, making them more and more confident, and experienced.",
"To some extent, both the artificial neural networks and human beings can be regarded as connectionist models (Munakata and McClelland, 2003).",
"Accordingly, this interpretation can be also used to explain the phenomenon in NMT training.",
"Such kind of fluctuates model confidence confirms that the curriculum duration should not be fixed, and the predefined strategies may be insufficient to cope with the model training.",
"In addition, the models trained in different curriculums with various difficulty measures perform distinct change amplitudes on model uncertainty, indicating the adaptability of our method.",
"These findings support our assumption that the model uncertainty is an effective and self-adaptive indicator to guide the CL. 6 Conclusion and Future Work We propose a novel uncertainty-aware framework to improve the two key components in CL for NMT, i.e. data difficulty measurement and curriculum arrangement.",
"Our contributions are mainly in: We propose to estimate the data uncertainty of each training example as its difficulty, which is more explainable and comprehensive.",
"We introduce a self-adaptive CL strategy that evaluates the model uncertainty to govern the curriculum by the model itself.",
"The extensive experiments on various translation tasks and model settings demonstrate the universal-effectiveness of the proposed framework.",
"Our method is able to achieve over 50% accelerate rate on model convergence.",
"Quantitative and qualitative analyses indicate Figure 6: Curves of model confidence ( 1 /u mod ) on En De validation set at different checkpoints.",
"that the model confidence is fluctuant at the training time.",
"It surprisingly draws a similar changing curve as human confidence.",
"As our model is not limited to machine translation, it is interesting to validate the proposed framework into other NLP tasks that need to exploit CL.",
"Another promising direction is to design more powerful training strategies to replace the baby step.",
"This work was supported in part by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of the Science and Technology Development Fund, Macau SAR and National Natural Science Foundation of China (Grant No. 045/2017/AFJ), the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2), and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST).",
"We thank the anonymous reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Typing every character in a text message may require more time or effort than strictly necessary.",
"Skipping spaces or other characters may be able to speed input and reduce a user's physical input effort.",
"This can be particularly important for people with motor impairments.",
"In a large crowdsourced study, we found workers frequently abbreviated text by omitting mid-word vowels.",
"We designed a recognizer optimized for expanding noisy abbreviated input where users often omit spaces and mid-word vowels.",
"We show using neural language models for selecting conversational-style training text and for rescoring the recognizer's n-best sentences improved accuracy.",
"On noisy touchscreen data collected from hundreds of users, we found accurate abbreviated input was possible even if a third of characters was omitted.",
"Finally, in a study where users had to dwell for a second on each key, sentence abbreviated input was competitive with a conventional keyboard with word predictions.",
"After practice, users wrote abbreviated sentences at 9.6 words-per-minute versus word input at 9.9 words-per-minute.",
"Experienced desktop and touchscreen typists can often achieve fast and accurate text input by simply typing all the characters in their desired text.",
"However, for some users, such quick and precise input is difficult due to a motor disability.",
"Such users may use a virtual touchscreen keyboard, but their touch locations may be slow and inaccurate, e.g. people with Cerebral palsy.",
"Other users may need to click keys by pointing at them with a heador eye-tracker and dwelling for a fixed time, e.g. people with amyotrophic lateral sclerosis (ALS).",
"When a person's typing is slow or inaccurate, word completions may provide more efficient input.",
"Word completions predict the most probable words based on the current typed prefix.",
"However, monitoring predictions carries a cognitive cost and may not always improve performance (Trnka et al., 2009).",
"Further, monitoring predictions can be difficult without visual feedback.",
"Eyes-free text input can be slow for users who are visually-impaired (Nicolau et al., 2019), and even slower for users who are motorand visually-impaired (Nel et al., 2019).",
"Finally, eyes-free text input may be needed in future augmented reality (AR) interfaces where visual feedback is limited or non-existent (e.g. due to lighting or device limitations).",
"In audio-only AR, it is still possible to type on an invisible virtual keyboard (Vertanen et al., 2013; Zhu et al., 2018).",
"All these cases motivate our interest in exploring alternatives to conventional word completion.",
"Here we investigate accelerating input by allowing users to skip typing spaces and mid-word vowels.",
"We decided to abbreviate in this manner based on past results on touchscreen text input without spaces (Vertanen et al., 2015, 2018), and a study we present here in which 200 people abbreviated email messages.",
"Our interaction approach of abbreviation is similar to features in commercial assistive interfaces (e.g. Grid 3, NuVoice, Lightwriter).",
"Our whole utterance prediction approach is similar to features in touchscreen phone keyboards and in commercial assistive interfaces (e.g. dwell-free sentence input in Tobii Communicator 5).",
"We modified a probabilistic recognizer to accurately expand abbreviated input by",
"1) improving our language models by selecting well-matched training data via a neural network,",
"2) modifying the search to model the insertion of mid-word vowels, and",
"3) adding a neural language model to the search.",
"We validate our method in computational experiments on over six thousand sentences typed on touchscreen devices.",
"We found that even when 28% of letters were omitted, we recognized sentences with no errors 70% of the time.",
"Selecting 6575 from the top three sentences, user could obtain their intended sentence 80% of the time.",
"Finally, we compare word completion and abbreviated sentence input in a user study.",
"In this study, users had to dwell for one second to trigger a tap.",
"We found sentence input was slightly slower than using word completions, but still saved substantial time compared to typing all the characters.",
"Users obtained their desired sentence 68% of the time.",
"Abbreviated input.",
"Demasco and McCoy (1992) investigated expanding uninflected words (e.g. ap-ple eat john) into syntactic sentences (e.g. the apple is eaten by john).",
"Gregory et al. (2006) created abbreviation codes (e.g. rmb = remem-ber).",
"Users selected words from a menu or by typing a code's letters.",
"Typing codes was the most efficient.",
"Pini et al. (2010) detected abbreviated phrases using a Support Vector Machine and expanded them via a Hidden Markov Model (HMM).",
"Their detector and expander were 90% and 95% accurate respectively.",
"Users decreased keystrokes and input time by 32% and 26% respectively.",
"Shieber and Nelken (2007) allowed users to drop non-initial vowels and repeated consonants.",
"This deleted 26% of the total characters.",
"Using an n-gram word language model and a spelling transducer for each word, they expanded abbreviated text at an error rate of 3.3%.",
"Our work differs in that we:",
"1) removed spaces between words,",
"2) did not remove consecutive consonants,",
"3) used a character language model with no fixed vocabulary.",
"Tanaka-Ishii et al. (2001) explored Japanese text input with digits.",
"They used an HMM to expand a sequence of digits into characters.",
"Users saved 35% of keystrokes typing on a mobile phone.",
"Han et al. (2009) also used an HMM to expand abbreviations learned from a corpus of Java code.",
"Their approach did not require memorizing abbreviations and provided incremental feedback while typing.",
"In two studies with 31 users, Willis et al. (2002, 2005) identified common abbreviation behaviors such as vowel deletion, phonetic replacement, and word truncation.",
"They did not release their data and it was on a relatively small number of people.",
"Based on their work, we conducted an abbreviation study with 200 users and also share our data.",
"Data selection.",
"Mismatch between the training and target text domains can lead to sub-optimal language models.",
"A variety of methods have been developed to address this problem.",
"Lin et al. (1997), Gao et al. (2002), and Yasuda et al. (2008) used language modeling and in-domain perplexity to select training data.",
"In this approach, a language model is trained on a small in-domain dataset.",
"Training instances from an out-of-domain dataset are selected if they are below some perplexity threshold.",
"Other work has investigated data selection using cross-entropy or cross-entry difference between in-and out-of-domain datasets (Axelrod et al., 2011; Moore and Lewis, 2010; Schwenk et al., 2012; Rousseau, 2013; Mansour et al., 2011; Vertanen and Kristensson, 2011b).",
"In this approach, an in-domain and out-of-domain language models are first trained.",
"Sentences are selected based on a cross-entropy threshold or cross entropy difference calculated from the two language models.",
"Hildebrand et al. (2005) and Lu et al. (2007) applied information retrieval based techniques to select data.",
"Other method include selecting based on infrequent n-gram occurrences (Gasco et al., 2012; Parcheta et al., 2018), or Levenshtein distance and word vectors (Chinea-Rios et al., 2018).",
"Duh et al. (2013) employed the data selection method of Axelrod et al. (2011), which builds upon Moore and Lewis (2010)'s approach.",
"The main distinction is that they used neural language models for selection rather than n-gram models.",
"Chen and Huang (2016), Peris et al. (2017), and Chen et al. (2016) selected based on convolutional and bidirectional long short-term memory neural networks.",
"Bidirectional neural models like BERT (Devlin et al., 2019) has proven effective in many natural language tasks.",
"Ma et al. (2019) used BERT for domain-discriminative data selection.",
"Hur et al. (2020) used BERT for domain adaptation and instance selection for disease classification.",
"Our selection method is similar to these methods but focuses on selecting conversational-style sentences.",
"Decoding noisy input.",
"Text entry interfaces often use a probabilistic decoder to infer a user's text from time sequence data (Vertanen et al., 2015; Kristensson and Zhai, 2004; Zhai et al., 2002; Zhai and Kristensson, 2008).",
"Typically, a keyboard likelihood model and a language model prior are used to infer a user's text from input with incorrect, missing, or extra characters.",
"To date, these approaches have mostly used n-gram language models.",
"Ghosh and Kristensson (2017) corrected typos in tweets to a low character error rate of 2.4% by using a character convolutional neural network, an encoder with gated recurrent units, and a decoder with attention.",
"The twitter typo data contained sequences with a similar number of characters to the target.",
"In our work, we show acceptable character error rate can be achieved on input not only with typos, but also with missing spaces and mid-word vowels.",
"We show the advantage of using a recurrent neural network language model (RNNLM) directly in the decoder's search or to rescore hypotheses.",
"To better understand how people do free-form abbreviation, we conducted a study on Amazon Mechanical Turk.",
"As a pilot, we had 26 workers abbreviate an email from the Enron mobile data set (Ver-tanen and Kristensson, 2011a).",
"We designed our instructions based on Willis et al. (2005).",
"Workers abbreviated the same email three times.",
"Each time the worker was asked to abbreviate in three ways: heavily, as little as possible, or as they saw fit.",
"In our pilot, we found workers abbreviated similarly regardless of instructions.",
"Thus, we designed a single set of instructions for our main study that asked workers to imagine they were using artifi-cially intelligent (AI) software that was good at guessing their intended text from an abbreviated form.",
"They were told to shorten words by removing or changing letters, but they should avoid shortening words that might be hard for the system to guess and that they should not omit words entirely.",
"See the appendix for our instructions.",
"Our supplementary data contains all the data from the study.",
"We recruited 200 workers who each abbreviated ten emails.",
"In our analysis, we used 1,308 of the 2,000 emails.",
"We filtered out emails that did not have the same number of words as their original emails.",
"This filtering helped us to align the sentences by word.",
"Punctuation was removed except apostrophes and at signs.",
"We lowercased the text.",
"We found 90% of abbreviated words were an in-order subsets of their full spelling.",
"On average, 21% of a word's letters were deleted.",
"Of these, 16% were consonants and 42% were vowels.",
"In the set of six common letters in English, e t o a i n , consonants were less likely to be deleted than vowels.",
"Surprisingly, the six least common letters, z q x j v k were often deleted.",
"Considering letter position in words, 14% of first letters, 35% of last letters, and 90% of middle letters were deleted.",
"Our study confirmed some of our initial beliefs about how people would do free-form abbreviation.",
"We found people deleted vowels more frequently than consonants and people usually retained the first letter of words.",
"Other aspects we found surprising such as the frequent deletion of uncommon letters.",
"The percentage of middle letters deleted was high.",
"One reason for this was some workers persistently only used the first letter of each word.",
"We selected 564 passages where each word was an in-order subset of the full word.",
"We implemented a search that proposed inserting all characters at all positions in words in workers' input.",
"The search was guided by the language models described in Vertanen et al. (2015).",
"We used beam search to keep the search tractable.",
"See the appendix for example input and the expanded output.",
"We measured accuracy using character error rate (CER).",
"CER is the number of insertions, substitutions, and deletions required to transform the expanded text into the original text (typically multiplied by 100).",
"As shown in Figure 1, the expansion had a CER of less than 5% for compression of up to 30%.",
"Beyond that, much of the input was only the first letter of each word and our algorithm simply imagined probable text consistent with the provided letters.",
"We think these results are promising given our search simply proposed the insertion of all characters at all positions.",
"We think abbreviated input may most benefit users with slow input.",
"From this point on, we focus on optimizing our system for use by Augmentative and Alternative Communication (AAC) users.",
"AAC users may not be able to speak due to a condition such as ALS.",
"AAC users slow input rate make taking part in conversations difficult (Arnott et al., 1992).",
"Sentence abbreviation may be particularly useful for short phrases with predictable language.",
"Our search-based approach to abbreviation expansion relies crucially on a well-trained language model.",
"For a language model to work well it needs to be trained on data that is suited to the target domain.",
"Ideally we would train our language models on large amounts of conversational communications written by AAC users.",
"For privacy and ethical reasons, it is difficult to find large amounts of such data.",
"Therefore, in this section, we explore selecting training data from an out-of-domain dataset using a small amount of in-domain AAC-like data.",
"As our in-domain set, we used 29 K words of AAC-like crowdsourced messages (Vertanen and Kristensson, 2011b).",
"For our out-of-domain training set, we used one billion words of web text from Common Crawl 1 .",
"We only kept sentences consisting of AZ, apostrophes, spaces, commas, periods, question marks, and exclamation point.",
"We compared three ways to select training sentences: Random selection.",
"Cross entropy difference selection.",
"Following Moore and Lewis (2010), we trained an in-domain 4-gram word language model on our AAC-like data, and an out-of-domain 4-gram model on a random subset of web text (disjoint from the training set).",
"We calculated the cross-entropy difference of training sentences using the inand out-of-domain models.",
"We selected the highest scoring sentences until we reached 100 million characters.",
"BERT selection.",
"BERT is a language representation model built using self-attentive transformers (Devlin et al., 2019).",
"We took the inand out-of-domain data from the previous step and labeled each sentence based on its set.",
"We then trained a binary classifier using bert-base-uncased 2 .",
"We ran our classifier on each sentence in the training set yielding the probability of a sentence belonging to the in-domain set.",
"We selected the top sentences until we reached 100 million characters.",
"As shown in Table 1, random sentences from Common Crawl averaged 30 words.",
"The cross-entropy 1 https://commoncrawl.org/ 2 https://github.com/google-research/bert/ Method Words OOV Enron Daily Enron sent.",
"difference and BERT methods selected shorter sentences of 14 and 11 words respectively.",
"This is likely good given our goal of supporting short, conversational messages.",
"For comparison, sentences averaged 13 words in the in-domain AAC set and 10 words in DailyDialog (Li et al., 2017).",
"DailyDialog consists of two-sided everyday dialogues.",
"We calculated the out-of-vocabulary (OOV) rate with respect to a vocabulary of 100 K words.",
"Our randomly selected sentences had a much higher 1.2% OOV rate compared to cross-entropy and BERT selected data at 0.3% and 0.4% respectively (Table 1).",
"Again this suits our purpose as we suspect abbreviated input is best suited for sentences without uncommon words.",
"For comparison, the OOV rates of DailyDialog and our AAC-like set were both low at 0.2%.",
"See the appendix for samples of sentences selected by each method.",
"We trained 12-gram character language models with Witten-Bell smoothing on each 100 million character training set.",
"We trained without count cutoffs and did not prune the models.",
"The binary BerkeleyLM (Pauls and Klein, 2011) size of the random, cross-entropy difference, and BERT models were 1.7 GB, 1.3 GB, and 1.2 GB respectively.",
"We evaluated these character language models on the Enron mobile (Vertanen and Kristensson, 2011a) and DailyDialog (Li et al., 2017) datasets.",
"Before evaluation, we split each dialog turn in DailyDialog into single sentences and randomized their order.",
"We calculated the average per-character perplexity of these two datasets.",
"As shown in Table 1, the cross-entropy and BERT models had perplexities around 6% lower than the random model with the BERT model having the lowest perplexity.",
"We also compared the recognition accuracy of the three language models using the recognizer and data to be described in the next section.",
"As shown in Table 1 (right column), these perplexities reductions did translate into improvements in recognition accuracy on touchscreen input where spaces and 6578 50% of mid-word vowels were removed.",
"We extended the VelociTap touchscreen keyboard decoder (Vertanen et al., 2015).",
"VelociTap searches for the most likely text given a sequence of 2D taps.",
"Each tap has a likelihood under a 2D Gaussians centered at each key.",
"Taps can be deleted without generating a character by incurring a deletion penalty.",
"Adding characters to a hypothesis incur penalties based on a character language model.",
"The decoder can insert characters without consuming a tap.",
"A general insertion penalty allows all possibles characters to be inserted.",
"The decoder also has separate space and apostrophe insertion penalties.",
"We extend this further by adding a vowel insertion penalty for inserting the vowels: a , e , i , o , u .",
"However, this penalty is only used if the prior character is not a space.",
"This models that vowels should not be skipped at the start of words.",
"The search is performed in parallel, with different threads extending partial hypotheses.",
"When a hypothesis consumes all taps, it is added to an n-best list.",
"To keep the search tractable, a config-urable beam controls whether partial hypotheses are pruned.",
"A wider beam searches more thoroughly, but at the cost of more time and memory.",
"To date, VelociTap has only used n-gram language models.",
"We extend the decoder to use a recurrent neural network language model (RNNLM) either as a replacement for the character n-gram during search, or to rescore the n-best list.",
"When used for rescoring, we compute the log probability of each sentence under the RNNLM.",
"We multiply this probability by an RNNLM scale factor and add the result to a hypothesis' log probability.",
"We trained an RNNLM on the BERT-selected training data.",
"After a hyperparameter search, we settled on 512 LSTM units, a character embedding size of 64, two hidden layers, a learning rate of 0.001, and a dropout probability of 0.5.",
"We trained using the Adam optimizer.",
"On the Enron Mobile and DailyDialog test sets, our RNNLM had a perplexity of 4.50 and 2.64 respectively.",
"To allow efficient hypothesis extension during RNNLM-based search, we augmented our partial hypotheses to track the state of the neural network.",
"However, as we will see, RNNLM search required substantial memory and computation time.",
"While we experimented with using a GPU for RNNLM queries, we found parallel CPU search was faster.",
"We tested our improvements on noisy, abbreviated, touchscreen keyboard input.",
"We wanted noisy input to ensure our system was robust to mistakes AAC users may make when typing (e.g. when using a mouth stick or an eye-tracker).",
"We created a test and development set using data collected on touchscreen phones (Vertanen et al., 2015, 2013) and watches (Vertanen et al., 2018, 2019).",
"We limited our data to sentences from the Enron Mobile set.",
"We concatenated taps to create single sentence sequences without spaces.",
"We removed sentences where the number of taps did not match the length of its reference.",
"This resulted in a test and development set of 6,631 and 731 sentences respectively.",
"We played back taps to our decoder, deleting mid-word vowels with a given vowel drop probability .",
"We tested drop probabilities of 0.5 and 1.0.",
"In our test set, 17.7% of characters were spaces.",
"With a drop probability of 0.5, 27.9% of characters (including spaces) were deleted.",
"If all mid-word vowels were dropped, 38.2% of characters were saved.",
"For the n-gram search and RNNLM rescoring setups and two drop probabilities, we tuned decoder parameters to minimize CER on the development set.",
"Tuning used a random restart hill-climbing approach.",
"We tuned each of the four setups for 600 CPU hours.",
"Due to the computational costs, we used the parameters found for the n-gram search for the RNNLM search.",
"We report the character error rate (CER), as well as word error rate (WER), and sentence error rate (SER) on our test set.",
"We also report the Top-5 SER which is the lowest SER of the top five hypotheses.",
"We searched in parallel using 24 threads on a dual Xeon E5-2697 v2 server.",
"This large number of threads mainly sped up the RNNLM search.",
"As shown in Table 2, using the RNNLM in the search instead of the n-gram model reduced error rates by 23% and 12% relative for a vowel drop probability of 0.5 and 1.0 respectively.",
"This however came at a much higher cost with decoding taking much longer and requiring more memory.",
"Using the n-gram model for search and rescoring with the RNNLM resulted in similar error rates 6579 Decoder Drop CER WER SER Top-5 SER Decode Memory search prob.",
"to searching with the RNNLM, but only caused modest increases in decode time and memory.",
"Dropping half of vowels, we recognized the correct sentence 72% of the time using RNNLM rescoring.",
"If we assume an interface allowing selection from the top five results, this increased to 85%.",
"Dropping all vowels was harder; we recognized the correct sentence only 59% of the time.",
"Providing the top five sentences increased this to 74%.",
"Interestingly, our vowel drop probability 1.0 setups were faster.",
"We investigated this by varying the tuned beams, measuring CER on the development set.",
"We found for drop 0.5, a narrower beam increased CER while a wider beam provided no gain.",
"For drop 1.0, a narrower beam also increased CER, but even a modestly wider beam increased CER slightly (3% relative).",
"The tuned penalty for vowel insertion was small (0.8 probability).",
"We observed in sentences with errors at a narrow beam, a wider beam sometimes resulted in more inserted vowels.",
"This may have allowed more probable text, but ultimately a higher CER.",
"This suggests we may need a more nuanced model of how users abbreviate, e.g. by penalizing contiguous vowel insertions.",
"Thus far, we tested abbreviated sentence input only in offline experiments.",
"To see if our method offers competitive performance in practice, we conducted a user study using a touchscreen web application.",
"We designed a touchscreen keyboard that runs in a mobile web browser.",
"The keyboard has two modes: Word This mode has the keys AZ, apostrophe, spacebar, and backspace (Figure 2, left).",
"The keyboard has three prediction slots above the keyboard.",
"The left slot shows the exact letters typed.",
"The center and the right slots show predictions based on a user's taps and any previous text.",
"Predictions and recognition occur after each key press.",
"Pressing the spacebar normally selects the left slot.",
"Similar to the iPhone keyboard, if a user's input is noisy and we predict an auto-correction with high probability, we highlight this slot instead.",
"In this case, pressing spacebar selects the auto-correction.",
"A done button signals completion of a sentence.",
"Sentence This mode is similar but has no spacebar or suggestion slots (Figure 2, right).",
"Input is recognized only after the done button is pressed.",
"To simulate users with a slow input rate, users had to dwell on a key for one second to click it.",
"We chose one second because this is a common default setting in dwell-based eye typing, for example, 1.2 seconds in Tobii Communicator.",
"We display a progress circle around a user's finger location showing the dwell time.",
"After a click, the keyboard border flashes and the nearest key is added to the text area above the keyboard.",
"Due to memory and computation requirements, we ran our decoder on a server at our university.",
"The keyboard client makes requests to the server to recognize input.",
"In word mode, at the start of 6580 Metric WORDSENTENCE Statistical test Entry rate (wpm) 9.9 1.5 [6.6, 12.4] 9.0 1.5 [5.7, 11.5] t(27) = -3.92, r = 0.60, p < 0.001 Error rate (CER %) 0.3 0.5 [0.0, 2.5] 7.2 5.4 [1.0, 23.6] t(27) = 6.72, r = 0.79, p < 0.001 Table 3: User performance in each condition in our user study.",
"each key press, we request predictions for the keyboard slots.",
"In sentence mode, we request sentence recognition at the start of pressing the done button.",
"By making the server request at the start of a key press, we effectively eliminated the need to wait for predictions.",
"The average round trip time for requests in our user study was 0.41 s (sd 0.21) in the word mode and 0.58 s (sd 0.29) in sentence mode.",
"We recruited 28 Amazon Mechanical Turk workers.",
"The study took 3040 minutes.",
"Workers were paid $10.",
"We also offered a $5 bonus for the fastest 10% of workers in each condition subject to having a CER below 5%.",
"This was a within-subject experiment with two counterbalanced conditions: WORD and SENTENCE .",
"The conditions used the word and sentence mode of the keyboard respectively.",
"Workers typed 26 phrases in each condition.",
"The first two were practice phrases which we did not analyze.",
"Workers wrote phrases written by people with ALS for voice banking purposes (Costello, 2014).",
"We used phrases with 36 words (1,182 total phrases).",
"Workers received a random set of phrases and never wrote the same phrase twice.",
"Table 3 and Figure 3 show results and statistical tests.",
"We calculated entry rate in words-per-minute (wpm).",
"We considered a word to be five characters including space.",
"We measured the entry time from a worker's first tap until they finished dwelling on the done button.",
"The entry rate in WORD was faster at 9.9 wpm versus SENTENCE at 9.0 wpm.",
"This 0 2 4 6 8 10 1 2 3 4 5 6 SentenceWord E n t r y r a t e ( w p m ) Phrase block Figure 4: Entry rates for each block of four phrases.",
"As shown in Figure 4, participants started out slower in SENTENCE compared to WORD , but the entry rate gap closed as they wrote more phrases.",
"We averaged performance in the first eight and last eight phrases.",
"In WORD , the entry rate was 9.7 wpm in the first set and 9.9 wpm in the last set.",
"In SENTENCE , the entry rate was 8.6 wpm in the first set and 9.6 wpm in the last set.",
"This is promising, as perhaps with more practice, sentence abbreviation might achieve comparable speed but without requiring monitoring of word predictions.",
"Participants were less accurate in SENTENCE with a CER of 7.2% versus 0.3% in WORD .",
"This difference was significant (Table 3).",
"Participants obtained a completely correct phrase 97% of the time in WORD , but only 68% in SENTENCE .",
"We think the lower accuracy in SENTENCE was mostly due to some users abbreviating phrases too aggressively.",
"In phrases recognized completely correctly, the compression rate was 35%.",
"In phrases with recognition errors, the compression rate was 43%.",
"We classified phrases in SENTENCE according to their input length versus the reference length minus spaces and mid-word vowels.",
"252 phrases had the correct length, 162 were longer, and 258 were shorter.",
"These sets correspond to phrases that were likely correctly abbreviated, under-abbreviated, and over-abbreviated.",
"The error rates of these sets were 3.2%, 2.1%, and 14.0% respectively.",
"We found five workers over-abbreviated 20 or more phrases.",
"Removing these workers lowered the overall CER to 5.7%.",
"While not as accurate as word input, sentence input did have acceptable accuracy 7 9 11 Sentence Word 0 5 10 15 20 Error rate (CER %) E n t r y r a t e ( w p m ) Figure5: Participants'entryanderrorrateineachconditionoftheuserstudy.",
"userswhohaveaslowinputrate;fasttypistsmayonlybeslowedbythecognitiveoverheadsofde-cidingwhatletterstoomitorbydisruptingtheirmusclememoryfortypingfamiliarwords.Thisledustolimitingtheinputrateinourstudybyrequir-ingusersdwellforonesecond.Whilethisstudyallowedustoconfirmourabbreviationmethodiscompetitivewithaconventionalkeyboardwithwordpredictions,thisneedsvalidationwithuserswithactualinputratelimits.AACuserinterac-tionmayfeaturemoreimprecisekeypresses,moreaccidentalkeypresses,andmayintroducecom-plicationsrelatedtoattendingtowordpredictions(e.g.themidastouchproblemineyetracking).Further,weonlytestedoneinputrate,itispossibleourmethodmaybebetterorworseatdifferentin-putspeeds.Wethinkourapproachmayalsoofferadvantagesforeyes-freetextinput,butthisalsoneedscomparisonagainstconventionaleyes-freeinputapproaches(e.g.iPhone'sVoiceOverfeature).",
"Our",
"modelmaybenefitfrommoresophisticatedmodel-ingonhowandwhenvowelsareinserted(e.g.pe-nalizingrepeatedvowelinsertions).Ideallyim-provedmodelswouldbebasedondatacollectedbyusersengagedinactualabbreviatedinput.Asour 6582 results show, correctly inferring the intended sentences was challenging even when we asked users to obey a few simple behaviours, namely removing spaces and mid-word vowels.",
"While an ideal system would support a wide-range of abbreviation behaviors and even adapt to individuals, we suspect this may be challenging given our current lack of training data on this task.",
"In our initial study, participants abbreviated email text that was displayed visually.",
"An alternative approach would be to play audio of the text.",
"While this might be a more realistic abbreviation task, it also presents practical challenges to participants such as remembering the text and spelling any difficult words.",
"Perhaps an even more externally valid approach would be to have workers compose novel abbreviated sentences.",
"This would require another step to obtain the unabbreviated compositions (Vertanen and Kristensson, 2014; Gaines et al., 2021).",
"Given we now have a competent initial system, it would be interesting to undertake such a data collection effort.",
"Our results suggest a simple correction interface based on selecting from the top sentences would often, but not always work.",
"Designing an efficient and easy-to-use interface for correcting a few words within such sentence results would be interesting future work.",
"This might be especially challenging to design for users with diverse motor abilities.",
"We used language models trained on only 100 M characters of text.",
"While this allowed us to compare the efficacy of the language model types and decoder configurations, substantially more training data is available along with neural architectures that scale to large training sets, e.g. GPT-2 (Rad-ford et al., 2019).",
"We suspect further recognition accuracy gains are possible for abbreviated, noisy input by incorporating such models.",
"Further, we could likely obtain additional improvements from the n-gram model by training on more data and then pruning the model to reduce its size.",
"We avoided doing this in this work to fairly compare the n-gram and RNN language models when trained on the same amount of text.",
"Our language model training data was drawn from Common Crawl.",
"We used a corpus of AAC-like crowdsourced messages to select training sentences from Common Crawl.",
"Other sources of training data such as Twitter or Reddit are likely more conversational in style.",
"It would be interesting to investigate whether data selecting from a more targeted large-scale training source provides additional improvements in language modeling.",
"We did not specifically investigate how our method would support text containing difficult words such as acronyms or proper names.",
"Users can often anticipate and alter their input behavior to avoid auto-correct errors, e.g. by force (Weir et al., 2014), by long pressing a key (Vertanen et al., 2019), or by switching to a precise input mode (Dudley et al., 2018).",
"Similarly, our abbreviated input method needs a way to specify words that should not be expanded or auto-corrected.",
"At the onset, we did not know that our proposed abbreviation technique would be competitive to conventional word completion.",
"The results from our user study tell us we need to make further improvements to our recognition, better train users to abbreviate in supported ways, and conduct a longitudinal evaluation.",
"Further, testing an abbreviated input prototype with AAC users will undoubtedly lead to new insights.",
"This paper is a first step in producing a viable prototype for testing with users with rate-limited input abilities.",
"We explored accelerating text communication by abbreviated sentence input.",
"We conducted a user study to learn how users abbreviate.",
"We showed the efficacy of a neural classifier to select conversational-style training instances from a large text corpus.",
"We found that dropping spaces and mid-word vowels can provide compression of sentences from 28% to 38%.",
"Such abbreviated and noisy input can often be expanded correctly 59% to 72% of the time.",
"We also showed how the accuracy of a statistical virtual keyboard decoder can be improved by using a neural language model to re-rank the top recognition results.",
"Finally, after practice, users wrote only slightly slower using sentence abbreviated input at 9.6 words-per-minute compared to a conventional keyboard with word predictions at 9.9 words-per-minute.",
"If a phrase was abbreviated by removing spaces and mid-word vowels, our system expanded the abbreviated input to the intended phrase 90% of the time.",
"This material is based upon work supported by the NSF under Grant No.",
"IIS-1750193."
] | [
"abstain",
"abstain",
"abstain",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"result",
"abstain",
"method",
"other",
"other"
] |
[
"We propose a general framework to study language emergence through signaling games with neural agents.",
"Using a continuous latent space, we are able to",
"(i) train using backpropagation,",
"(ii) show that discrete messages nonetheless naturally emerge.",
"We explore whether categorical perception effects follow and show that the messages are not compositional.",
"In a signaling game, artificial agents learn to communicate to achieve a common goal: a sender sees some piece of information and produces a message, which is then sent to a receiver that must take some action (Lewis, 1969; Skyrms, 2010).",
"If the action is coherent with the sender's initial piece of information, the choice of the message and its interpretation is reinforced.",
"For instance, in a referential game, sender and receiver see a set of objects, and the sender knows which of these the receiver must pick; the sender then sends a message to the receiver, who must interpret it to pick up the right object (Lazaridou et al., 2017, 2018; Havrylov and Titov, 2017; Chaabouni et al., 2019).",
"This setting has been used to study the factors influencing the emergence of various fundamental properties of natural language, such as compositionality (Kirby et al., 2015; Franke, 2016; Steinert-Threlkeld, 2016; Mordatch and Abbeel, 2018; Lazaridou et al., 2018; Choi et al., 2018).",
"In this paper, we add focus on two other so-called de-sign features' of natural language (Hockett, 1960): discreteness (i.e. words form clusters in acoustic space), and displacement (i.e. efficient communication can occur about objects and facts beyond the immediate context of the conversation).",
"From an implementation point of view, we follow the recent literature which has shown that a signaling game is essentially an autoencoder setting, with the encoder playing the role of the sender, and the decoder the role of the receiver (see Fig. 1).",
"In this literature, however, the discreteness of the communication protocol is assumed, since the networks then traditionally use a (normally sequential and) discrete latent space (Havrylov and Titov, 2017; Chaabouni et al., 2019; Kharitonov et al., 2019).",
"Our main contribution is a generalization of the current implementation of signaling games as au-toencoders.",
"Our implementation covers a broader variety of signaling games, and it crucially incorporates the possibility of displacement and makes no a priori assumption of discreteness.",
"Our main result is that under appropriate conditions, discreteness emerges spontaneously: if the latent space is thought about as a continuous acoustic space, then trained messages form coherent clusters, just like regular words do.",
"We also show that the messages are not compositional.",
"In addition to contributing to our understanding of the emergence of communication protocols with features like natural language, our results have technical significance: by using a continuous communication protocol, with discreteness spontaneously emerging, we can train end-to-end using standard backpropagation, instead of reinforcement learning algorithms like REINFORCE and its refinements (Williams, 1992; Schulman et al., 2015; Mnih et al., 2016), which are difficult to use in practice.",
"A related line of work attempts to avoid the dif-ficulties of reinforcement learningused when there are stochastic nodes in a computation graph by reparameterization and/or non-stochastic estimators (Bengio et al., 2013; Schulman et al., 2015).",
"In the emergent communication case, where the stochastic nodes are discrete (e.g. sampling a message from a sender distribution), the Gumbel-Softmax estimator has become increasingly popular (Jang et al., 2017; Maddison et al., 2017).",
"That work enables standard backpropagation to be used for training by optimizing approximations to the true reinforcement learning signal.",
"By contrast, we do not approximate the discrete RL learning signal, but rather ask under what conditions discreteness will emerge.",
"Several earlier papers explore similar topics in the emergence of discrete symbols.",
"Nowak et al. (1999) show that the division of the acoustic space is an emergent property of language use under noise.",
"It assumes that speakers have a fixed language and asks which such ones are stable.",
"In our setting, the language itself is changing as the result of reinforcement from communication and transmission itself is not noisy.",
"De Boer (2000) simulates the emergence of vowel systems in artificial agents modeled after phonetic production and perception in humans, resulting in a self-discretizing acoustic space and a vowel system that resembles human ones.",
"This makes the agents much closer to what we know about humans, but also limits its scope.",
"Results about emergent communication can tell us both about the emergence of human language, but also about communication protocols in general, that may be used by very different agents, e.g. autonomous ones, or animals (Steinert-Threlkeld et al., 2020).",
"We here introduce a general communication game setting, which we call Function Games.",
"Our games contain three basic components:",
"(i) a set of contexts C ,",
"(ii) a set of actions A ,",
"(iii) a family of functions F , from contexts to actions.",
"One play of a Function Game game runs as follows: 1. Nature chooses f F and a context c C .",
"2. Sender sees the context c and f .",
"3. Sender sends a message m to Receiver.",
"4. Receiver sees a possibly different context c (cid:48) and the message m and chooses an action a (cid:48) .",
"5. Both are rewarded' iff a (cid:48) = f ( c (cid:48) ) .",
"Abstractly, the function f represents some piece of knowledge available primarily for Sender, and which determines what action is appropriate in any given context.",
"Two concrete interpretations will help illustrate the variety of communication protocols and goals that this framework encompasses.",
"Generalized referential games.",
"A reference game is one in which Sender tries to get Receiver to pick the correct object out of a given set (Skyrms, 2010; Lazaridou et al., 2017, 2018; Havrylov and Titov, 2017; Chaabouni et al., 2019).",
"Here, contexts are sets of objects (i.e. an m n matrix, with m objects represented by n features).",
"Normally (though we will drop this assumption later), c (cid:48) = shuffled ( c ) : Sender and Receiver see the same objects, but in a different arrangement.",
"Actions are the objects, and the functions f F are choice functions : f ( c ) c for every context c .",
"Belief update games.",
"We will mostly focus on the previous interpretation, but illustrate the generality of the setting with another interpretation here.",
"Contexts can represent the (possibly different) belief states of the agents.",
"Actions' can represent updated belief states ( A = C ), the different functions in F then representing how to update an agent's beliefs in the light of learning a particular piece of information (passed directly to Sender, and only through the message to Receiver).",
"Because we are interested in the simultaneous emergence both of discrete and of compositional signals, we use a Function Game called the Extremity Game designed to incentivize and test rich compositionality (Steinert-Threlkeld, 2018, 2020).",
"In this game, one may think of the n dimensions of the objects as gradable properties, e.g. size and darkness, so that a 2D object is determined by a given size and shade of gray.",
"For the functions, we set F = { arg min i , arg max i : 0 i < n } .",
"An emerging language may contain compositional messages like MOST + BIG ', LEAST + DARK '.",
"Our model (Figure 1) resembles an encoder-decoder architecture, with Sender encoding the context/target pair into a message, and Receiver decoding the message (together with its context c (cid:48) ) into an action.",
"Both the encoder and decoder are multi-layer perceptrons with two hidden layers of 64 ReLU units (Nair and Hinton, 2010; Glorot et al., 2011).",
"A smaller, intermediate layer without an activation function bridges the encoder and decoder and represents the transformation of the input information to messages.",
"We manipulate the following parameters: Context identity.",
"In the shared setting, Receiver sees a shuffled version of Sender's context ( c (cid:48) = shuffled ( c ) ).",
"In the non-shared setting, Receiver's context c (cid:48) is entirely distinct from Sender's.",
"This forces displacement and may incentivize compositional messages, since Sender cannot rely on the raw properties of the target object in communication.",
"Context strictness.",
"In strict contexts, there is a one-to-one (and onto) correspondence between F and A (as in the original Extremity Game from Steinert-Threlkeld, 2018, 2020).",
"In non-strict contexts, an object may be the arg max or arg min of several dimensions, or of no dimension.",
"In all experiments, the latent space (message) dimension is always 2, and objects have 5 dimensions.",
"Strict contexts therefore contain 10 objects, while non-strict contexts contain 5, 10, or 15 objects.",
"We use the Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001, 1 = 0 .",
"9 , and 2 = 0 .",
"999 .",
"The model is trained for 5,000 steps by feeding the network mini-batches of 64 contexts concatenated with one-hot function selectors.",
"The network's loss is taken as the MSE between the target object f ( c (cid:48) ) and the object generated by the Receiver.",
"For each setting of the above parameters, we run 20 trials with different random seeds.",
"1 5 Results 5.1 Communicative success We measure the communicative success of the network by calculating the accuracy of recovering the correct object from c (cid:48) .",
"Receiver's prediction is considered correct if its output is closer to f ( c (cid:48) ) than 1 The project's code for extension and reproduction is available at https://github.com/0xnurl/signaling-auto-encoder.",
"to all other objects in c (cid:48) .",
"Accuracy of the different settings is reported in Table 1. While the network handles displacement well ( non-shared contexts ), the model struggles with non-strict contexts.",
"Note that although accuracy is not 100% , it is still well above chance, since e.g. for a context of 10 objects random guessing yields an expected accuracy of 10% (which we observe in our model before training).",
"Figure 2 depicts message vectors sampled from the latent space layer, before and after training.",
"It is apparent that discrete messages emerge from the imposed learning regime.",
"We measure cluster tendency more quantitatively through two measures, one considering Sender's production , and the other Receiver's perception .",
"First, we sample 100 contexts, and collect the output of the trained encoder for each of these contexts combined with each possible function f .",
"We apply an unsupervized clustering algorithm to this set of produced messages (DBSCAN, Ester et al., 1996, with (cid:15) = 0 .",
"5 ).",
"A label is assigned to each cluster using the ground truth: the label of a cluster is the function f that was most often at the source of a point in this cluster.",
"This allows us to compute F1-scores, which are reported in Table 2. The model reached near-optimal clusteriza-Shared Non-shared Strict 10 objects 1 .",
"tion measures in 7 out of 8 parameter settings, with the Non-strict, Non-shared context with 5 objects being the exception.",
"The second approach is akin to studying perception.",
"Given the clusterization of the message space, we sample new messages from each cluster, and test Receiver's perception of these artificial' messages, which have never been produced by Sender.",
"To sample artificial messages, we take the average of 10 messages from a (now labelled) cluster.",
"These artificial messages are fed to Receiver for 100 different contexts.",
"The output object accuracy for these artificial messages is shown in Table 3. The model achieves recovery accuracy similar to when interpreting actual messages.",
"In sum, we can identify discrete, abstract regions of the latent space corresponding to different functions in the input, just like words form clusters in acoustic space.",
"Our agents are capable of communicating in abstract situations, namely some in which their contexts are different in the first place.",
"This generalizability suggests that the messages may be compo-sitional'.",
"We here probe for a candidate compositional structure to the latent space, by asking how the messages relate to the structure of the family of functions F .",
"First, the pioneering Mikolov et al., 2013 looks for compositionality at the level of word embeddings (WE) through addition, most classically asking whether WE (queen)= WE (king)-WE (man)+ WE (woman).",
"In the current Game, we can ask whether the messages are related as follows, for any dimensions i and j : M ( c, arg max i )= M ( c, arg max j )-M ( c, arg min j )+ M ( c, arg min i ).",
"For each such pair of object dimensions we calculate the right-hand side of the equation above for 100 contexts, feed it to Receiver, compare Receiver's output to the output that would have been obtained if M ( c, arg max i ) (the left-hand side) had been sent in the first place.",
"This leads to important degradation of average communicative success: a drop of at least 24 percentage points across parameter combinations, to around chance level.",
"Full results are in the left column of Table 4. Second, we note as others that the composition-as-addition assumption is disputable, both in general and in the original application case (Linzen, 2016; Chen et al., 2017).",
"To abstract away from this issue, we train a composition network' (an MLP with 2 hidden layers of 64 ReLU units) on the task of predicting M ( c, arg max i ) from M ( c, arg max j ), M ( c, arg min j ) and M ( c, arg min i ), therefore letting it discover any function for mixing values, and not involving addition a priori .",
"We leave out one dimension i 0 from training, and feed Receiver with the message predicted by the composition network' from M ( c, arg max j ), M ( c, arg min j ) and M ( c, arg min i 0 ).",
"If the language was compositional, this predicted message should behave like M ( c, arg max i 0 ), but we found that, as in the case of addition, the average communication accuracy for all taken-out parameters dropped dramatically (again, at least 24 percentage points drop).",
"Full results are in the right column of Table 4. 5.4 Categorical perception Above we essentially propose an analysis of discreteness both in production and perception.",
"This can lead to more psycholinguistic-like queries about these emergent languages.",
"For instance, one may ask whether classical Categorical Perception' (CP) effects obtain, whereby two messages at a short distance in the latent space may be discriminated easily if (and only if) they are on two sides of a categorical boundary for interpretation purposes Compositionality by Addition Composition Network Shared Non-shared Shared Non-shared Strict 10 objects 7 .",
"As an initial foray, we can investigate the sharpness of the boundaries of our discrete messages (i.e. distribution in latent space).",
"For representation purposes, we sample pairs of messages, call them M 1 and M +1 generated by Sender for two choice functions F 1 and F +1 .",
"We explore a continuous spectrum of messages in the dimension connecting these two messages ( M t = (1 t ) M 1 +(1+ t ) M +1 2 , continuously shifting from M 1 to M +1 as the continuous variable t moves from 1 to +1 ).",
"The messages M t are fed to Receiver together with contexts C (cid:48) , and for each function F 1 and F +1 in turn, we calculate object recovery accuracy.",
"This is plotted in Figure 3 for an Extremity Game model trained in a strict, non-shared context setting with object size 5. The model shows that clusters have relatively sharp boundaries, especially in the direction of a message belonging to another cluster (the area where x is between 1 and +1 in Fig. 3).",
"We can thus identify a boundary around a cluster, and its width, providing the necessary setup to investigate CP effects: whether pairs of messages crossing such a boundary behave differently (e.g., are easier to discriminate) than a pair of equally distant messages both on one side of this boundary.",
"We propose a general signaling game framework in which fewer a priori assumptions are imposed on the conversational situations.",
"We use both production and perception analyses, and find that under appropriate conditions, which are met by most studies involving neural signaling games, messages become discrete without the analyst having to force this property into the language (and having to deal with non-differentiability issues).",
"We find no evidence of compositional structure using vector analogies and a generalization thereof but do find sharp boundaries between the discrete message clusters.",
"Future work will explore other measures and alternative game settings for the emergence of compositionality, as well as more subtle psychological effects (Categeorical Perception) of continuous biological systems exhibiting discrete structure, like the auditory system.",
"We acknowledge the funding support from ANR-17-EURE-0017, and greatly thank Marco Baroni, Diane Bouchacourt, Rahma Chaabouni, Emmanuel Dupoux, Roni Katzir, Philippe Schlenker, Benjamin Spector, Jakub Szymanik, and three ACL reviewers."
] | [
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"other"
] |
[
"Abstract Domain divergence plays a significant role in estimating the performance of a model in new domains.",
"While there is a significant literature on divergence measures, researchers find it hard to choose an appropriate divergence for a given NLP application.",
"We address this shortcoming by both surveying the literature and through an empirical study.",
"We develop a taxonomy of divergence measures consisting of three classes Information-theoretic, Geometric, and Higher-order measures and identify the relationships between them.",
"Further, to understand the common use-cases of these measures, we recognise three novel applications 1) Data Selection, 2) Learning Representation, and 3) Decisions in the Wild and use it to organise our literature.",
"From this, we identify that Information-theoretic measures are prevalent for 1) and 3), and Higher-order measures are more common for 2).",
"To further help researchers choose appropriate measures to predict drop in performance an important aspect of Decisions in the Wild, we perform correlation analysis spanning 130 domain adaptation scenarios, 3 varied NLP tasks and 12 divergence measures identified from our survey.",
"To calculate these divergences, we consider the current contextual word representations (CWR) and contrast with the older distributed representations.",
"We find that traditional measures over word distributions still serve as strong baselines, while higher-order measures with CWR are effective.",
"Standard machine learning models do not perform well when tested on data from a different target domain.",
"The performance in a target domain largely depends on the domain divergence (Ben-David et al., 2010) a notion of distance between the two domains.",
"Thus, efficiently measuring and reducing divergence is crucial for adapting models to the new domain the topic of domain adaptation .",
"Divergence also has practical applications in predicting the performance drop of a model when adapted to new domains (Van Asch and Daelemans, 2010), and in choosing among alternate models (Xia et al., 2020).",
"Given its importance, researchers have invested much effort to define and measure domain divergence.",
"Linguists use register variation to capture varieties in text the difference between distributions of the prevalent features in two registers (Biber and Conrad, 2009).",
"Other measures include probabilistic measures like H -divergence (Ben-David et al., 2010), information theoretic measures like Jenssen-Shannon and Kullback-Leibler divergence (Plank and van Noord, 2011; Van Asch and Daelemans, 2010) and measures using higher-order moments of random variables like Maximum Mean Discrepancy (MMD) and Central Moment Discrepancy (CMD) (Gretton et al., 2007; Zellinger et al., 2017).",
"The proliferation of divergence measures challenges researchers in choosing an appropriate measure for a given application.",
"To help guide best practices, we first comprehensively review the NLP literature on domain divergences.",
"Unlike previous surveys, which focus on domain adaptation for specific tasks such as machine translation (Chu and Wang, 2018) and statistical (non-neural network) models (Jiang, 2007; Mar-golis, 2011), our work takes a different perspective.",
"We study domain adaptation through the vehicle of domain divergence measures .",
"First, we develop a taxonomy of divergence measures consisting of three groups: Information-Theoretic, Geometric, and Higher-Order measures.",
"Further, to find the most common group used in NLP, we recognise three novel application areas of these divergences Data Selection, Learning Representations, and Decisions in the Wild and organise the literature under them.",
"We find that Information-Theoretic measures over word distributions are popular for Data Selection and Decisions in the wild, while Higher-order measures over continuous features are frequent for Learning representations.",
"Domain divergence is a major predictor of performance in the target domain.",
"A better domain divergence metric ideally predicts the corresponding performance drop of a model when applied to a target domain a practical and important component of Decisions in the Wild .",
"We further help researchers identify appropriate measures for predicting performance drops, through a correlation analysis over 130 domain adaptation scenarios and three standard NLP tasks: Part of Speech Tagging (POS), Named Entity Recognition (NER), and Sentiment Analysis and 12 divergence metrics from our literature review.",
"While information-theoretic measures over traditional word distributions are popular in the literature, are higher-order measures calculated over modern contextual word representations better indicators of performance drop?",
"We indeed find that higher-order measures are superior, but traditional measures are still reliable indicators of performance drop.",
"The closest to our work is (Elsahar and Gall, 2019) who perform a correlation analysis.",
"However, they do not compare against different divergence measures from the literature.",
"Comparatively, we consider more tasks and divergence measures.",
"We review the literature from the perspective of domain divergences and their use-cases in NLP.",
"We aid researchers to select appropriate divergence measure that indicate performance-drops, an important application of divergence measures.",
"We devise a taxonomy for domain divergence measures, shown in Figure 1.",
"It contains three main classes.",
"Individual measures belong to a single class, where relationships can exist between measures from different classes.",
"We provide detailed description of individual measures in Appendix A. Geometric measures calculate the distance between two vectors in a metric space.",
"As a divergence measure, they calculate the distance between features ( tf.idf , continuous representations, etc.) extracted from instances of different domains.",
"The P-norm is a generic form of the distance between two vectors, where Manhattan (p=1) and Euclidean distance (p=2) are common.",
"Cosine (Cos) uses the cosine of the angle between two vectors to measure similarity and 1-Cos measures distance.",
"Geometric measures are easy to calculate, but are ineffective in a high dimensional space as all distances appear the same (Aggarwal et al., 2001).",
"Information-theoretic measures captures the distance between probability distributions.",
"For example, cross entropy over n-gram word distributions are extensively used in domain adaptation for machine translation.",
"f -divergence (Csiszr, 1972) is a general family of divergences where f is a convex function.",
"Different formulations of the f function lead to KL and JS divergence.",
"Chen and Cardie (2018) show that reducing f -divergence measure is equivalent to reducing the PAD measures (see next section).",
"Another special case of f -divergence is the family of divergences, where KL-Div is a special case of divergence.",
"Renyi Divergence is a member of the -divergences and tends towards KL-Div as 1 (Edge A (cid:13) ); Often applied to optimal transport problems, Wasserstein distance measures the amount of work needed to convert one probability distribution to the other as distance and is used extensively for domain adaptation.",
"KL-Div is also related to Cross Entropy (CE).",
"In this paper, CE refers to measures based on entropy.",
"Higher-Order measures consider matching higher order moments of random variables or divergence in a projected space.",
"Their properties are amenable to end-to-end learning based domain adaptation and recently have been extensively adopted.",
"Maximum Mean Discrepancy (MMD) is one such measure which considers matching first order moments of variables in a Reproducible Kernel Hilbert Space.",
"On the other hand, CORAL (Sun et al., 2017) considers second order moments and CMD (Zellinger et al., 2017) considers higher order moments.",
"CORAL and CMD are desirable because they avoid computationally expensive kernel matrix computations.",
"KL-Div can also be considered as matching the first-order moment (Zellinger et al., 2017); Edge B (cid:13) .",
"Proxy-A-Distance (PAD) measures the distance between source and target distributions via the error of a classifier in target domain samples as source domain samples (Ben-David et al., 2007).",
"A few other measures do not have ample support in the literature.",
"These include information-theoretic measures such as Bhattacharya coeffi-cient, higher-order measures like PAD* (Elsahar and Gall, 2019), Word Vector Variance (WVV), and Term Vocabulary Overlap (TVO) (Dai et al., 2019).",
"Our taxonomy synthesises the diversity and Figure 1: Taxonomy for divergence measures.",
"the prevalence of the divergence measures in NLP.",
"Our key observation of the literature is that there are three primary families of applications of divergences (cf. Table 1 in the appendix): ( i ) Data Selection : selects a subset of text from a source domain that shares similar characteristics as target domain.",
"The selected subset is then used to learn a target domain model.",
"( ii )",
"Learning Representations : aligns source and target domain distributions and learn domain-invariant representations.",
"( iii )",
"Decisions in the Wild : helps practitioners predict the performance or drops in performance of a model in a new target domain.",
"We limit the scope our survey to works that focus on divergence measures.",
"We only consider unsupervised domain adaptation (UDA) where there is no annotated data available in the target domain.",
"It is more practical yet more challenging.",
"For a complete treatment of neural networks and UDA in NLP, refer to (Ramponi and Plank, 2020).",
"Also, we do not treat multilingual work.",
"While cross-lingual transfer can be regarded as an extreme form of domain adaptation, measuring the distance between languages requires different divergence measures, outside our purview.",
"Divergence measures are used to select a subset of text from the source domain that shares similar characteristics to the target domain.",
"Since the source domain has labelled data, the selected data serves as supervised data to train models in the target domain.",
"We note that the literature pays closer attention to data selection for machine translation compared to other tasks.",
"This can be attributed to its popularity in real-world applications and the difficulty of obtaining parallel sentences for every pair of language.",
"Simple word-level and surface-level text features like word and n-gram frequency distributions and tf.idf weighted distributions have sufficient power to distinguish between text varieties and help in data selection.",
"Geometric measures like cosine, used with word frequency distributions, are effective for selecting data in parsing and POS tagging (Plank and van Noord, 2011).",
"Instead of considering distributions as (sparse) vectors, one can get a better sense of the distance between distributions using information-theoretic measures.",
"Remus (2012) find JS-Div effective for sentiment analysis.",
"While word-level features are useful to select supervised data for an end-task, they also can be used to select data to pre-train language-models subsequently used for NER.",
"Dai et al. (2019) use Term Vocabulary Overlap for selecting data for pretraining language models.",
"Geometric and Information-theoretic measures with word level distributions are inexpensive to calculate.",
"However, the distributions are sparse and continuous word distributions help in learning denser representations.",
"Continuous or distributed representations of words, such as CBOW, Skip-gram (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), address shortcomings of representing text as sparse, frequency-based probability distributions by transforming them into dense vectors learned from freeform text.",
"A geometric measure (e.g., Word Vector Variance used with static word embeddings) is useful to select pre-training data for NER (Dai et al., 2019).",
"Such selected data is found to be similar in tenor (the participants in a discourse, the relationships between them, etc.) to the source data.",
"But static embeddings do not change according to the context of use.",
"In contrast, contextual word representations (CWR) mostly derived from neural networks (Devlin et al., 2019; Peters et al., 2018) Paper Task(s) Information-Theoretic Geometric Higher-Order Others KL JS Renyi CE Wass.",
"capture contextual similarities between words in two domains.",
"That is, the same word used in two domains in different contexts will have different embeddings.",
"CWRs can be obtained from hidden representations of pretrained neural machine translation (NMT) models.",
"(McCann et al., 2017) have found such representations along with P-norm effective for data selection in MT (Wang et al., 2017).",
"Compared to representations from shallow NMT models, hidden representations of deep neural network language models (LM) like BERT have further improved data selection for NMT (Aharoni and Goldberg, 2020).",
"Divergences can be measured by comparing the probabilities of a language model, in contrast to directly using its hidden representations.",
"If a LM trained on the target domain assigns high probability to a sentence from the source domain, then the sentence should have similar characteristics to the target domain.",
"Cross Entropy (CE) between probability distributions from LMs capture this notion of similarity between two domains.",
"They have been extensively used for data selection in statistical machine translation (SMT) (Yasuda et al., 2008; Moore and Lewis, 2010; Axelrod et al., 2011; Duh et al., 2013; Liu et al., 2014).",
"However, CE based methods for data selection are less effective for neural machine translation (van der Wees et al., 2017; Silva et al., 2018).",
"Instead, van der Wees et al. (2017) come up with a dynamic subset selection where new subset is chosen every epoch during training.",
"We note again the common refrain that sufficient amount of data should be available; here, to train good language models in the target domain.",
"Similar to language models, probabilistic scores from classifiers which distinguish between samples from two domains can aid data selection.",
"The probabilities assigned by such classifiers in construing source domain text as target domain has been used as a divergence measures in machine translation (Chen and Huang, 2016).",
"However, the classifiers require supervised target domain data which is not always available.",
"As an alternative, Chen et al. (2017) train a classifier and selector in an alternating optimisation manner.",
"From this literature review, we find that distinct measures are effective for different NLP tasks.",
"Ruder and Plank (2017) argue that owing to their varying task characteristics, different measures should apply.",
"They show that learning a linear combination of measures is useful for NER, parsing and sentiment analysis.",
"However, this is not always possible, especially in unsupervised domain adaptation where there is no supervised data in target domain.",
"We observe that information theoretic measures and geometric measures based on frequency distributions and continuous representations are common for text and structured prediction tasks (cf. Table 1 in the appendix).",
"The effectiveness of higher order measures for these tasks are yet to be ascertained.",
"Further, we find that for SMT data selection, variants of Cross Entropy (CE) measures are used extensively.",
"However, the conclusions of van der Wees et al. (2017) are more measured regarding the benefits of CE and related measures for NMT.",
"Contextual word representations with cosine similarity has found some initial exploration for neural machine translation (NMT), with higher order measures yet to be explored for data selection in NMT.",
"One way to achieve domain adaptation is to learn representations that are domain-invariant which are sufficiently powerful to perform well on an end task (Ganin et al., 2015; Ganin and Lempitsky, 2015).",
"The theory of domain divergence (Ben-David et al., 2010) shows that the target domain error is bounded by the source domain error and domain divergence ( H -divergence) and reducing the domain divergence results in domain-invariant representation.",
"The theory also proposes a practical alternative to measure H -divergence called PAD.",
"The idea is to learn a representations that confuses a domain discriminator sufficiently to make samples from two domains indistinguishable.",
"Ganin et al. (2015) operationalise PAD in a neural network named Domain Adversarial Neural Networks (DANN).",
"The network employs a minmax game between the representation learner and the domain discriminator inspired by Generative Adversarial Networks (Goodfellow et al., 2014).",
"The representation learner is not only trained to minimise a task loss on source domain, but also maximise a discriminator's loss, by reversing the gradients calculated for the discriminator.",
"Note that this does not require any supervised data for target domain.",
"In later work, Bousmalis et al. (2016) argue that domain-specific peculiarities are lost in a DANN, and propose Domain Separation Networks (DSN) to address this shortcoming.",
"In DSN, both domain-specific and -invariant representations are captured in a sharedprivate network.",
"DSN is flex-ible in its choice of divergence measures and they find PAD performs better than MMD.",
"Here, we limit our review to works utilising divergence measures.",
"We exclude feature-based UDA methods such as Structural Corresponding Learning (SCL) (Blitzer et al., 2006), Autoencoder-SCL and pivot based language models (Ziser and Reichart, 2017, 2018, 2019; Ben-David et al., 2020).",
"Obtaining domain invariant representations is desirable for many different NLP tasks, especially for sequence labelling where annotating large amounts of data is hard.",
"They are typically used when there is a single source domain and a single target domain for sentiment analysis (Ganin et al., 2016), NER (Zhou et al., 2019), stance detection (Xu et al., 2019), machine translation (Britz et al., 2017; Zeng et al., 2018).",
"The application of DANN and DSN to a variety of tasks are testament of their generality.",
"DANN and DSN are applied in other innovative situations.",
"Text from two different periods of time can be considered as two different domains for intent classification (Kim et al., 2017).",
"Gui et al. (2017) consider clean formal newswire data as source domain and noisy, colloquial, unlabeled Twitter data as the target domain and use adversarial learning to learn robust representations for POS.",
"Commonsense knowledge graphs can help in learning domain-invariant representations as well.",
"Ghosal et al. (2020) condition DANN with an external commonsense knowledge graph using graph convolutional neural networks for sentiment analysis.",
"In contrast, Wang et al. (2018) use MMD outside the adversarial learning framework.",
"They use MMD to learn to reduce the discrepancy between neural network representations belonging to two domains.",
"Such concepts have been explored in computer vision (Tzeng et al., 2014).",
"While single source and target domains are common, complementary information available in multiple domains can help to improve performance in a target domain.",
"This is especially helpful when there is no large-scale labelled data in any one domain, but where smaller amounts are available in several domains.",
"DANN and DSN have been extended to such multi-source domain adaptation: for intent classification (Ding et al., 2019), sentiment analysis (Chen and Cardie, 2018; Li et al., 2018; Guo et al., 2018; Wright and Augenstein, 2020) and machine translation (Gu et al., 2019; Wang et al., 2019).",
"DANN and DSN can also help in multitask learning which considers two complementary tasks (Caruana, 1997).",
"A key to multitask learning is to learn a shared representation that captures the common features of two tasks.",
"However, such representations might still contain task-specific information.",
"The shared-private model of DSN helps in disentangling such representations and has been used for sentiment analysis (Liu et al., 2017), Chinese NER and word segmentation (Cao et al., 2018).",
"Also, although beyond the scope of our discussion here, DANN and DSN have been used to learn language-agnostic representations for text classification and structured prediction in multilingual learning (Chen et al., 2018; Zou et al., 2018; Ya-sunaga et al., 2018).",
"Most works that adopt DANN and DSN framework reduce either the PAD or MMD divergence.",
"However, reducing the divergences, combined with other auxiliary task specific loss functions, can result in training instabilities and vanishing gradients when the domain discriminator becomes increasingly accurate (Shen et al., 2018).",
"Using other higher order measures can result in more stable learning.",
"In this vein, CMD has been used for sentiment analysis (Zellinger et al., 2017; Peng et al., 2018), and Wasserstein distance has been used for duplicate question detection (Shah et al., 2018) and to learn domain-invariant attention distributions for emotional regression (Zhu et al., 2019).",
"The review shows that most works extend the DSN framework to learn domain invariant representations in different scenarios (cf. Table 1, in the ap-pendix).",
"The original work from (Bousmalis et al., 2016) includes MMD divergence besides PAD, which is not adopted in subsequent works, possibly due to the reported poor performance.",
"Most works require careful balancing between multiple objective functions (Han and Eisenstein, 2019), which can affect the stability of training.",
"The stability of training can be improved by selecting appropriate divergence measures like CMD (Zellinger et al., 2017) and Wasserstein Distance (Arjovsky et al., 2017).",
"We believe additional future works will adopt such measures.",
"Models can perform poorly when they are deployed in the real world.",
"The performance degrades due to the difference in distribution between training and test data.",
"Such performance degradation can be alleviated by large-scale annotation in the new domain.",
"However, annotation is expensive, and given thousands of domains quickly becomes infeasible.",
"Predicting the performance in a new domain, where there is no labelled data, is thus important.",
"Much recent work provides theory (Rosenfeld et al., 2020; Chuang et al., 2020; Steinhardt and Liang, 2016).",
"As models are put into production in the real world, this application becomes practically important as well.",
"Empirically, NLP considers the divergence between the source and the target domain to predict performance drops.",
"machine learning model in new domains.",
"Information theoretic measures like Renyi-Div and KL-Div has been used for predicting performance drops in POS (Van Asch and Daelemans, 2010) and Cross-Entropy based measure has been used for dependency parsing (Ravi et al., 2008).",
"Prediction of performance can also be useful for machine translation where obtaining parallel data is hard.",
"Based on distance between languages, (Xia et al., 2020) predict performance of the model on new languages for MT, among other tasks.",
"Such performance prediction models have also been done in the past for SMT (Birch et al., 2008; Specia et al., 2013).",
"However, Ponomareva and Thelwall (2012) argue that predicting drops in performance is more appropriate compared to raw performance.",
"They find that JS-Div effective for predicting performance drop of Sentiment Analysis systems.",
"Only recently, predicting model failures in practical deployments from an empirical viewpoint has regained attention.",
"Elsahar and Gall (2019) find the efficacy of higher-order measures to predict the drop in performance for POS and SA and do not rely on hand crafted measures as in previous works.",
"However, analysing performance drops using CWR is still lacking.",
"We tackle this in the next section.",
"A practical use case of domain divergences is to predict the performance drop of a model applied to a new domain.",
"We ask how relevant are traditional measures over word distributions compared to higher-order measures like CMD and MMD over contextual word representations like BERT, Elmo, DistilBERT (Devlin et al., 2019; Peters et al., 2018; Sanh et al., 2019)?",
"We perform an empirical study to assess their suitability to predict performance drops for three important NLP tasks: POS, NER, and SA leaving machine translation to future work.",
"Performance difference between the source and the target domain depends on the divergence between their feature distributions (Ben-David et al., 2010).",
"We assume a co-variate shift, as in (Ganin et al., 2016), where the marginal distribution over features change, but the conditional label distributions does not i.e., PD s ( y | x ) = PDT ( y | x ) PD s ( x ) (cid:54) = PDT ( x ) .",
"Although difference in conditional label distribution can increase the H Divergence measure (Wisniewski and Yvon, 2019), it requires labels in the target domain for assessment.",
"In this work, we assume no labelled data in the target domain, to best mimic realistic settings.",
"Datasets: For POS, we select 5 different corpora from the English Word Tree Bank of Universal Dependency corpus (Nivre et al., 2016) 1 and also include the GUM, Lines, and ParTUT datasets.",
"We follow Elsahar and Gall (2019) and consider these as 8 domains.",
"For NER, we consider CONLL 2003 (Tjong Kim Sang and De Meulder, 2003), Emerging and Rare Entity Recognition Twitter (Derczynski et al., 2017) and all 6 categories in OntoNotes v5 (Hovy et al., 2006) 2 , resulting in 8 domains.",
"For SA, we follow Guo et al. (2020), selecting the same 5 categories 3 for experiments (Liu et al., 2017).",
"Divergence Measures: We consider 12 divergences.",
"For Cos, we follow the instance based calculation (Ruder et al., 2017).",
"For MMD, Wasserstein and CORAL, we randomly sample 1000 sentences and average the results over 3 runs.",
"For MMD, we experiment with different kernels ( cf. Appendix A) and use default values of from the GeomLoss package (Feydy et al., 2019).",
"For TVO, KL-div, JS-div, Renyi-div, based on word frequency distribution we remove stop-words and consider the top 10k frequent words across domains to build our vocabulary (Ruder et al., 2017; Guru-rangan et al., 2020).",
"We use =0.99 for Renyi as found effective by Plank and van Noord (2011).",
"We do not choose CE as it is mainly used in MT and ineffective for classification and structured prediction (Ruder et al., 2017).",
"Model Architecture: For all our experiments, unless otherwise mentioned, we use the pre-trained DistilBERT (Sanh et al., 2019) model.",
"It has competitive performance to BERT, but has faster inference times and lower resource requirements.",
"For every text segment, we obtain the activations from the final layer and average-pool the representations.",
"We train the models on the source domain training split and test the best model picked from validation set grid search on the test dataset of the same and other domains (cf. Appendix C).",
"For POS and NER, we follow the original BERT model where a linear layer is added and a prediction is made for every token.",
"If the token is split into 1 Yahoo! Answers, Email, NewsGroups, Reviews and We-blogs.",
"2 Broadcast News (BN), Broadcast Conversation (BC), Magazine (MZ), Telephone Conversation (TC) and Web (WB).",
"3 Apparel, Baby, Books, Camera and MR.",
"multiple tokens due to Byte Pair Encoding, the label for the first token is predicted.",
"For SA and domain discriminators, we pool the representation from the last layer of DistilBERT and add a linear layer for prediction (Appendix B).",
"For POS, the PAD measure has the best correlation with performance drop (cf. Table 2).",
"Information-theoretic measures over word frequency distributions, such as JS-div, KL-div, and TVO, which have been prevalent for data selection and performance drop use cases (cf. Table 1) are comparable to PAD.",
"Plank et al. (2014) claim that the errors in POS are dictated by out of vocabulary words.",
"Our findings validate their claim, as we find strong correlation between POS performance drop and word probability distribution measures For NER, MMD-RQ provides the best correlation of 0.495.",
"CORAL a higher-order measure and JS-div are comparable.",
"For SA, Renyi-div and other information-theoretic measures provide considerably better correlation compared to higher-order measures.",
"Cos is a widely-used measure across applications, however it did not provide significant correlation for either task.",
"TVO is used for selecting pretraining data for NER (Dai et al., 2019) and as a measure to gauge the benefits of fine-tuning pre-trained LMs on domain-specific data (Gururangan et al., 2020).",
"Although TVO does not capture the nuances of domain divergences, it has strong, reliable correlations for performance drops.",
"PAD has been suggested for data selection in SA by Ruder and Plank (2017) and for predicting drop in performance by Elsahar and Gall (2019).",
"Our analysis confirms that PAD provides good correlations across POS, NER, and SA.",
"We find no single measure to be superior across all tasks.",
"However, information theoretic measures consistently provide good correlations.",
"Currently, when contextual word representations dictate results in NLP, simple measures based on frequency distributions are strong baselines for predicting performance drop.",
"Although higher-order measures do not always provide the best correlation, they are differentiable, thus suited for end-to-end training of domain-invariant representations.",
"Why are some divergence measures better at predicting drops in performance?",
"The one-dataset-one-domain is a key assumption in such works.",
"However, many works have questioned this assumption (Plank and van Noord, 2011).",
"Multiple domains may exist within the same domain (Webber, 2009) and two different datasets may not necessarily be considered different domains (Irvine et al., 2013).",
"Recently Aharoni and Goldberg (2020) show that BERT representations reveal their underlying domains.",
"They qualitatively show that a few text segments from a dataset actually belong to another domain.",
"However the degree to which the samples belong to different domains is unclear.",
"We first test the assumption that different datasets are different domains using Silhouette scores (Rousseeuw, 1987) which quantify the separability of clusters.",
"We initially assume that a dataset is in its own domain.",
"A positive score shows that datasets can be considered as well-separated domains; a negative score shows that most of the points within a dataset can be assigned to a nearby domain; and 0 signifies overlapping domains.",
"We calculate Silhouette scores and t-SNE plots (Maaten and Hinton, 2008) for different divergence measures.",
"Refer to the plots (Figures 3a to 3c) and calculation details in Appendix D. Almost all the measures across different tasks have negative values close to 0 (Table 2, (r)).",
"For POS, CORAL, Wasserstein and Cos strongly indicate that text within a dataset belongs to other Measure Correlations Silhouette Coefficients POS NER SA POS NER SA Cos 0.018 0.223 -0.012 1 .",
"domains.",
"However, for MMD-Gaussian the domains overlap (Figure 2a).",
"For NER, MMD-Gaussian and MMD-Laplacian indicate that the clusters overlap while all other metrics have negative values.",
"For SA, JS-Div has positive values compared to other measures, and as seen in Figure 2c, we can see a better notion of distinct clusters.",
"The Silhouette scores along with the t-SNE plots show that datasets are, in fact, not distinct domains.",
"Considering data-driven methods for defining domains is needed (Aharoni and Goldberg, 2020).",
"If there are indeed separate domains, does it explain why some measures are better than the others?",
"We see better notions of clusters for NER and sentiment analysis ( cf. Figures 2b and 2c).",
"We can expect the drop in performance to be indicative of these domain separations.",
"Comparing the best correlations from Table 2, correlations for NER and sentiment analysis are higher compared with POS.",
"For POS, there are no indicative domain clusters and the correlation between domain divergence and performance may be less; whereas for SA, both the t-SNE plot and the Silhouette scores for JS-Div ( cf. Figure 2c) corroborate comparatively better separation.",
"If datasets are indeed different domains, these divergence measures are reliable indicators of performance drops.",
"If they are not, there might be other confounding factors (such as differences in label distribution) and one has to be cautious in using them.",
"Domain overlap also has consequences for data selection strategies.",
"For example, Moore and Lewis (2010) select pseudo in-domain data from source corpora (cf Section 3.1).",
"As the Silhouette coefficients are negative and close to 0, many data points in a dataset belong to nearby domains.",
"Data selection strategies thus may be effective.",
"If the Silhouette coefficients are more negative and if more points in the source aptly belong to the target domain, we should expect increased sampling from such source domains to yield additional performance benefits in the target domain.",
"We survey domain adaptation works, focusing on divergence measures and their usage for data selection , learning domain-invariant representations , and making decisions in the wild .",
"We synthesised the divergence measures into a taxonomy of information theoretic , geometric and higher-order measures.",
"While traditional measures are common for data selection and making decisions in the wild, higher-order measures are prevalent in learning representations.",
"Based on our correlation experiments, silhouette scores, and t-SNE plots, we make the following recommendations: PAD is a reliable indicator of performance drop.",
"It is best used when there are sufficient examples to train a domain discriminator.",
"JS-Div is symmetric and a formal metric.",
"It is related to PAD, easy to compute, and serves as a strong baseline.",
"While Cosine is popular, it is an unreliable indicator of performance drop.",
"One-dataset-is-not-one-domain.",
"Instead, cluster representations and define appropriate domains.",
"We would also like to acknowledge the support of the NExT research grant funds,supported by the National Research Foundation,Prime Ministers Of-fice, Singapore under its IRC@ SG Funding Initiative, and to gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce GTX Titan XGPU used in this research."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"method",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Abstract Intrinsic evaluations of OIE systems are carried out either manuallywith human evaluators judging the correctness of extractions or automatically, on standardized benchmarks.",
"The latter, while much more cost-effective, is less reliable, primarily because of the incompleteness of the existing OIE benchmarks: the ground truth extractions do not include all acceptable variants of the same fact, leading to unreliable assessment of the models' performance.",
"Moreover, the existing OIE benchmarks are available for English only.",
"In this work, we introduce BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese, and German.",
"In contrast to existing OIE benchmarks, BenchIE is fact-based , i.e., it takes into account informational equivalence of extractions: our gold standard consists of fact synsets , clusters in which we exhaustively list all acceptable surface forms of the same fact.",
"Moreover, having in mind common downstream applications for OIE, we make BenchIE multi-faceted ; i.e., we create benchmark variants that focus on different facets of OIE evaluation, e.g., compactness or minimality of extractions.",
"We benchmark several state-of-the-art OIE systems using BenchIE and demonstrate that these systems are significantly less effective than indicated by existing OIE benchmarks.",
"We make BenchIE (data and evaluation code) publicly available.",
"1 1 Introduction Open Information Extraction (OIE) is the task of extracting relations and their arguments from natural language text in a schema-free manner (Banko et al., 2007).",
"Consider the sentence \"Sen. Mitchell, who is from Maine, is a lawyer.\" ; an OIE system is expected to extract the triples (\"Sen. Mitchell\"; \"is from\"; \"Maine\") and (\"Sen. Mitchell\"; \"is\"; \"a lawyer\") from the sentence.",
"OIE systems are used 1 https://github.com/gkiril/benchie in many downstream tasks, including knowledge graph (KG) population (Gashteovski et al., 2020), open link prediction (Broscheit et al., 2020), and question answering (Yan et al., 2018).",
"These downstream tasks lend themselves as natural setups for extrinsic OIE evaluation (Mausam, 2016).",
"While valuable in concrete applications, such extrinsic evaluations do not measure the intrinsic correctness of the extracted facts: for that purpose, several benchmarks for intrinsic OIE evaluation have been proposed (Stanovsky and Dagan, 2016; Lechelle et al., 2019; Bhardwaj et al., 2019).",
"Automated benchmark evaluations are more feasible (i.e., faster and cheaper) than manual OIE evaluations (Hohenecker et al., 2020).",
"The current benchmarks, however, use scoring functions that are based on approximate (token-level) matching of system extractions against ground truth facts, which seems to be substantially less reliable than human judgments of extraction correctness (Zhan and Zhao, 2020).",
"This primarily stems from the incompleteness of existing OIE benchmarks: the gold standard extractions do not include all acceptable surface realizations of the same fact .",
"Consider, for example, a sentence from the recent evaluation framework CaRB (Bhardwaj et al., 2019): Sen. Mitchell is confident he has sufficient votes to block such a measure with procedural actions ; with the gold triple extraction ( Sen. Mitchell ; is confident he has ; sufficient votes to . . . procedural actions ).",
"Intuitively, a system extraction with a more concise object ( Sen. Mitchell ; is confident he has ; sufficient votes )could also be accepted, as it still captures the same core piece of knowledge, and would arguably be valuable in most downstream tasks.",
"To account for this, existing benchmarks credit system extractions for per-slot lexical overlap with gold extractions.",
"Such scoring is overly lenient and overestimates the systems' ability to extract correct knowledge facts .",
"Consider, e.g., a system 4472 extraction ( Sen. Mitchell ; is confident he has ; procedural actions ) for the above-mentioned sentence.",
"From the factual perspective, this extraction is clearly incorrect ( Sen. Mitchell has votes , not actions ).",
"However, the popular CaRB benchmark with its token-level metrics would judge the extraction as having (1) perfect precision, since all extracted tokens can be found in corresponding slots of a gold extraction and (2) high recall, as all of the gold subject and predicate tokens as well as two gold object tokens ( procedural and actions ) are found within corresponding slots of the system extraction (Table 1).",
"Moreover, by providing a single ground truth extraction per fact, existing OIE benchmarks fail to acknowledge that different downstream applications focus on different facets (i.e., aspects) of OIE extractions: e.g., for text summarization, one may prefer minimal extractions (Ponza et al., 2018), whereas knowledge base population benefits from strict correctness of entities in subject and object slots (Lin et al., 2020).",
"In this work, we depart from lenient OIE evaluations based on per-slot token overlaps and propose BenchIE, a novel fact-centric and multi-faceted OIE evaluation framework and benchmark at the core of which is the following question: Does the system extraction express the same fact (i.e., the same unit of knowledge) as any of the ground truth extractions (and vice versa)",
"w.r.t. the specific aspect of the OIE extraction that is of interest for one or more downstream applications?",
"Contributions.",
"BenchIE advances the state of the art in OIE evaluation in the following: (1) it is the first fact-centered approach to OIE evaluation: to reliably answer the above question, we exhaustively list all correct extractions of the same fact.",
"In contrast to existing benchmarks, BenchIE specifies complete sets of fact-equivalent extractions (dubbed fact synsets ), allowing us to avoid error-prone evaluation based on token overlap measures; (2) BenchIE is the first multi-faceted OIE benchmark, allowing to test systems for different aspects of OIE extractions that may be relevant in concrete downstream applications; (3) BenchIE is a multilingual benchmark, covering English, Chinese, and German, and to the best of our knowledge the first with manually annotated (i.e., gold standard) extractions in all languages; 2 (4) finally, as a 2 Ro et al. (2020) introduce a multilingual version of the CaRB dataset by machine translating both sentences and extractions.",
"fact-based and multi-faceted benchmark, BenchIE allows us to perform what we believe to be the most comprehensive profiling and comparative evaluation of OIE systems.",
"BenchIE portrays fact extraction abilities of six state-of-the-art OIE models much less favorably and points to their limitations that cannot be detected with existing benchmarks.",
"Most OIE systems extract (subject, predicate, object) triples, with concepts as subjects and objects and verb phrases (VPs) as predicates (Banko et al., 2007; Stanovsky et al., 2018; Lauscher et al., 2019; Gashteovski et al., 2017, 2019), though systems producing n-ary (Akbik and Lser, 2012), nested (Bhutani et al., 2016), and noun-mediated extractions (Yahya et al., 2014) also exist.",
"Here we follow the most common practice and focus on VP-mediated facts.",
"Our novel fact-based benchmark and evaluation paradigm can, however, equally be applied to other types of extractions (e.g., Friedrich et al. (2022) used this fact-based concept for OIE to create gold annotations for NE-Centric OIE triples ; i.e., triples where each argument is a named entity and the relations could be either verb phrases or noun phrases).",
"We introduce the general concept of a fact synset : a set of all possible extractions (i.e., different surface forms) for a given fact type (e.g., VP-mediated facts) that are instances of the same fact.",
"E.g., given the input sentence from Table 2, the extractions ( Sen. Mitchell ; has sufficient votes to block ; such a measure ) and ( Sen. Mitchell ; has sufficient votes to block ; measure ) capture the same fact and thus belong to the same fact synset.",
"Existing benchmarks fail to exhaustively list all acceptable extractions for the same fact.",
"This is precisely why, in order to avoid penalizing systems for correct extractions that are not exactly the same as the gold triples, they resort to lenient token-based performance measures prone to two types of errors: (1) they punish correct fact extractions that have limited lexical overlap with the gold extraction of the same fact, e.g., ( Sen. Mitchell ; is confident he has ; sufficient votes) vs. ( Sen. Mitchell ; is confident he has ; sufficient votes to . . . procedural actions ) unreliable for OIE as shown by Kotnis et al. (2022), up to 70% of sentence or extraction translations obtained this way were incorrect.",
"and (2) they reward incorrect extractions that have high lexical overlap with a gold extraction, e.g., ( Sen. Mitchell ; is confident he has; procedural actions ) vs. (Sen. Mitchell; is confident he has; sufficient votes to block. . . with procedural actions) .",
"To prevent this, BenchIE relies on exact matching of system extractions against the gold fact synsets.",
"Further, some OIE systems (over)generate extractions of the same fact; e.g., (Sen. Mitchell; has sufficient votes to block; such a measure) and (\"Sen. Mitchell\"; \"has sufficient votes to block\"; \"measure\") .",
"Existing evaluation procedures do not acknowledge the fact equivalence of extractions and consequently reward OIE systems for multiply extracting the same fact.",
"Our evaluation based on fact synsets directly remedies these shortcomings of existing OIE benchmarks.",
"English Benchmark.",
"To make BenchIE comparable to previous benchmarks, we annotate fact synsets on a subset of sentences from CaRB (Bhard-waj et al., 2019).",
"Because exhaustive annotation of fact synsets is time consuming, we carried it on 300 (out of 1,200) randomly sampled CaRB sentences.",
"To collect truly exhaustive fact synsets, two expert annotators independently labeled the selected 300 sentences in three rounds.",
"(1) Each annotator first (independently) manually denoted every extraction in which a VP-predicate connects two concepts.",
"The annotator then grouped the fact-equivalent triples into fact synsets.",
"3 To speed the annotation process up, we developed a dedicated web-based annotation tool AnnIE that facilitates the extraction of VP-mediated triples (e.g., we color-code verbs to indicate possible predicate heads) and their clustering into fact synsets; 4 (2) The annotators then carefully examined all gold extractions from the original CaRB dataset and added those judged to be correct, yet missing from the manually labeled fact synsets from the previous step; (3) Finally, each annotator compared the extractions of all OIE systems in evaluation (see 4) against the BenchIE's fact synsets (i.e., the result of the first two steps).",
"Any system extraction not found in BenchIE was carefully examined andif judged to be correct added to the appropriate fact synset.",
"5 Finally, the 3 We provide the annotation guidelines in Appendix A.1.",
"details about the tool, see Friedrich et al. (2022).",
"5 Very few extractions were actually added in steps (2) and (3); i.e., there were very few correct extractions (from CaRB gold standard and output of OIE systems) that the annotators missed during manual annotation of fact synsets.",
"two annotators merged their independently created annotations by discussing and jointly resolving the disagreements.",
"The overall annotation effort for the English dataset amounted to 80 hours per annotator.",
"English BenchIE contains 136,357 unique gold extractions, grouped into 1,350 fact synsets.",
"For comparison, CaRB (Bhardwaj et al., 2019) lists mere 783 gold triples for the same 300 sentences.",
"Table 2 shows fact synsets for an example sentence.",
"Inter-Annotator Agreement (IAA).",
"To validate BenchIE's annotations, we measure the inter-annotator agreement (IAA) between our two expert annotators.",
"To this end, we quantify the agreement via recall at the fact level (see 2.3 for further details): for each annotator, we compute their fact-level recall as the percentage of fact synsets of the other annotator they cover with their extractions.",
"6 We average the fact-level recalls of the two annotators as the IAA score.",
"We observed a high IAA score of 0 .",
"79 .",
"Upon manual inspection, we found that the annotators mostly agree on fact-synset level; most of the the disagreements are on extractions level (particularly, from marking the optional tokens within an extraction; see Appendix A.1.3 for details about the optional tokens).",
"Chinese and German Benchmarks.",
"Two bilingual expert annotators native in the target language and fluent in English ( EN ) translated the original 300 English sentences to Chinese ( ZH ) and German ( DE ), respectively.",
"Then, to collect exhaustive fact synsets in ZH and DE , they followed the same three annotation rounds described for 2.2.",
"Due to substantial (primarily syntactic) differences compared to EN , we adjusted the annotation guidelines for these languages (see the Appendix A.2 and A.3 for more details).",
"The statistics (number of fact synsets and extractions) of the ZH and DE benchmarks are given in Table 3.",
"Compared to EN BenchIE, the ZH benchmark contains significantly fewer fact synsets (994 compared to 1,350) and more than two orders of magnitude fewer extractions.",
"The drastically smaller number of extractions is primarily due to the lack of determiners and articles in Chinese.",
"Their frequent occurrence in English combined with their neutrality",
"w.r.t. extractions' correctness results in many mutually different yet fact-equivalent extractions.",
"The numbers for German are, expectedly, much closer to those for English.",
"We assume that BenchIE is (1) complete , i.e., that it contains",
"(a) all VP-mediated facts expressed in input sentences and",
"(b) for each fact, its every acceptable extraction as well; and (2) sound , i.e., that it does not contain any incorrect extraction that would capture a fact not stated in the sentence.",
"Such a complete OIE gold standard enables not only a more reliable evaluation of OIE systems by means of exact matching, but also an evaluation at the more meaningful level of knowledge facts, rather than at the level of individual triples.",
"Concretely, we consider a system extraction to be correct if and only if it exactly matches some gold extraction from some fact synset.",
"The number of true positives (TPs) is the number of fact synsets (i.e., different facts) covered by (at least one of the) system extractions.",
"This way, a system that extracts N different triples of the same fact, will be rewarded only once for the correct extraction of the fact.",
"BenchIE's false negatives (FNs) are then, intuitively, fact synsets not covered by any of the system extractions.",
"Finally, each system extraction that does not exactly match any gold triple (from any synset) counts as a false positive (FP).",
"We then compute Precision , Recall , and F 1 score (as the final score) from TP, FP, and FN in standard fashion.",
"Different downstream applications care about different aspects of OIE extractions.",
"For IE-based text summarization and simplification (Ponza et al., 2018; tajner and Glava, 2017), e.g., triples should be minimal overall, across all slots (i.e., without unnecessary tokens), but the exact token placement across the slots (e.g., if a preposition is in the predicate or object) does not matter.",
"For entity linking and knowledge base population (Lin et al., 2020), in contrast, the token placement between slots is critical: a token that is not part of an entity, should not be placed into subject or object.",
"Acknowledging this, we create three additional variants of the English BenchIE, referred to as facets , each 4475 Input sentence: \"Sen. Mitchell is confident he has sufficient votes to block such a measure with procedural actions.\"",
"BenchIE-E (\"Sen. Mitchell\" | \"he\"; \"is confident he has ... [such] [a] measure with\"; \"procedural actions\") BenchIE-C \"(Sen. Mitchell | he) is confident he has sufficient votes to block [such] [a] measure with procedural actions\" BenchIE-M (\"Sen. Mitchell\" | \"he\"; \"is confident he has sufficient votes to block measure with\"; \"procedural actions\") (\"Sen. Mitchell\" | \"he\"; \"is confident he has sufficient votes to block measure\"; \"with procedural actions\") Table 4: Illustration of BenchIE's facets for one fact synset ( f 4 from Table 2): all acceptable surface realizations under each facet are shown.",
"corresponding to one aspect that is relevant in common OIE applications.",
"This effort addresses recent calls for multi-dimensional analysis of NLP systems (Ethayarajh and Jurafsky, 2020; Narayan et al., 2021) and is well-aligned with recent efforts that create multi-faceted benchmarks for other NLP tasks (Liu et al., 2021; Vth et al., 2021) and datasets (Xiao et al., 2022).",
"The default, general-purpose BenchIE facet from the previous section was designed to be somewhat tolerant to token distribution accross slots (see Appendix A.1.2 for details): some tokens may be placed in either the predicate or object (e.g., the preposition with in the synset f 4 in Table 2).",
"This enables a more flexible comparison of OIE systems that are designed for different purposes (i.e., systems that produce slightly different token placements are not punished) and is in line with prior work on intrinsic OIE evaluation, both automatic (Stanovsky and Dagan, 2016; Bhardwaj et al., 2019) and manual (Fader et al., 2011; Del Corro and Gemulla, 2013; Gashteovski et al., 2017).",
"Such extraction flexibility, however, may not be desirable in tasks like automated KG construction (Wolfe et al., 2017; Jiang et al., 2019) or entity linking (Lin et al., 2020, 2021).",
"Angeli et al. (2015) show empirically that extractions with wholesome entities and without additional tokens yield benefits in KG construction.",
"Since OIE is predominantly used for KG-related tasks (Weikum et al., 2020), it is paramount to have an evaluation facet that imposes strict(er) token boundaries on entity slots subjects and objects.",
"We thus create the entity facet of the benchmark (BenchIE-E) with this additional constraint of wholesomeness of subject and object concepts.",
"BenchIE-E was constructed by one of our annotators (see 2.2) by removing from EN BenchIE's fact synsets the extractions in which subject and/or object was not a wholesome concept (see Table 4).",
"The default BenchIE facet (2) compares OIE extractions against gold triples from fact synsets at the slot level: to be judged correct, an extraction must exactly match some gold triple in all slots.",
"This criterion, however, is overly strict if extractions are to be used in applications like summarization or simplification (Ponza et al., 2018; tajner and Glava, 2017), which commonly concatenate the content of the slots.",
"In this case, it does not matter if a sequence of tokens occurs at the end of the subject or beginning of the predicate (analogously for predicate and object).",
"To reflect this, we introduce the concatenation facet , BenchIE-C: for each gold BenchIE triple, we create the gold BenchIE-C utterance by simply concatenating the content of the triple's slots (see Table 4).",
"Our third additional evaluation facet addresses the aspect of minimality of OIE extractions (Gash-teovski et al., 2017).",
"More compact extractions can benefit both text generation (Ponza et al., 2018; tajner and Glava, 2017) and KG-related tasks (Lin et al., 2020, 2021).",
"If two triples t 1 and t 2 capture the same fact (i.e., are in the same fact synset), t 1 is considered more compact than t 2 if tokens of each t 1 slot make a (non-strict) subsequence of tokens in the corresponding t 2 slot (Gashteovski, 2020).",
"7 To allow for evaluation of minimality, BenchIE-M triples contain only the non-optional tokens (denoted in square brackets in Table 2) from the corresponding BenchIE triple.",
"Consequently, BenchIE-M fact synsets on average contain many fewer extractions than the original BenchIE synsets.",
"8 4 Fact-Level Evaluation We first compare BenchIE's fact-level evaluation (i.e., default facet, 2) against CaRB's token-level 7 At least one t 1 slot has to be a strict subsequence of the respective t 2 slot; t 1 and t 2 would be the same otherwise.",
"scoring (Bhardwaj et al., 2019).",
"9 Our quantitative results confirm our intuitions and observations (see Table 1): CaRB systematically and substantially overestimates OIE systems' performance.",
"BenchIE, we argue, portrays the fact extraction abilities of OIE systems more realistically.",
"OIE Systems.",
"We tested six widely used OIE systems that extract VP-mediated facts for EN , namely: ClausIE (Del Corro and Gemulla, 2013), Stanford OIE (Angeli et al., 2015), MinIE (Gash-teovski et al., 2017), ROIE (Stanovsky et al., 2018), OpenIE 6 (Kolluru et al., 2020) and M 2 OIE (Ro et al., 2020).",
"We additionally implemented the following naive baseline (Naive OIE): each verb (detected using spaCy's POS-tagger (Honnibal and Montani, 2017)) becomes the predicate, its entire preceding sentence context becomes the subject and succeeding context the object.",
"For ZH and DE , we evaluated a supervised M 2 OIE (Ro et al., 2020) model based on the multilingual BERT (Devlin et al., 2019), trained on a large EN dataset (Zhan and Zhao, 2020) and transferred (zero-shot) to target languages by means of its multilingual encoder.",
"Implicit and N-ary Extractions.",
"Some OIE systems produce implicit extractions containing tokens that do not occur in the sentence.",
"10 As BenchIE does not contain implicit annotations, we remove such extractions from the OIE systems' output, to avoid penalizing OIE systems for extracting fact types not covered by the benchmark.",
"To make CaRB directly comparable, we automati-9 CaRB is an improved version of the widely-adopted OIE2016 benchmark (Stanovsky and Dagan, 2016); our findings for CaRB are thus likely to hold for OIE2016 as well.",
"10 E.g., the triple (\"Biden\"; \"be\"; \"President\") extracted from the phrase \"President Biden ...\" cally remove all its implicit extractions too.",
"ROIE and M 2 OIE produce N-ary extractions (i.e., more than three slots), whereas BenchIE contains only triples.",
"We follow standard practice (Del Corro and Gemulla, 2013) and convert those extractions into triples by concatenating the third and subsequent slots into a single object.",
"Table 5 summarizes results of OIE systems on BenchIE and CaRB.",
"Across the board, BenchIE's fact-level precision and recall are significantly lower than CaRB's respective precision and recall computed on token level.",
"On average, CaRB scores the OIE systems higher than BenchIE by 14 percentage points for precision, 38 percentage points for recall and 26 percentage points for the F 1 score.",
"Precision.",
"System's precision on BenchIE is lower (albeit not so drastically lower as recall) than on CaRB because BenchIE, as a complete benchmark, punishes incorrect facts , i.e., extractions that cannot be found in BenchIE's fact synsets.",
"CaRB, on the other hand, rewards any token overlap that the incorrectly extracted fact has against its gold triple(s) in many cases such overlap is substantial and CaRB consequently rewards the incorrect fact with high precision.",
"Consider, for example, the sentence from Table 1 and an incorrect fact extraction ( Sen. Mitchell ; is confident he has ; sufficient actions ); on BenchIE, this extraction is a false positive because it does not exist in any of the four fact synsets it lists for the sentence.",
"CaRB, in contrast, rewards the extraction with perfect precision because all its tokens are accounted for in the corresponding slots of its gold triple ( Sen. Mitchell ; is confident he has\" ; \"sufficient votes to . . . actions ).",
"In an attempt to quantify how much CaRB overestimates fact-level precision with its token overlap metric, we evaluated our Naive OIE baseline on both CaRB and BenchIE.",
"While BenchIE reflects the poor quality of naive extractions with the near-zero performance, CaRB estimates its precision to be non-negligible (0.24) and even higher than that of the Stanford's OIE system (0.17).",
"In contrast, BenchIE assigns much lower score to this baseline: precision of 0.038 times less than CaRB's score.",
"Recall.",
"While CaRB somewhat overestimates fact-level precision of OIE systems, its overestimation of their recall is much more drastic: all tokens of its gold extractions that can be found in respective slots of a factually incorrect extraction of an OIE system contribute to the system's recall.",
"The overestimation of CaRB's recall scores is best illustrated by the fact that our naive baseline (Naive OIE) obtains a score of 0.7, better than any of the six OIE systems under evaluation.",
"In terms of recall, CaRB obviously rewards long extractions the longer the system extraction is, the more likely it is to cover more tokens from gold standard extractions.",
"Neural extractors OpenIE6, ROIE, and M 2 OIE on average produce much longer extractions than rule-based systems like MinIE or Stanford (e.g., on average, a ROIE extraction has 16 tokens, whereas Stanford extraction has 7.7 tokens): accordingly, CaRB rewards the neural systems with much higher recall scores.",
"BenchIE, on the other hand, credits only the OIE extractions that cover its fact synsets (and only once per fact synset).",
"Our Naive OIE is, intuitively, highly unlikely to match gold extractions from fact synsets and BenchIE reflects this with a fact-level recall of only 2%.",
"Similarly, BenchIE's recall scores reveal that the long extractions of neural OIE systems very rarely correspond to any acceptable variant of an expressed fact (e.g., ROIE's fact-level recall is only 9%).",
"Multilingual OIE.",
"We evaluated M 2 OIE (as the only multilingual model in our evaluation) on the Chinese and German versions of BenchIE.",
"Quite expectedly, the performance for Chinese and German in target languages is below the source English performance.",
"However, the drop due to the zero-shot language transfer is, at first glance surprisingly, much larger for German than for Chinese: this goes against findings from other tasks, where transfer performance correlates with linguistic proximity between the source and target language (Lauscher et al., 2020).",
"M 2 OIE's Chinese N a i v e OIEC l a u s IEM i n IES t a n f o r d ROIEO p e n IE 6 M 2 OIE ( EN ) 0.0 0.2 0.4 0.6 0.8 subject relation object Figure 1: Relative proportion of errors per slot for OIE systems.",
"performance is encouraging, as it surpasses the English performance of some of the other OIE models (e.g., its recall score is better than ROIE, and its precision score is better than Stanford's).",
"We believe this is because",
"(a) OIE is a highly syntactic task; and",
"(b) Chinese language is syntactically simple and has the same word order as English (SVO).",
"German language, on the other hand, despite overall linguistic proximity to English, has a different word order (SOV; from generative per-spective), with the main verb often appearing at the very end of the sentence this, we believe, is the main cause of poor OIE transfer between English and German.",
"We believe BenchIE is a good starting point for multilingual OIE evaluation: we subsequently created additional data for Arabic, Galician, and Japanese: see Kotnis et al. (2022) and Friedrich et al. (2022) for details and further analyses.",
"Token-based evaluation of existing OIE benchmarks (with real per-extraction scores in the range [0 , 1] ) makes pinpointing of extraction error source difficult.",
"This limits their usability in automatic error analysis and system profiling.",
"The fact that previous work performed OIE error analyses manually (Fader et al., 2011; Schneider et al., 2017) confirms this.",
"BenchIE, in contrast, lists all acceptable extractions and thus naturally lends itself to reliable automatic error analysis and profiling.",
"We carry out the analysis of errors per slots on the default BenchIE facet (2), because it is application-agnostic, unlike the additional facets from 3.",
"We observed that most of the errors in all OIE systems stem from extracting the objects (see Figure 1).",
"For an SVO language like English, 4478 ( 0 , 0 , 0 ) ( 0 , 0 , 1 ) ( 0 , 1 , 0 ) ( 0 , 1 , 1 ) ( 1 , 0 , 0 ) ( 1 , 0 , 1 ) ( 1 , 1 , 0 ) Naive OIE ClausIE MinIE Stanford ROIE OpenIE6 M2OIE (EN) 0.3 0.07 0.33 0.16 0.02 0 0.12 0.08 0.03 0.06 0.08 0.13 0.05 0.57 0.06 0.08 0.04 0.1 0.23 0.23 0.26 0.05 0.05 0.19 0.07 0.16 0.09 0.39 0.09 0.07 0.07 0.06 0.25 0.17 0.29 0.09 0.04 0.1 0.07 0.15 0.06 0.5 0.06 0.03 0.12 0.08 0.11 0.06 0.54 0.0 0.2 0.4 0.6 0.8 1.0 Figure 2: Distribution of incorrect extractions of OIE systems across different slot-error combinations.",
"correctly extracting subjects and predicates seems substantially easier than correctly extracting objects.",
"MinIE (rule-based) and ROIE (neural) have higher shares of predicate mis-extractions.",
"MinIE post-processes ClausIE's triples by moving words from objects to predicates.",
"Since ClausIE most frequently makes object errors, this effectively redistributes those errors between predicates and objects of MinIE's extractions.",
"Figure 1, however, does not tell the whole story, as many extractions are erroneous in multiple slots.",
"For more detailed insights, we assign each incorrect extraction to one of seven error buckets: each error bucket indicates one combination of extraction errors across the three slots.",
"For example, the bucket (1 , 1 , 0) contains extractions that match their closest gold triple in the subject and predicate, but not object.",
"The closest gold triple is the one that matches the extraction in most slots.",
"11 The error-bucket analysis, summarized in Figure 2, reveals that, across all systems, most extractions with object errors actually have correct subjects and predicates (bucket (1 , 1 , 0) ).",
"MinIE deviates from this pattern and produces also many extractions with both incorrect object and predicate (bucket (1 , 0 , 0) ) or only bad predicate (bucket (1 , 0 , 1) ).",
"Expectedly, most extractions of our naive baseline most often get only the predicate right (bucket (0 , 1 , 0) ) or all three slots wrong (bucket (0 , 0 , 0) ).",
"This further emphasizes how misleading current token-based benchmarks can be CaRB rewards this baseline with very high recall (see 4).",
"To understand where OIE systems fail systematically, we split the input sentences into buckets and measured the performance of OIE systems per",
"bucket.",
"Based on preliminary qualitative error analysis, we chose bucketization according to some linguistic properties of the sentences that produced erroneous triples.",
"In particular, we examine the performance of OIE systems for sentence length, presence of conjunctions and case markers, since these appeared to be the most common reasons for failure.",
"Note that BenchIE instances can be bucke-tized according to an arbitrary dimension interest, lending itself to diverse future fine-grained evaluations and analyses of OIE systems' behaviour.",
"In general, we found that OIE systems exhibit weakest performance on long sentences (with more than 30 tokens) as well as those that contain conjunctions or have more than two case markers (Figure 3).",
"For a more detailed discussion, see Appendix C. 5.3 Multi-Faceted Evaluation Finally, we profile the OIE systems on our three special benchmark facets (3): BenchIE-E, -C and -M.",
"Figure 4 summarizes the performance of OIE systems on these three facets.",
"BenchIE-C.",
"Ignoring slot boundaries, this facet is more lenient to OIE systems than the default facet BenchIE-C yields higher scores than the regular BenchIE facet for all systems.",
"The gap between the system's performance on BenchIE-C and BenchIE effectively quantifies how often the system misplaces tokens between adjacent slots.",
"This gap is very small for Stanford OIE and MinIE this means that, for extractions with correct overall token span, they also distribute the tokens between the slots correctly.",
"For downstream tasks like text summarization, BenchIE-C results point to ClausIE as the best choice.",
"Interestingly, we observed that CaRB's Precision for some systems (ClausIE and MinIE) effectively matches their Precision on BenchIE-C (see Figure 4), which is another indication that CaRB scores, in effect, neglect precise token distributions across slots.",
"BenchIE-E.",
"This facet is stricter than the default BenchIE facet it allows fewer token placement variants in subject and object.",
"For all OIE systems the F 1 BenchIE-E score is thus lower than the corresponding BenchIE score.",
"MinIE and Stanford OIE obtain very similar performance on BenchIE-C, BenchIE (default), and BenchIE-E: this means that their extraction (when correct in overall token span) most often have clean concepts in subject and object.",
"All neural systems and ClausIE exhibit huge performance drops on BenchIE-E this 4479 Figure 3: Bucketized experiments: F1 score according to different bucketizations of the input sentences: sentence length",
"means that their subject and object concept extractions are not clean, which makes these systems less suitable for tasks like KG population and entity linking.",
"Out of the systems we evaluate, MinIE is the best fit for such downstream tasks.",
"BenchIE-M.",
"This facet yields the lowest performance for all systems, as it punishes extractions with any unnecessary tokens.",
"Expectedly, MinIE a system tailored to produce minimal extractions yields the best performance on this facet.",
"But even MinIE loses half of its performance when minimality is enforced (BenchIE vs. BenchIE-M).",
"This calls for more work on minimizing OIE extractions.",
"Stanford OIE outperforms all systems except MinIE, which renders it a good pick when extraction minimality is beneficial for a downstream task.",
"Neural vs. Rule-Based Systems.",
"Neural systems underperform their rule-based counterparts on most facets.",
"This gap is most pronounced on BenchIE-E, whereas it is much smaller on BenchIE-C: these observations strongly indicate that neural systems struggle the most with correct distribution of tokens across the (adjacent) extraction slots.",
"They also do not attempt to remove the optional (i.e., unnecessary) tokens, as indicated by extremely low performance on BenchIE-M.",
"On CaRB, however, these same neural systems yield the best performance.",
"Being trained and validated on datasets with extractions similar to CaRB's, neural extractors seem to overfit to CaRB evaluation.",
"Our fact-based multi-faceted evaluation, however, reveals that their extractions are far less likely to be useful down the stream.",
"We introduced BenchIE: a benchmark for more reliable fact-level evaluation of OIE systems for English, Chinese and German.",
"Unlike existing benchmarks, BenchIE takes into account fact-level equivalence of extractions: it consists of fact synsets that contain all acceptable surface forms of the same fact.",
"Further, EN BenchIE is multi-faceted it allows to evaluate OIE extractions",
"w.r.t. several aspects relevant in common downstream tasks.",
"Our experiments show that current benchmarks, with incomplete gold standard and approximate token-level matching, drastically overestimate fact extraction abilities of OIE systems.",
"Currently, the limits of BenchIE are its relatively small size (300 sentences",
"v.s.",
"CaRB's 1,200) and its time-consuming annotation process.",
"A promising research direction is the investigation of trade-off between the manual effort and completeness of different OIE annotation strategies.",
"In this scenario, BenchIE is an ideal point of reference: it can precisely quantify the completeness of some larger (non-exhaustive) OIE dataset created with limited or no manual effort."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Throughout a conversation, participants make choices that can orient the flow of the interaction.",
"Such choices are particularly salient in the consequential domain of crisis counseling, where a difficulty for counselors is balancing between two key objectives: advancing the conversation towards a resolution, and empathetically addressing the crisis situation.",
"In this work, we develop an unsupervised methodology to quantify how counselors manage this balance.",
"Our main intuition is that if an utterance can only receive a narrow range of appropriate replies, then its likely aim is to advance the conversation forwards, towards a target within that range.",
"Likewise, an utterance that can only appropriately follow a narrow range of possible utterances is likely aimed backwards at addressing a specific situation within that range.",
"By applying this intuition, we can map each utterance to a continuous orientation axis that captures the degree to which it is intended to direct the flow of the conversation forwards or backwards.",
"This unsupervised method allows us to characterize counselor behaviors in a large dataset of crisis counseling conversations, where we show that known counseling strategies intuitively align with this axis.",
"We also illustrate how our measure can be indicative of a conversation's progress, as well as its effectiveness.",
"Participants in a conversation constantly shape the flow of the interaction through their choices.",
"In psychological crisis counseling conversations, where counselors support individuals in mental distress, these choices arise in uniquely complex and high-stakes circumstances, and are reflected in rich conversational dynamics (Sacks, 1992).",
"As such, counseling is a valuable context for computationally modeling conversational behavior (Atkins when i tell my mom about the bullies she just ignores me Have you confided to anyone else about this? yeah there's my sister she just tells me to get over it That sounds so frustrating, you deserve to be listened to. t 0 t 1 t 2 c 1 c 2 Figure 1: Two possible exchanges in a counseling conversation, illustrating key objectives that a counselor must balance: c 1 aims to advance the conversation towards a discussion of possible confidants; c 2 aims to address the emotion underlying the preceding utterance. et al., 2014; Althoff et al., 2016; Perez-Rosas et al., 2018; Zhang et al., 2019).",
"Modeling the conversational choices of counselors in this endeavor is an important step towards better supporting them.",
"Counselors are driven by several objectives that serve the broader goal of helping the individual in distress; two key objectives are exemplified in Figure",
"1. 1 The counselor must advance a conversation towards a calmer state where the individual is better equipped to cope with their situation (Mishara et al., 2007; Sandoval et al., 2009): in c 1 , the counselor prompts the individual to brainstorm options for social support.",
"The counselor must also empathetically address what was already said, coming to an empathic understanding of the individual (Rogers, 1957; Hill and Nakayama, 2000): in c 2 , the counselor validates feelings that the individual has just shared.",
"Balancing both objectives is often challenging, and overshooting in one direction can be detrimental to the conversation.",
"A counselor who leans too much on advancing forwards could rush the conversation at the expense of establishing an empathetic connection; a counselor who leans too much backwards , on addressing what was already said, may fail to make any progress.",
"1 These examples are derived from material used to train counselors in our particular setting, detailed in Section",
"2. 5277 In this work, we develop a method to examine counselor behaviors as they relate to this balancing challenge.",
"We quantify the relative extent to which an utterance is aimed at advancing the conversation, versus addressing existing content.",
"We thus map each utterance onto a continuous backwards-forwards axis which models the balance of these objectives, and refer to an utterance's position on this axis as its orientation .",
"At an intuitive level, our approach considers the range of content that is expected to follow or precede a particular utterance.",
"For an utterance like c 1 that aims to advance the conversation towards an intended target, we would expect a narrow range of appropriate replies, concentrated around that target (e.g., suggestions of possible confidants).",
"We would likewise expect an utterance like c 2 that aims to address a previously-discussed situation to only be an appropriate reply for a narrow range of possible utterances, concentrated around that specific type of situation (e.g., disclosures of negative feelings).",
"Starting from this intuition, we develop an unsupervised method to quantify and compare these expected forwards and backwards ranges for any utterance, yielding our orientation measure.",
"Using this measure, we characterize counselor behaviors in a large collection of text-message conversations from a crisis counseling service, which we accessed in collaboration with the service and with the participants' consent.",
"We show how orientation meaningfully distinguishes between key conversational strategies that counselors are taught during their training.",
"We also show that our measure tracks a conversation's progress and can signal its effectiveness, highlighting the importance of balancing the advancing and addressing objectives, and laying the basis for future inquiries in establishing potential causal effects.",
"In summary, we develop an unsupervised methodology that captures how counselors balance the conversational objectives of advancing and addressing (Section 4), apply and validate it in a large dataset of counseling conversations (Section 5), and use it to investigate the relation between a counselor's conversational behavior and their effectiveness (Section 5.4).",
"While our method is motivated by a salient challenge in counseling, we expect similar balancing problems to recur in other conversational settings where participants must carefully direct the flow of the interaction, such as court trials and debates (Section 6).",
"We develop our method in the context of Crisis Text Line, a crisis counseling platform which provides a free 24/7 service for anyone in mental crisishenceforth texters to have one-on-one conversations via text message with affiliated counselors.",
"We accessed a version of this collection, with over 1.5 million conversations, in collaboration with the platform and with the consent of the participants.",
"The data was scrubbed of personally identifiable information by the platform.",
"2 These conversations are quite substantive, averaging 25 messages with 29 and 24 words per counselor and texter message, respectively.",
"In each conversation, a crisis counselor's high-level goal is to guide the texter towards a calmer mental state.",
"In service of this goal, all counselors first complete 30 hours of training provided by the platform, which draws on past literature in counseling to recommend best practices and conversational strategies.",
"The first author also completed the training to gain familiarity with the domain.",
"While the platform offers guidance to counselors, their task is inevitably open-ended, given the emotional complexity of crisis situations.",
"As such, the counselors are motivated by an explicit goal that structures the interaction, but they face a challenging flexibility in choosing how to act.",
"We now describe the conversational challenge of balancing between advancing the conversation forwards or addressing what was previously said.",
"Our description of the challenge and our computational approach to studying it are informed by literature in counseling, on the platform's training material and on informal interviews with its staff.",
"A conversational balance.",
"A crisis counselor must fulfill multiple objectives in their broader goal of helping a texter.",
"One objective is guiding the texter through their initial distress to a calmer mental state (Mishara et al., 2007; Sandoval et al., 2009), as in Figure 1, c 1 .",
"Various strategies that aim to facilitate this advancing process are taught to counselors during training: for instance, a counselor may prompt a texter to identify a goal or cop-2 The data can be accessed by applying at https:// www.crisistextline.org/data-philosophy/data-fellows/ .",
"The extensive ethical and privacy considerations, and policies accordingly implemented by the platform, are detailed in Pisani et al. (2019).",
"ing mechanism (Rollnick and Miller, 1995).",
"As such, they attempt to move the conversation forwards , towards its eventual resolution.",
"The counselor must also engage with the texter's concerns (Rogers, 1957; Hill and Nakayama, 2000), as in c 2 , via strategies that empathetically address what the texter has already shared (Roll-nick and Miller, 1995; Weger et al., 2010; Bodie et al., 2015).",
"For instance, counselors are taught to reflect , i.e., reframe a texter's previous message to convey understanding, or draw on what was said to affirm the texter's positive qualities.",
"In doing so, the counselor looks backwards in the conversation.",
"Past work has posited the benefits of mixing between strategies that aim at either objective (Mishara et al., 2007).",
"However, as the training acknowledges, striking this balance is challenging.",
"Overzealously seeking to advance could cut short the process of establishing an empathetic connection.",
"Conversely, focusing on the conversation's past may not help with eventual problem solving (Bodie et al., 2015), and risks stalling it.",
"A texter may start to counterproductively rehash or ruminate on their concerns (Nolen-Hoeksema et al., 2008; Jones et al., 2009); indeed, prior psychological work has highlighted the thin line between productive reflection and rumination (Rose et al., 2007; Landphair and Preddy, 2012).",
"Orientation.",
"To examine this balancing dynamic, we model the choices that counselors make at each turn in a conversation.",
"Our approach is to derive a continuous axis spanned by advancing and addressing.",
"We refer to an utterance's position on this axis, representing the relative extent to which it aims at either objective, as its orientation .",
"We interpret a forwards-oriented utterance with positive as aiming to advance the conversation, and a backwards-oriented utterance with negative as aiming to address what was previously brought up.",
"In the middle, the axis reflects the graded way in which a counselor can balance between aimsfor instance, using something the texter has previously said to help motivate a problem-solving strategy.",
"Related characterizations.",
"While we develop orientation to model a dynamic in counseling, we view it as a complement to other characterizations of conversational behaviors in varied settings.",
"Prior work has similarly considered how utterances relate to the preceding and subsequent discourse (Webber, 2001).",
"Frameworks like centering theory (Grosz et al., 1995) aim at identifying referenced entities, while we aim to more abstractly model interlocutor choices.",
"Past work has also examined how interlocutors mediate a conversation's trajectory through taking or ceding control (Walker and Whittaker, 1990) or shifting topic (Nguyen et al., 2014); Althoff et al. (2016) considers the rate at which counselors in our setting advance across stages of a conversation.",
"While these actions can be construed as forwards-oriented, we focus more on the interplay between forwardsand backwards-oriented actions.",
"A counselor's objectives may also cut across these concepts: for instance, the training stresses the need for empathetic reflecting across all stages and topics.",
"Orientation also complements prior work on dialogue acts, which consider various roles that utterances play in discourse (Mann and Thompson, 1988; Core and Allen, 1997; Ritter et al., 2010; Bracewell et al., 2012; Rosenthal and McKeown, 2015; Prabhakaran et al., 2018; Wang et al., 2019).",
"In counseling settings, such approaches have highlighted strategies like reflection and question-asking (Houck, 2008; Gaume et al., 2010; Atkins et al., 2014; Can et al., 2015; Tanana et al., 2016; Perez-Rosas et al., 2017, 2018; Park et al., 2019; Lee et al., 2019; Cao et al., 2019).",
"Instead of modeling a particular taxonomy of actions, we model how counselors balance among the underlying objectives; we later relate orientation to these strategies (Section 5).",
"Most of these approaches use annotations or predefined labeling schemes, while our method is unsupervised.",
"We now describe our method to measure orientation, discussing our approach at a high level before elaborating on the particular operationalization.",
"The code implementing our approach is distributed as part of the ConvoKit library (Chang et al., 2020), at http://convokit.cornell.edu .",
"Orientation compares the extent to which an utterance aims to advance the conversation forwards with the extent to which it looks backwards.",
"Thus, we must somehow quantify how the utterance relates to the subsequent and preceding interaction.",
"Naive attempt: direct comparison.",
"As a natural starting point, we may opt for a similarity-based approach: an utterance that aims to address its preceding utterance, or predecessor , should be similar 5279 sounds frustrating confided to anyone ignoresjudges laughs doesn't because just problem ignore nothing sister friend counselor expected predecessors: expected replies: Figure 2: Words representative of replies and predecessors for utterances with two example phrasings, as observed in training data.",
"to it; an utterance that aims to advance the conversation should be similar to the reply that it prompts.",
"In practice, having to make these direct comparisons is limiting: an automated system could not characterize an utterance in an ongoing conversation by comparing it to a reply it has yet to receive.",
"This approach also has important conceptual faults.",
"First, addressing preceding content in a conversation is different from recapitulating it.",
"For instance, counselors are instructed to reframe rather than outright restate a texter's message, as in Figure 1, c 2 .",
"Likewise, counselors need not advance the conversation by declaring something for the texter to simply repeat; rather than giving specific recommendations, counselors are instructed to prompt the texters to come up with coping strategies on their own, as in c 1 .",
"Further, texters are not bound to the relatively formal linguistic style counselors must maintain, resulting in clear lexical differences.",
"Measuring orientation is hence a distinct task from measuring similarity.",
"Second, an utterance's intent to advance need not actually be realized.",
"A counselor's cues may be rebuffed or misunderstood (Schegloff, 1987; Thomas, 1983): a texter could respond to c 1 by continuing to articulate their problem with t 2 .",
"Likewise, a counselor may intend to address a texter's concerns but misinterpret them.",
"To model the balance in objectives that a counselor is aiming for, our characterization of an utterance cannot be contingent on its actual reply and predecessor.",
"Our approach: characterizing expectations.",
"We instead consider the range of replies we might expect an utterance to receive, or the range of predecessors that it might follow.",
"Intuitively, an utterance with a narrow range of appropriate replies aims to direct the conversation towards a particular target, moreso than an utterance whose appropriate replies span a broader range.",
"3 Likewise, an utterance that is an appropriate reply to only a narrow range of possible predecessors aims to address a particular situation.",
"We draw on unlabeled data of past conversations to form our expectations of these ranges, and build up our characterizations of utterances from their constituent phrasings , e.g., words or dependency-parse arcs.",
"The intuition for our approach is sketched in Figure",
"2. From our data, we observe that utterances containing confided to anyone generally elicited replies about potential confidants (e.g., sister , friend ), while the replies that followed utterances with sounds frustrating span a broader, less well-defined range.",
"As such, we have a stronger expectation of what a reply prompted by a new utterance with confided to anyone might contain than a reply to a new utterance with sounds frustrating .",
"More generally, for each phrasing w , we quantify the strength of our expectations of its potential replies by measuring the range spanned by the replies it has already received in the data, which we refer to as its forwards-range (cid:0)!",
"(cid:27) w .",
"We would say that confided to anyone has a smaller (cid:0)!",
"(cid:27) w than sounds frustrating , meaning that its observed replies were more narrowly concentrated; this is represented as the relative size of the red regions on the right side of Figure",
"2. In the other direction, we observe in our data that sounds frustrating generally followed descriptions of frustrating situations (e.g., ignores , judges ), while the range of predecessors to confided to anyone is broader.",
"We thus have a stronger expectation of the types of situations that new utterances with sounds frustrating would respond to, compared to new utterances with confided to anyone .",
"For a phrasing w , we quantify the strength of our expectations of its potential predecessors by measuring its backwards-range (cid:0) (cid:27) w , spanned by the predecessors we've observed.",
"As such, sounds frustrating has a smaller (cid:0) (cid:27) w than confided to anyone , corresponding to the relative size of the blue regions on the left side of Figure",
"2. 3 Consider leading versus open-ended questions.",
"When people ask leading questions, they intend to direct the interaction towards specific answers they have in mind; when people ask open-ended questions, they are more open to what answers they receive and where the interaction is headed.",
"The relative strengths of our expectations in either direction then indicate the balance of objectives.",
"If we have a stronger expectation of w 's replies than of its predecessorsi.e., smaller (cid:0)!",
"(cid:27) w than (cid:0) (cid:27) w we would infer that utterances with w aim to advance the conversation towards a targeted reply more than they aim to address a particular situation.",
"Conversely, if we have stronger expectations of w 's predecessorsi.e., smaller (cid:0) (cid:27) w we would infer that utterances with w aim to address the preceding interaction, rather than trying to drive the conversation towards some target.",
"We thus measure orientation by comparing a phrasing's forwardsand backwards-range.",
"The expectation-based approach allows us to circumvent the shortcomings of a direct comparison; we may interpret it as modeling a counselor's intent in advancing and addressing at each utterance (Moore and Paris, 1993; Zhang et al., 2017).",
"We now detail the steps of our method, which are outlined in Figure",
"3. Formally, our input consists of a set of utterances from counselors f c i g , and a set of utterances from texters f t i g , which we've observed in a dataset of conversations (Figure 3A).",
"We note that each texter utterance can be a reply to, or a predecessor of, a counselor utterance (or both).",
"We use this unlabeled training data to measure the forwards-range (cid:0)!",
"(cid:27) w , the backwards-range (cid:0) (cid:27) w (Figures 3B-D), and hence the orientation w of each phrasing w used by counselors (Figure 3E).",
"We then aggregate to an utterance-level measure.",
"For each counselor phrasing w , let (cid:0)!",
"T w denote the subset of texter utterances which are replies to counselor utterances containing w (Figure 3A).",
"As described above, the forwards-range (cid:0)!",
"(cid:27) w quantifies the spread among elements of (cid:0)!",
"T w ; we measure this by deriving vector representations of these utterances (cid:0)!",
"U w (Figure 3B, detailed below), and then comparing each vector in (cid:0)!",
"U w to a central reference point (cid:0)! u w (Figures 3C and 3D).",
"4 Likewise, (cid:0) (cid:27) w quantifies the similarity among elements of (cid:0) T w , the set of predecessors to counselor utterances with w ; we compute (cid:0) (cid:27) w by comparing each corresponding vector in (cid:0) U w to a central point (cid:0) u w .",
"4 Using a central reference point to calculate the forwards-range, as opposed to directly computing pairwise similarities among replies in (cid:0)!",
"U w , allows us to account for the context of w in the utterances that prompted these replies (via tf-idf weighting, as subsequently discussed).",
"To obtain vectors for each texter utterance, we construct X , a tf-idf reweighted term-document matrix where rows represent texter utterances and columns represent phrasings used by texters.",
"To ensure that we go beyond lexical matches and capture conceptual classes (e.g., possible confidants, frustrating situations), we use singular value decomposition to get X (cid:25) USVT .",
"Each row of U is a vector representation u i of utterance t i in the induced low-dimensional space T .",
"(cid:0)!",
"U w then consists of the corresponding subset of rows of U (high-lighted in Figure 3B).",
"Deriving central points (Figure 3C).",
"For each w , we take the central point (cid:0)! u w to be a weighted average of vectors in (cid:0)!",
"U w .",
"Intuitively, a texter utterance t i with vector u i should have a larger contribution to (cid:0)! u w if w is more prominent in the counselor utterance c i that preceded it.",
"We let w iw denote the normalized tf-idf weight of w in c i , and use w iw as the weight of the corresponding vector u i .",
"To properly map the resultant weighted sum w iw u i into T , we divide each dimension by the corresponding singular value in S .",
"As such, if w w is a vector of weights w iw , we can calculate the central point (cid:0)! u w 5281 of (cid:0)!",
"we likewise compute u w = w Tw U w S (cid:0) 1 .",
"Forwardsand backwards-ranges (Figure 3D).",
"We take the forwards-range (cid:0)!",
"(cid:27) w of w to be the average cosine distance from each vector in (cid:0)!",
"U w to the center point (cid:0)! u w .",
"Likewise, we take (cid:0) (cid:27) w as the average distance from each vector in (cid:0) U w to (cid:0) u w .",
"Phrasing-level orientation (Figure 3E).",
"Importantly, since we've computed the forwardsand backwards-ranges (cid:0)!",
"(cid:27) w and (cid:0) (cid:27) w using distances in the same space T , their values are comparable.",
"We then compute the orientation of w as their difference: w = (cid:0) (cid:27) w (cid:0) (cid:0)!",
"(cid:27) w .",
"Utterance-level orientation.",
"To compute the orientation of an utterance c i , we first compute the orientation of each sentence in c i as the tf-idf weighted average w of its constitutent phrasings.",
"Note that a multi-sentence utterance can orient in both directionse.g., a counselor could concatenate c 2 and c 1 from Figure 1 in a single utterance, addressing the texter's previous utterance before moving ahead.",
"To model this heterogeneity, we consider both the minimum and maximum sentence-orientations in an utterance: min captures the extent to which the utterance looks backwards, while max captures the extent to which it aims to advance forwards.",
"We apply our method to characterize messages from crisis counselors on the platform.",
"We compute the orientations of the phrasings they use, represented as dependency-parse arcs.",
"We use a training set of 351,935 texter and counselor messages each, from a random sample of conversations omitted in subsequent analyses.",
"5 Table 1 shows representative phrasings and sentences of different orientations.",
"6 Around two-thirds of phrasings and sentences have < 0 , echoing the importance of addressing the texter's previous remarks.",
"In what follows, we analyze counselor behaviors in terms of orientation, and illustrate how the measure can be useful for examining conversations.",
"We start by validating our method via two complementary approaches.",
"In a subset of sentences manually annotated with the counseling 5 Further implementation details are listed in the appendix.",
"6 Example sentences are derived from real sentences in the data, and modified for readability.",
"The examples were chosen to reflect common situations in the data, and were vetted by the platform to ensure the privacy of counselors and texters.",
"strategies they exhibit, we show that orientation meaningfully reflects these strategies (Section 5.1).",
"At a larger scale, we show that the orientation of utterances over the course of a conversation aligns with domain knowledge about counseling conversation structure (Section 5.2).",
"We also find that other measures for characterizing utterances are not as rich as orientation in capturing counseling strategies and conversation structure (Section 5.3).",
"Finally, we show that a counselor's orientation in a conversation is tied to indicators of their effectiveness in helping the texter (Section 5.4).",
"Even though it is computed without the guidance of any annotations, we expect orientation to meaningfully reflect strategies for advancing or addressing that crisis counselors are taught.",
"The first author hand-labeled 400 randomly-selected sentences with a set of pre-defined strategies derived from techniques highlighted in the training material.",
"We note example sentences in Table 1 which exemplify each strategy, and provide more extensive descriptions in the appendix.",
"Figure 4A depicts the distributions of orientations across each label, sorted from most backwardsto most forwards-oriented.",
"We find that the relative orientation of different strategies corroborates their intent as described in the literature.",
"Statements reflecting or affirming what the texter has said to check understanding or convey empathy (characterized by phrasings like totally normal ) tend to be backwards-oriented; statements prompting the texter to advance towards problem-solving (e.g., [what] has helped ) are more forwards-oriented.",
"Exploratory queries for more information on what the texter has already said (e.g., happened to make ) tend to have middling orientation (around 0).",
"The standard deviation of orientations over messages within most of the labels is significantly lower than across labels (bootstrapped p < : 05 , solid circles), showing that orientation yields interpretable groupings of messages in terms of important counseling strategies.",
"The measure also offers complementary information.",
"For instance, we find sentences that aren't accounted for by pre-defined labels, but still map to interpretable orientations, such as backwards-oriented examples assuaging texter concerns about the platform being a safe space to self-disclose.",
"We also show that orientation tracks with the structure of crisis counseling conversations as described in the training material.",
"Following Althoff et al. (2016), we divide each conversation with at least ten counselor messages into five equal-sized segments and average max and min over messages in each segment.",
"Figure 4B (black lines) shows that over the course of a conversation, messages tend to get more forwards-oriented (higher max and min ).",
"This matches a standard conversation structure taught in the training: addressing the texter's existing problems before advancing towards problem-solving.",
"While this correspondence holds in aggregate, orientation also captures complementary information to advancement through stagese.g., while problem-solving, counselors may still address and affirm a texter's ideas (Table 1, row 3).",
"We also consider a subset of conversations where we expect a different trajectory: for potentially suicidal texters, the training directs counselors to immediately start a process of risk assessment in which actively prompting the texter to disclose their level of suicidal ideation takes precedence over other objectives.",
"As such, we expect more forwards-oriented messages at the starts of conversations involving such texters.",
"Indeed, in the 30% of conversations which are risk-assessed, we find significantly larger max in the first segment (Figure 4B, orange line; Wilcoxon p < 0 : 01 in the first stage, comparing within-counselor).",
"min is smaller at each stage, suggesting that counselors balance actively prompting these critical disclosures with addressing them.",
"capturing a counselor's balancing decisions: Naive distance.",
"We conside the naive approach in Section 4, taking a difference in cosine distances between tf-idf representations of a message and its reply, and a message and its predecessor.",
"Backwards-range.",
"We consider just the mes-sage's backwards-range.",
"For each sentence we take tf-idf weighted averages of component (cid:0) (cid:27) w and take minimum (cid:0) (cid:27) for each message.",
"7 7 We get qualitatively similar results with maximum (cid:0)!",
"(cid:27) .",
"Question-asking.",
"We consider whether the message has a question.",
"This was used in Walker and Whittaker (1990) as a signal of taking control, which could be construed as forwards-oriented.",
"Within-label standard deviations of each alternative measure are generally not significantly smaller than across-label (Figure 4A), indicating that these measures are poorer reflections of the counseling strategies.",
"Label rankings under the measures are also arguably less intuitive.",
"For instance, reflection statements have relatively large (naive) cosine distance from their predecessors.",
"Indeed, the training encourages counselors to process rather than simply restate the texter's words.",
"These measures also track with the conversation's progress differentlynotably, none of them distinguish the initial dynamics of risk-assessed conversations as reflected in max (see appendix).",
"Past work on counseling has extensively discussed the virtues of addressing a client's situation (Rogers, 1957; Hill and Nakayama, 2000).",
"Some studies also suggest that accounting for both addressing and advancing is important, such that effective counselors manage to mix backwardsand forwards-oriented actions (Mishara et al., 2007).",
"We use orientation to examine how these strategies are tied to conversational effectiveness in crisis counseling at a larger scale, using our measures to provide a unified view of advancing and addressing.",
"To derive simple conversation-level measures, we average max and min over each counselor message in a conversation.",
"Adjudicating counseling conversation quality is known to be difficult (Tracey et al., 2014).",
"As a starting point, we relate our conversation-level measures to two complementary indicators of a conversation's effectiveness: 8 Perceived helpfulness.",
"We consider responses from a post-conversation survey asking the texter whether the conversation was helpful, following Althoff et al. (2016).",
"Out of the 26% of conversations with a response, 89% were rated as helpful.",
"9 Conversation length.",
"We consider a conversation's length as a simple indicator of the pace of its progress: short conversations may rush the texter, while prolonged conversations could suggest 8 We perform all subsequent analyses on a subset of 234,433 conversations, detailed in the appendix.",
"Figure 5A compares min and max in conversations rated as helpful and unhelpful by texters.",
"Both measures are significantly smaller in conversations perceived as helpful, suggesting that texters have a better impression of relatively backwards-oriented interactions where the counselor is inclined towards addressing their situation.",
"As such, this result echoes past findings relating addressing to effectiveness.",
"Figure 5B compares min in conversations of varying lengths, showing that min increases with length, such that counselors exhibit less propensity for addressing in longer conversations.",
"Anecdotal observations cited in interviews with the platform's staff suggest one interpretation: conversations in which a texter feels their concerns were not satisfactorily addressed may be prolonged when they circle back to revisit these concerns.",
"Figure 5C relates max to conversation length.",
"We find that max is smaller in the lengthiest conversations, suggesting that such prolonged in-10 As the training material notes, conversation length and texter perception may signal complementary or even conflicting information about a texter's experience of a conversation and its effectiveness: Some texters resist the end of the conversation. They ruminate [...] causing the conversation to drag on without any progress. 5284 teractions may be stalled by a weaker impulse to advance forwards.",
"Extremely short conversations have smaller max as well, such that premature endings may also reflect issues in advancing.",
"As such, we add credence to the previously-posited benefits of mixing addressing and advancing: forwards-oriented actions may be tied to making progress, while a weaker propensity to advance may signal a suboptimal pace.",
"Counselor-level analysis.",
"These findings could reflect various confoundsfor instance, a counselor's choice of orientation may have no bearing on the rating they receive from a particularly difficult texter.",
"To address this, we compute similar correspondences between orientation and our effectiveness indicators at the level of counselors rather than conversations; this analysis is detailed in the appendix.",
"Our conversation-level results are replicated under these controls.",
"In this work, we sought to examine a key balance in crisis counseling conversations between advancing forwards and addressing what has already been said.",
"Realizing this balance is one of the many challenges that crisis counselors must manage, and modeling the actions they take in light of such challenges could point to policies to better support them.",
"For instance, our method could assist human supervisors in monitoring the progress of ongoing conversations to detect instances of rushing or stalling, or enable larger-scale analyses of conversational behaviors to inform how counselors are trained.",
"The unsupervised approach we propose could circumvent dif-ficulties in getting large-scale annotations of such sensitive content.",
"Future work could bolster the measure's usefulness in several ways.",
"Technical improvements like richer utterance representations could improve the measure's fidelity; more sophisticated analyses could better capture the dynamic ways in which the balance of objectives is negotiated across many turns.",
"The preliminary explorations in Section 5.4 could also be extended to gauge the causal effects of counselors' behaviors (Kazdin, 2007).",
"We expect balancing problems to recur in conversational settings beyond crisis counseling, such as court proceedings, interviews, debates and other mental health contexts like long-term therapy.",
"In these settings, individuals also make potentially consequential choices that span the backwards-forwards orientation axis, such as addressing previous arguments (Tan et al., 2016; Zhang et al., 2016) or asking leading questions (Leech, 2002).",
"Our measure is designed to be broadly applicable, requiring no domain-specific annotations; we provide exploratory output on justice utterances from the Supreme Court's oral arguments in the appendix and release code implementing our approach at http://convokit.cornell.edu to encourage experiments in other domains.",
"However, the method's efficacy in the present setting is likely boosted by the relative uniformity of crisis counseling conversations; and future work could aim to better accomodate settings with less structure and more linguistic variability.",
"With such improvements, it would be interesting to study other domains where interlocutors are faced with conversational challenges.",
"We thank Jonathan P. Chang, Caleb Chiam, Liye Fu, Dan Jurafsky, Jack Hessel, and Lillian Lee for helpful conversations, and the anonymous reviewers for their thoughtful comments.",
"We also thank Ana Smith for collecting and processing the Supreme Court oral argument transcripts we used in the supplementary material.",
"This research, and the counseling service examined herein, would not have been possible without Crisis Text Line.",
"We are particularly grateful to Robert Filbin, Christine Morrison, and Jaclyn Weiser for their valuable insights into the experiences of counselors and for their help with using the data.",
"The research has been supported in part by NSF CAREER Award IIS1750615 and a Microsoft Research PhD Fellowship.",
"The collaboration with Crisis Text Line was supported by the Robert Wood Johnson Foundation; the views expressed here do not necessarily reflect the views of the foundation."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"objective",
"other",
"abstain",
"method",
"method",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another.",
"However, the ability of NLI models to make pragmatic inferences remains understudied.",
"We create an IMP licature and PRES upposition diagnostic dataset (IMPPRES ), consisting of > 25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types.",
"We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences.",
"Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences.",
"It reliably treats scalar implicatures triggered by some as entailments.",
"For some presupposition triggers like only , BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation.",
"BOW and InferSent show weaker evidence of pragmatic reasoning.",
"We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.",
"One of the most foundational semantic discoveries is that systematic rules govern the inferential relationships between pairs of natural language sentences (Aristotle, De Interpretatione , Ch. 6).",
"In natural language processing, Natural Language Inference (NLI)a task whereby a system determines whether a pair of sentences instantiates in an entailment, a contradiction, or a neutral relationhas been useful for training and evaluating models on sentential reasoning.",
"However, linguists and philosophers now recognize that there Equal Contribution Figure 1: Illustration of key properties of classical entailments, implicatures, and presuppositions.",
"are separate semantic and pragmatic modes of reasoning (Grice, 1975; Clark, 1996; Beaver, 1997; Horn and Ward, 2004; Potts, 2015), and it is not clear which of these modes, if either, NLI models learn.",
"We investigate two pragmatic inference types that are known to differ from classical entailment: scalar implicatures and presuppositions.",
"As shown in Figure 1, implicatures differ from entailments in that they can be denied, and presuppositions differ from entailments in that they are not canceled when placed in entailment-cancelling environments (e.g., negation, questions).",
"To enable research into the relationship between NLI and pragmatic reasoning, we introduce IMPPRES , a fine-grained NLI-style diagnostic test dataset for probing how well NLI models perform implicature and presupposition.",
"Containing 25.5K sentence pairs illustrating key properties of these pragmatic inference types, IMPPRES is automatically generated according to linguist-crafted templates, allowing us to create a large, lexically varied, and well controlled dataset targeting specific instances of both types.",
"We first investigate whether presuppositions and implicatures are present in NLI models' training data.",
"We take MultiNLI (Williams et al., 2018) as a case study, and find it has few instances of pragmatic inference, and almost none that arise from specific lexical triggers (see 4).",
"Given this, we ask whether training on MultiNLI is sufficient for models to generalize about these largely absent commonsense reasoning types.",
"We find that generalization is possible: the BERT NLI model shows evidence of pragmatic reasoning when tested on the implicature from some to not all , and the presuppositions of certain triggers ( only , cleft existence, possessive existence, ques-tions).",
"We obtain some negative results, that suggest that models like BERT still lack a sophisticated enough understanding of the meanings of the lexical triggers for implicature and presupposition (e.g., BERT treats several word pairs as synonyms, e.g., most notably, or and and ).",
"Our contributions are:",
"(i) we provide a new diagnostic test set to probe for pragmatic inferences, complete with linguistic controls,",
"(ii) to our knowledge, we present the first work evaluating deep NLI models on specific pragmatic inferences, and",
"(iii) we show that BERT models can perform some types of pragmatic reasoning very well, even when trained on NLI data containing very few explicit examples of pragmatic reasoning.",
"We publicly release all IMPPRES data, models evaluated, annotations of MultiNLI, and the scripts used to process data.",
"1 2 Background: Pragmatic Inference We take pragmatic inference to be a relation between two sentences relying on the utterance context and the conversational goals of interlocutors.",
"Pragmatic inference contrasts with semantic entailment, which instead captures the logical relationship between isolated sentence meanings (Grice, 1975; Stalnaker, 1974).",
"We present implicature and presupposition inferences below.",
"Broadly speaking, implicatures contrast with entailments in that they are inferences suggested by the speaker's utterance, but not included in its literal (Grice, 1975).",
"Although there are many types 1 github.com/facebookresearch/ImpPres Type Example Trigger Jo's cat yawned.",
"of implicatures we focus here on scalar implicatures .",
"Scalar implicatures are inferences, often optional, 2 which can be drawn when one mem-ber of a memorized lexical scale (e.g., (cid:104) some, all (cid:105) ) is uttered (see 6.1).",
"For example, when someone utters Jo ate some of the cake , they suggest that Jo didn't eat all of the cake , (see Figure 1 for more examples).",
"According to Neo-Gricean pragmatic theory (Horn, 1989; Levinson, 2000), the inference Jo didn't eat all of the cake arises because some has a more informative lexical alternative all that could have been uttered instead.",
"We expect the speaker to make the most informative true statement: 3 as a result, the listener should infer that a stronger statement, where some is replaced by all , is false.",
"Implicatures differ from entailments (and, as we will see, presuppositions; see Figure 1) in that they are deniable , i.e., they can be explicitly negated without resulting in a contradiction.",
"For example, someone can utter Jo ate some of the cake , followed by In fact, Jo ate all of it .",
"In this case, the implicature (i.e., Jo didn't eat all the cake from above) has been denied.",
"We thus distinguish implicated meaning from literal, or logical, meaning.",
"Presuppositions of a sentence are facts that the speaker takes for granted when uttering a sentence (Stalnaker, 1974; Beaver, 1997).",
"Presuppositions are generally associated with the presence of certain expressions, known as presupposition triggers .",
"For example, in Figure 1, the definite de-2 Implicature computation can depend on the cooperativity of the speakers, or on any aspect of the context of utterance (lexical, syntactic, semantic/pragmatic, discourse).",
"See Degen (2015) for a study of the high variability of implicature computation, and the factors responsible for it.",
"3 This follows if we assume that speakers are cooperative (Grice, 1975) and knowledgeable (Gazdar, 1979).",
"scription the cake triggers the presupposition that there is a cake (Russell, 1905).",
"Other examples of presupposition triggers are shown in Table 1. Presuppositions differ from other inference types in that they generally project out of operators like questions and negation, meaning that they remain valid inferences even when embedded under these operators (Karttunen, 1973).",
"The inference that there is a cake survives even when the presupposition trigger is in a question ( Did Jordan eat some of the cake? ), as shown in Figure 1. However, in questions, classical entailments and implicatures disappear.",
"Table 1 provides examples of triggers projecting out of several entailment canceling operators : negation, modals, interrogatives, and conditionals.",
"It is necessary to clarify in what sense presupposition is a pragmatic inference.",
"There is no consensus on whether presuppositions should be considered part of the semantic content of expressions (see Stalnaker, 1974; Heim, 1983, for opposing views).",
"However, presuppositions may come to be inferred via accommodation , a pragmatic process by which a listener infers the truth of some new fact based on its being presupposed by the speaker (Lewis, 1979).",
"For instance, if Jordan tells Harper that the King of Sweden wears glasses , and Harper did not previously know that Sweden has a king, they would learn this fact by accommodation.",
"With respect to NLI, any presupposition in the premise (short of world knowledge) will be new information, and therefore accommodation is necessary to recognize it as entailed.",
"NLI has been framed as a commonsense reasoning task (Dagan et al., 2006; Manning, 2006).",
"One early formulation of NLI defines entailment as holding for sentences p and h whenever, typi-cally, a human reading p would infer that h is most likely true. . . [given] common human understanding of language [and] common background knowledge (Dagan et al., 2006).",
"Although this sparked debate regarding the terms inference and entailment and whether an adequate notion of inference could be defined (Zaenen et al., 2005; Manning, 2006; Crouch et al., 2006)in recent work, a commonsense formulation of inference is widely adopted (Bowman et al., 2015; Williams et al., 2018) largely because it facilitates untrained annotators' participation in dataset creation.",
"NLI itself has been steadily gaining in popularity; many datasets for training and/or testing systems are now available including: FraCaS (Cooper et al., 1994), RTE (Dagan et al., 2006; Mirkin et al., 2009; Dagan et al., 2013), Sentences Involving Compositional Knowledge (Marelli et al., 2014, SICK), large scale imaging captioning as NLI (Bowman et al., 2015, SNLI), recasting other datasets into NLI (Glickman, 2006; White et al., 2017; Poliak et al., 2018), ordinal commonsense inference (Zhang et al., 2017, JOCI), Multi-Premise Entailment (Lai et al., 2017, MPE), NLI over multiple genres of written and spoken English (Williams et al., 2018, MultiNLI), adversar-ially filtered common sense reasoning sentences (Zellers et al., 2018, 2019, (Hella)SWAG), explainable annotations for SNLI (Camburu et al., 2018, e-SNLI), cross-lingual NLI (Conneau et al., 2018, XNLI), scientific questioning answering as NLI (Khot et al., 2018, SciTail), NLI recast-question answering (part of Wang et al. 2019, GLUE), NLI for dialog (Welleck et al., 2019), and NLI over narratives that require drawing inferences to the most plausible explanation from text (Bhagavatula et al., 2020, NLI).",
"Other NLI datasets are created to identify where models fail (Glockner et al., 2018; Naik et al., 2018; McCoy et al., 2019; Schmitt and Schutze, 2019), many of which are also automatically generated (Geiger et al., 2018; Yanaka et al., 2019a,b; Kim et al., 2019; Nie et al., 2019; Richardson et al., 2020).",
"As datasets for NLI become increasingly numerous, one might wonder, do we need yet another NLI dataset?",
"In this case, the answer is clearly yes: despite NLI's formulation as a commonsense reasoning task, it is still unknown whether this framing has resulted in models that learn specific modes of pragmatic reasoning.",
"IMPPRES is the first NLI dataset to explicitly probe whether models trained on commonsense reasoning actually do treat pragmatic inferences like implicatures and presuppositions as entailments without additional training on these specific inference types.",
"Beyond NLI, several recent works introduce resources for evaluating sentence understanding models for knowledge of pragmatic inferences.",
"On the presupposition side, datasets such as MegaVeridicality (White and Rawlins, 2018) and CommitmentBank (de Marneffe et al., 2019) compile gradient crowdsourced judgments regarding how likely a clause embedding predicate is to trigger a presupposition that its complement clause is true.",
"White et al. (2018) and Jiang and de Marneffe (2019) find that LSTMs trained on a gradient event factuality prediction task on these respective datasets make systematic errors.",
"Turning to implicatures, Degen (2015) introduces a dataset measuring the strength of the implicature from some to not all with crowd-sourced judgments.",
"Schuster et al. (2020) find that an LSTM with supervision on this dataset can predict human judgments well.",
"These resources all differ from IMPPRES in two respects: First, their empirical scopes are all somewhat narrower, as all these datasets focus on only a single class of presupposition or implicature triggers.",
"Second, the use of gradient judgments makes it non-trivial to use these datasets to evaluate NLI models, which are trained to make categorical predictions about entailment.",
"Both approaches have advantages, and we leave a direct comparison for future work.",
"Outside the topic of sentential inference, Rashkin et al. (2018) propose a new task where a model must label actor intents and reactions for particular actions described using text.",
"Cianflone et al. (2018) create sentence-level adverbial presupposition datasets and train a binary classifier to detect contexts in which presupposition triggers (e.g., too , again ) can be used.",
"In this section, we present results of an annotation effort that show that MultiNLI contains very lit-tle explicit evidence of pragmatic inferences of the type tested by IMPPRES .",
"Although Williams et al. (2018) report that 22% of the MultiNLI development set sentence pairs contain lexical triggers (such as regret or stopped ) in the premise and/or hypothesis, the mere presence of presupposition-triggering lexical items in the data does not show that MultiNLI contains evidence that presuppositions are entailments, since the sentential inference may focus on other types of information.",
"To address this, we randomly selected 200 sentence pairs from the MultiNLI matched development set and presented them to three expert annotators with a combined total of 17 years of training in formal semantics and pragmatics.",
"4 Annotators answered the following questions for each pair: (1) are the sentences P and H related by a presupposition/implicature relation (entails/is en-4 The full annotations are on the IMPPRES repository. tailed by, negated or not); (2) what subtype of inference (e.g., existence presupposition, (cid:104) some, all (cid:105) implicature); (3) is the presupposition trigger embedded under an entailment-cancelling operator?",
"Agreement among annotators was low, suggesting that few MultiNLI pairs are paradigmatic cases of implicatures or presuppositions.",
"We found only 8 presupposition pairs and 3 implicature pairs on which two or more annotators agreed.",
"Moreover, we found only one example illustrating a particular inference type tested in IMPPRES (the presupposition of possessed definites).",
"All others were tagged as existence presuppositions and conversational implicatures (i.e. loose inferences dependent on world knowledge).",
"The union of annotations was much larger: 42% of examples were identified by at least one annotator as a presupposition or implicature (51 presuppositions and 42 implicatures, with 10 sentences receiving divergent tags).",
"However, of these, only 23 presuppositions and 19 implicatures could reliably be used to learn pragmatic inference (in 14 cases, the given tag did not match the pragmatic inference, and in 27 cases, computing the inference did not affect the relation type).",
"Again, the large majority of implicatures were conversational, and most presuppositions were existential, and generally not linked to particular lexical triggers (e.g., topic marking).",
"We conclude that the MultiNLI dataset at best contains some evidence of loose pragmatic reasoning based on world knowledge and discourse structure, but almost no explicit information relevant to lexically triggered pragmatic inference, which is of the type tested in this paper.",
"Data Generation.",
"IMPPRES consists of semiautomatically generated pairs of sentences with NLI labels illustrating key properties of implicatures and presuppositions.",
"We generate IMPPRES using a codebase developed by Warstadt et al. (2019a) and significantly expanded for the BLiMP dataset (Warstadt et al., 2019b).",
"The codebase, including our scripts and documentation, are publicly available.",
"5 Each sentence type in IMPPRES is generated according to a template that specifies the linear order of the constituents in the sentence.",
"The constituents are sampled from a vocabulary of over 3000 lexical items annotated with grammatical features needed to ensure morphological, 5 github.com/alexwarstadt/data generation Premise Hypothesis Relation type Logical label Pragmatic label Item type some not all implicature ( + to ) neutral entailment target not all some implicature ( to + ) neutral entailment target some all negated implicature ( + ) neutral contradiction target all some reverse negated implicature ( + ) entailment contradiction target not all none negated implicature ( ) neutral contradiction target none not all reverse negated implicature ( ) entailment contradiction target all none opposite contradiction contradiction control none all opposite contradiction contradiction control some none negation contradiction contradiction control none some negation contradiction contradiction control all not all negation contradiction contradiction control not all all negation contradiction contradiction control Table 2: Paradigm for the scalar implicature datasets, with (cid:104) some, all (cid:105) as an example.",
"syntactic, and semantic well-formedness.",
"All sentences generated from a given template are structurally analogous up to the specified constituents, but may vary in sub-constituents.",
"For instance, if the template calls for a verb phrase, the generated constituent may include a direct object or complement clause, depending on the argument structure of the sampled verb.",
"See 6.1 and 7.1 for descriptions of the sentence types in the implicature and presupposition data.",
"Generating data lets us control the lexical and syntactic content so that we can guarantee that the sentence pairs in IMPPRES evaluate the desired phenomenon (see Ettinger et al., 2016, for related discussion).",
"Furthermore, the codebase we use allows for greater lexical and syntactic variety than in many other templatic datasets (see discussion in Warstadt et al., 2019b).",
"One limitation of this methodology is that generated sentences, while generally grammatical, often describe highly unlikely scenarios, or include low frequency combinations of lexical items (e.g., Sabrina only reveals this pasta ).",
"Another limitation is that generated data is of limited use for training models, since it contains simple regularities that supervised classi-fiers may learn to exploit.",
"Thus, we create IMPPRES solely for the purpose of evaluating NLI models trained on standard datasets like MultiNLI.",
"Models.",
"Our experiments evaluate NLI models trained on MultiNLI and built on top of three sentence encoding models: a bag of words (BOW) model, InferSent (Conneau et al., 2017), and BERT-Large (Devlin et al., 2019).",
"The BOW and InferSent models use 300D GloVe embeddings as word representations (Pennington et al., 2014).",
"For the BOW baseline, word embeddings for premise and hypothesis are separately summed to create sentence representations, which are concatenated to form a single sentence-pair representation which is fed to a logistic regression softmax classifier.",
"For the InferSent model, GloVe embeddings for the words in premise and hypothesis are respectively fed into a bidirectional LSTM, after which we concatenate the representations for premise and hypothesis, their difference, and their element-wise product (Mou et al., 2016).",
"BERT is a multilayer bidirectional transformer pretrained with the masked language modelling and next sequence prediction objectives, and finetuned on the MultiNLI dataset.",
"We concatenate the premise and hypothesis after a special [CLS] token and separated them with the [SEP] token.",
"The BERT representation for the [CLS] token is fed into classifier.",
"We use Huggingface's pre-trained BERT trained on Toronto books (Zhu et al., 2015).",
"6 The BOW and InferSent models have development set accuracies of 49.6% and 67.6%.",
"The development set accuracy for BERT-Large on MultiNLI is 86.6%, similar to the results achieved by (Devlin et al., 2019), but somewhat lower than state-of-the-art (currently 90.8% on test from the ensembled RoBERTa model with long pretraining optimization, Liu et al. 2019).",
"The scalar implicature portion of IMPPRES includes six datasets, each isolating a different scalar implicature trigger from six types of lexical scales (of the type described in 2): determiners (cid:104) some, all (cid:105) , connectives (cid:104) or, and (cid:105) , modals (cid:104) can, have to (cid:105) , numerals (cid:104) 2,3 (cid:105) , (cid:104) 10,100 (cid:105) , scalar adjectives, and",
"6 github.com/huggingface/pytorch-pretrained-BERT/",
"verbs, e.g., (cid:104) good, excellent (cid:105) , (cid:104) run, sprint (cid:105) .",
"Examples pairs of each implicature trigger can be found in Table 4 in the Appendix.",
"For each type, we generate 100 paradigms, each consisting of 12 unique sentence pairs, as shown in Table 2. The six target sentence pairs comprise two main relation types: implicature' and negated implicature'.",
"Pairs tagged as implicature' have a premise that implicates the hypothesis (e.g., some and not all ).",
"For negated implicature', the premise implicates the negation of the hypothesis (e.g., some and all ), or vice versa (e.g., all and some ).",
"Six control pairs are logical contradictions, representing either scalar opposites' (e.g., all and none ), or negations' (e.g., not all and all ; some and none ), probing the models' basic grasp of negation.",
"As mentioned in 2.1, implicature computation is variable and dependent on the context of utterance.",
"Thus, we anticipate two possible rational behaviors for a MultiNLI-trained model tested on an implicature:",
"(a) be pragmatic, and compute the implicature, concluding that the premise and hypothesis are in an entailment' relation,",
"(b) be logical, i.e., consider only the literal content, and not compute the implicature, concluding they are in a neutral' relation.",
"Thus, we measure both possible conclusions, by tagging sentence pairs for scalar implicature with two sets of NLI labels to reflect the behavior expected under logical and prag-matic modes of inference, as shown in Table 2. 6.2 Implicatures Results & Discussion We first evaluate model performance on the controls, shown in Figure 2. Success on these controls is a necessary condition for us to conclude that a model has learned the basic function of negation ( not , none , neither ) and the scalar relationship between terms like some and all .",
"We find that BERT performs at ceiling on control conditions for all implicature types, in contrast with InferSent and BOW, whose performance is very variable.",
"Since only BERT passes all controls, its results on the target items are most interpretable.",
"Full results for all models and target conditions by implicature trigger are in Figures 813 in the Appendix.",
"For connectives, scalar adjectives and verbs, the BERT model results correspond neither to the hypothesized pragmatic nor logical behavior.",
"In fact, for each of these subdatasets, the results are consistent with a treatment of scalemates (e.g., and and or ; good and excellent ) as synonyms, e.g. it evaluates the negated implicature' sentence pairs as entailment' in both directions.",
"This reveals a coarse-grained knowledge of these meanings that lacks information about asymmetric informativity relations between scalemates.",
"Results for modals ( can and have to ) are split between the three labels, not showing any predicted logical or pragmatic pattern.",
"We conclude that BERT has insuf-ficient knowledge of the meaning of these words.",
"In addition to pragmatic and logical interpretations, numerals can also be interpreted as exact cardinalities.",
"We thus predict three different behaviors: logical at least n , pragmatic at least n , and exactly n .",
"We observe that results are inconsistent: neither the exactly nor at least interpretations hold across the board.",
"For the determiner dataset ( some all ), Figure 4 breaks down the results by condition and shows that BERT behaves as though it performs pragmatic and logical reasoning in different conditions.",
"Overall, it predicts a pragmatic relation more frequently (55% vs. 36%), and only 9% of results are consistent with neither mode of reasoning.",
"Furthermore, the proportion of pragmatic reasoning shows consistent effects of sentence order (i.e., whether the implicature trigger is in the premise or the hypothesis), and the presence of negation in one or both sentences.",
"Pragmatic reasoning is consistently higher when the implicature trigger is in the premise, which we can see in the results for negated implicatures: the some all condition shows more pragmatic behavior compared to the all some condition (a similar behavior is observed with the not all vs. none conditions).",
"Generally, the presence of negation lowers rates of pragmatic reasoning.",
"First, the negated implicature conditions can be subdivided into pairs with and without negation.",
"Among the negated ones, pragmatic reasoning is lower than for non-negated ones.",
"Second, having negation in the premise rather than the hypothesis makes pragmatic reasoning lower: among pairs tagged as direct implicatures ( some vs. not all ), there is higher pragmatic reasoning with non-negated some in the premise than with negated not all .",
"Finally, we observe that pragmatic rates are lower for some vs. not all than for some vs. all .",
"In this final case, pragmatic reasoning could be facilitated by explicit presentation of the two items on the scale.",
"In sum, for the datasets besides determiners, we find evidence that BERT fails to learn even the logical relations between scalemates, ruling out the possibility of computing scalar implicatures.",
"It remains possible that BERT could learn these logical relations with explicit supervision (see Richard-Presuppositions Label Item Premise Hypothesis Type *Trigger Prsp entailment target *Trigger Neg.",
"Only the determiner dataset was informative in showing the extent of the NLI BERT model's pragmatic reasoning, since it alone showed a fine-grained enough understanding of the semantic relationship of the scalemates, like some and all .",
"In this setting BERT returned impressive results showing a high proportion of pragmatic reasoning compared to logical reasoning, which was affected by sentence order and presence of negation in a predictable way.",
"The presupposition portion of IMPPRES includes eight datasets, each isolating a different kind of presupposition trigger.",
"The full set of triggers is shown in Table 5 in the Appendix.",
"For each type, we generate 100 paradigms, with each paradigm consisting of 19 unique sentence pairs.",
"(Examples of the sentence types are in Table 1).",
"Of the 19 sentence pairs, 15 contain target items.",
"The first target item tests whether the model correctly determines that the presupposition trigger entails its presupposition.",
"The next two alter the presupposition, either negating it, or replacing a constituent, leading to contradiction and neutrality, respectively.",
"The remaining 12 show that the relation between the trigger and the (altered) presupposition is not affected by embedding the trigger under various entailment-canceling operators.",
"4 control items are designed to test the basic effect of entailment-canceling operatorsnegation, modals, interrogatives, and conditionals.",
"In each control, the premise is a presupposition trigger embedded under an entailment-canceling operator, and the hypothesis is an unembedded sentence containing the trigger.",
"These controls are neces-Figure 5: Results on Controls (Presuppositions).",
"The results from presupposition controls are in Figure 5.",
"BERT performs well above chance on each control (acc. > 0 . 33 ), whereas BOW and InferSent perform at or below chance.",
"In the negated condition, BERT correctly identifies that the trigger is contradicted by its negation 100% of the time, e.g., Jo's cat didn't go contradicts Jo's cat went .",
"In the other conditions, it correctly identifies the neutral relation the majority of the time, e.g., Did Jo's cat go?",
"is neutral with respect to Jo's cat went .",
"This indicates that BERT mostly learns that negation, modals, interrogatives, and conditionals cancel classical entailments, while BOW and InferSent do not capture the ordinary behavior of these common operators.",
"Next, we test whether models identify presuppositions of the premise as entailments, e.g., that Jo's cat went entails that Jo has a cat .",
"Recall from 2.2 that this is akin to a listener accommodating a presupposition.",
"The results in Figure 6 show that each of the three models accommodates some presuppositions, but this depends on both the nature of the presupposition and the model.",
"For instance, the BOW and InferSent models accommodate presuppositions of nearly all trigger types at well above chance rates (acc.",
"(cid:29) 33% ).",
"For the uniqueness presupposition of clefts, these models generally correctly predict an entailment (acc. > 90%), but for most triggers, performance is less reliable.",
"By contrast, BERT's behavior is bimodal.",
"It always accommodates the existence presuppositions of clefts and possessed definites, as well as the presupposition of only , but almost never accommodates any presupposition involving numeracy, e.g. Both flowers that bloomed died entails Figure 6: Results for the unembedded trigger paired with positive presupposition.",
"There are exactly two flowers that bloomed .",
"7 Finally, we evaluate whether models predict that presuppositions project out of entailment canceling operators (e.g., that Did Jo's cat go? entails that Jo has a cat ).",
"We can only consider such a prediction as evidence of projection if two conditions hold:",
"(a) the model correctly identifies that the relevant operator cancels entailments in the control from the same paradigm (e.g., Did Jo's cat go? is neutral with respect to Jo's cat went ), and",
"(b) the model identifies the presupposition as an entailment when the trigger is unembedded in the same paradigm (e.g. Jo's cat went entails Jo has a cat ).",
"Otherwise, a model might correctly predict entailment essentially by accident if, for instance, it systematically ignores negation.",
"For this reason, we filter out results for the target conditions that do not meet these criteria.",
"Figure 7 shows results for the target conditions after filtering.",
"While InferSent rarely predicts that presuppositions project, we find strong evidence that the BERT and BOW models do.",
"Specifi-cally, they correctly identify that the premise entails the presupposition (acc. 80% for BERT, acc. 90% for BOW).",
"Furthermore, BERT is the only model to reliably identify (i.e., over 90% of the time) that the negation of the presupposition is contradicted.",
"These results hold irrespective of the entailment canceling operator.",
"No model reliably performs above chance when the presupposition is altered to be neutral (e.g., Did Jo's cat go? is neu-7 The presence of exactly might contribute to poor performance on numeracy examples. We suspect MultiNLI annotators may have used it disproportionately for neut. hypotheses. Figure 7: Results for presupposition target conditions involving projection. tral with respect to Jo has a cat ).",
"It is surprising that the simple BOW model can learn some of the projective behavior of presuppositions.",
"One explanation for this finding is that many of the key features of presupposition projection are insensitive to word order.",
"If a lexical presupposition trigger is present at all in a sentence, a presupposition will generally arise irrespective of its position in the sentence.",
"There are some edge cases where this heuristic is insufficient, but IMPPRES is not designed to test such cases.",
"To summarize, training on NLI is sufficient for all models we evaluate to learn to accommodate presuppositions of a wide variety of unembedded triggers, though BERT rejects presuppositions involving numeracy.",
"Furthermore, BERT and even the BOW model appear to learn the characteristic projective behavior of some presuppositions.",
"We observe some encouraging results in 67.",
"We find strong evidence that BERT learns scalar implicatures associated with determiners some and all .",
"Pragmatic or logical reasoning was not diagnosable for the other scales, whose meaning was not fully understood by our models (as most scalar pairs were treated as synonymous).",
"In the case of presuppositions, the BERT NLI models, and BOW to some extent, perform well on a number of our subdatasets ( only , cleft existence, possessive existence, questions).",
"For the other subdatasets, the models did not perform as expected on the basic unembedded presupposition triggers, again suggesting the model's lack of knowledge of the basic meaning of these words.",
"Though their behavior is far from systematic, this is suggestive evidence that some NLI models can perform in ways that correlate with human-like pragmatic behavior.",
"Given that MultiNLI contains few examples of the type found in IMPPRES (see 4), where might our positive results come from?",
"There are two potential sources of signal for the BERT model: NLI training, and pretraining (either BERT's masked language modeling objective or its input word em-beddings).",
"NLI training provides specific examples of valid (or invalid) inferences constituting an incomplete characterization of what commonsense inference is in general.",
"Since presuppositions and scalar implicatures triggered by specific lexical items are largely absent from the MultiNLI data used for NLI training, any positive results on IMPPRES would likely use prior knowledge from the pretraining stage to make an inductive leap that pragmatic inferences are valid commonsense inferences.",
"The natural language text used for pretraining certainly contains pragmatic information, since, like any natural language data, it is produced with the assumption that readers are capable of pragmatic reasoning.",
"Maybe this induces patterns in the data that make the nature of those assumptions recoverable from the data itself.",
"This work is an initial step towards rigorously investigating the extent to which NLI models learn semantic versus pragmatic inference types.",
"We have introduced a new dataset IMPPRES for probing this question, which can be reused to evaluate pragmatic performance of any NLI given model.",
"This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 1850208 awarded to A. Warstadt.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.",
"Thanks to the FAIR NLP & Conversational AI Group, the Google AI NLP group, and the NYU ML 2 , including Sam Bowman, He He, Phu Mon Htut, Katharina Kann, Haokun Liu, Ethen Perez, Richard Pang, Clara Vania for discussions on the topic, and/or feedback on an earlier draft.",
"Additional thanks to Marco Baroni, Hagen Blix, Emmanuel Chemla, Aaron Steven White, and Luke Zettlemoyer for insightful comments."
] | [
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective",
"result",
"abstain",
"result",
"result",
"objective",
"objective",
"objective",
"result",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Despite recent progress of pre-trained language models on generating fluent text, existing methods still suffer from incoherence problems in long-form text generation tasks that require proper content control and planning to form a coherent high-level logical flow.",
"In this work, we propose PLANET, a novel generation framework leveraging autoregressive self-attention mechanism to conduct content planning and surface realization dynamically.",
"To guide the generation of output sentences, our framework enriches the Transformer decoder with latent representations to maintain sentence-level semantic plans grounded by bag-of-words.",
"Moreover, we introduce a new coherence-based contrastive learning objective to further improve the coherence of output.",
"Extensive experiments are conducted on two challenging long-form text generation tasks including counterargument generation and opinion article generation.",
"Both automatic and human evaluations show that our method significantly outperforms strong baselines and generates more coherent texts with richer contents.",
"Neural sequence-to-sequence (seq2seq) models are dominant methods for text generation nowadays, which are trained to maximize the log-likelihood over targets in an end-to-end fashion (Cho et al., 2014).",
"Recently, pre-trained methods such as GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020) have achieved promising results by leveraging large-scale data.",
"While these models can generate fluent results, they still fall short of producing coherent long-form texts with multiple sentences (Dou et al., 2021).",
"Long text generation, especially opinion generation, usually requires the model to (1) conduct proper content selection and ordering (i.e., what to say and when to say it ) to form a coherent high-level logical flow, and (2) appropriately reflect the BART Outputs (1) Monied interests will have a large influence in elections.",
"text plans into final outputs (i.e., how to say it ).",
"We present an example of counter-argument generation in Figure 1: given a statement on a controversial topic and a set of keyphrases as guidance talking points, the task aims to produce an argument with a different stance to refute the statement (Hua et al., 2019).",
"Human writer assigns keyphrases for each sentence to form a coherent logical flow (e.g., corporations easily tap into public funding \" they also have large influence on government \" \" the current government is still corrupt \") and produces the final counter-argument that \" public funding won't solve the election problems \".",
"In contrast, although BART learns to include keyphrases and generate an argument relevant to the statement, it suffers from incoherence issues such as incorrect usage of keyphrases (not corporations but election that be manipulated and controlled ) and wrong stance ( public funding would make government less corrupt ), and fails to maintain smooth transitions between sentences (e.g., sentence 2 and 2288 3 are unrelated) and form a coherent text.",
"To solve the above defects, various text planning methods were proposed to improve the coherence of the generated text.",
"The first type of methods (Kang and Hovy, 2020; Fu et al., 2020; Kong et al., 2021) leverage a latent variable as a global plan to guide the generation process, as illustrated in Figure 2",
"(a).",
"However, these methods do not consider fine-grained sentence-level planning.",
"The second line of methods (Hua and Wang, 2020; Goldfarb-Tarrant et al., 2020) first produce sentence-level content plans, and then pass content plans to a surface realization module to generate the output words, as shown in Figure 2",
"(b).",
"Nevertheless, the planning and surface realization components are disjointed and may lead to cascading errors (Hua et al., 2021).",
"In this work, we propose PLANET , a novel text generation framework that dynamically performs content planning and surface realization in autoregressive Transformers.",
"As shown in Figure 2",
"(c), for each target sentence, an autoregressive decoder first performs dynamic content planning by producing a latent representation (SN j ) as a semantic guidance, and then generates the sentence words.",
"Both the content planning and surface realization are achieved dynamically by the autoregressive self-attention in a unified way: to generate a sentence (e.g., sentence 3 ), the latent representation (SN 3 ) attends the previous latent representations (SN 1 , 2 , solid blue arrows) and previous context (sentence 1 and 2 , dashed blue arrows) to plan its overall semantic content; Then, each output position in the sentence attends the corresponding latent representation (SN 3 , solid green arrow) and the previous words (dashed green arrows), and optionally select keyphrases (gray arrow) to decide the exact wording.",
"To supervise the latent representations, we further introduce a sentence-level bag-of-words prediction auxiliary task to provide supervision signals of the lexical semantics of the corresponding sentence.",
"In this way, our framework can be trained end-to-end and easily applied to pre-trained autoregressive Transformers.",
"Furthermore, to empower our model to distinguish coherent and incoherent targets and generate more coherent outputs, we propose a novel coherence-based contrastive learning objective with different strategies to construct negative samples.",
"We evaluate our model on two long-form opinion generation tasks: (1) counter-argument globalplan",
"generation with Reddit/ChangeMyView dataset, and (2) opinion article generation from the New York Times Opinion corpus.",
"Automatic evaluations show that our proposed method significantly outperforms strong baselines and generates more coherent texts with richer contents.",
"Human evaluations further indicate that our model can properly leverage guidance keyphrases and generate better results on both datasets.",
"The overall contributions of our work are: A unified framework that dynamically conducts content planning and surface realization by leveraging the autoregressive self-attention, with a novel sentence-level bag-of-words auxiliary task to guide the semantic content of each sentence; A new coherence-based contrastive learning method with different negative sample construction strategies to improve the coherence of outputs; Our approach outperforms strong baselines for both automatic and human evaluations on two challenging long-form text generation tasks.",
"Text Planning for Neural Generation.",
"Traditional text generation pipeline leverages text planning component to decide on the high-level structures (McKeown, 1985; Reiter and Dale, 1997; Hovy, 1990; Carenini and Moore, 2006).",
"Earlier work incorporates text planning into neural seq2seq structures by introducing hierarchical decoders (Yao et al., 2019; Moryossef et al., 2019; 2289 Shen et al., 2019).",
"However, these methods are hard to be applied to pre-trained models because of the modifications of model architecture.",
"Several studies design separate modules for text planning and surface realization (Hua and Wang, 2020; Tan et al., 2021; Goldfarb-Tarrant et al., 2020), which lead to a disconnection of the two components and often produce undesired outputs (Castro Ferreira et al., 2019).",
"Recently, Rashkin et al. (2020) present a memory-based model to keep track of the content usage and generate paragraphs recurrently.",
"Nevertheless, they do not consider sentence-level text planning which is critical to maintain high-level logical flow for opinion text generation.",
"Hua et al. (2021) propose a mixed language model to perform content selection and ordering.",
"However, they encode multiple content items separately and do not fully consider the interactions among content items.",
"In contrast to these prior studies, our model conducts sentence-level text planning and surface realization dynamically by introducing high-level latent representations for target sentences, and can be incorporated into pre-trained autoregressive Transformers.",
"Coherent Long-form Text Generation.",
"Recent work tackles this problem on the tasks including story generation (Fan et al., 2019; Xu et al., 2020), paragraph completion (Kang and Hovy, 2020), text infilling (Huang et al., 2020), long-form conversation (Xu et al., 2021) and news article generation (Rashkin et al., 2020; Tan et al., 2021).",
"To solve the incoherence issue, one type of work adopts the plan-then-generate strategy as discussed above.",
"Some work also incorporates discourse and structured information into generation process to improve output coherence (Jiang et al., 2021; Ji and Huang, 2021; Bosselut et al., 2018).",
"Recently, Guan et al. (2021) propose two auxiliary objectives of similarity prediction and order discrimination to improve coherence.",
"In this work, we focus on long-form opinion text generation which requires an appropriate combination of credible talking points with rigorous reasoning (Hua et al., 2019), and apply dynamic content planning with a coherence-based contrastive objective to improve output coherence.",
"Controllable Text Generation.",
"Our work is closely related to controllable generation (Prabhu-moye et al., 2020).",
"In this regard, typical studies manipulate sentiments (Hu et al., 2017), style (Gao et al., 2019; Du and Ji, 2021; Hu et al., 2021), syntax (Chen et al., 2019), and keywords (Keskar et al., 2019; He et al., 2020; Wu et al., 2020) to steer the generation process.",
"We use topical keyphrases as guidance talking points and require the model to properly organize and reflect keyphrases for long-form opinion text generation.",
"Task Description.",
"We follow the previous work (Hua and Wang, 2020) and model the long-form opinion generation task by considering the input of (1) a statement x which can be a proposition for argument generation or a title for opinion-article generation, and (2) a set of unordered keyphrases m = { m i } related to the statement, serving as topical guidance signal.",
"The output y is an opinion text consisting of multiple sentences and properly reflects the keyphrases in a coherent way.",
"Our framework is based on the seq2seq structure, and we adopt BART (Lewis et al., 2020) as the base model.",
"1 The overall framework is shown in Figure 3. The bi-directional encoder first encodes the statement and keyphrases, and the decoder then generates the output in an autoregressive manner: y = argmax n (cid:89) t =1 P ( y t | y 1: t 1 , x , m ) , (1) where n is the number of target words.",
"The statement and keyphrases are concatenated, with a segmenter inserted between adjacent keyphrases to indicate the keyphrase boundary.",
"We conduct content planning and surface realization dynamically by leveraging the autoregressive self-attention mechanism.",
"For each target sentence, we introduce a latent representation SN to represent its global semantic information and guide surface realization ( 3.2), then the sentence words attend the latent representation and dynamically select keyphrases ( 3.3).",
"After that, a sentence-level bag-of-words planning is introduced to enhance the latent representations ( 3.4).",
"Finally, we devise a contrastive learning (CL) objective to further improve the coherence of the output text ( 3.5).",
"We introduce a latent representation for each target sentence to represent the overall semantic information and guide the generation of the sentence words.",
"1 Our method can be also applied to other autoregressive pre-trained language models.",
"In particular, we insert a special token [SN] before every target sentence, and regard the hidden states of the decoder at the positions corresponding to [SN] as the latent representations of the target sentences.",
"This has been shown effective by previous work (Guan et al., 2021; Li et al., 2021).",
"The workflow of our dynamic planning and realization is shown in Figure 4. For the vanilla autoregressive decoder, the generation of each token only depends on the previously generated tokens.",
"In our framework, when producing the j -th output sentence y ( j ) , the latent representation SN j is first obtained by attending the previous latent representations SN 1: j 1 and words in previous sentences y (1: j 1) .",
"Then for sentence-level surface realization, each token in the current sentence y ( j ) attends the previously generated words and latent representations SN 1: j 1 , as well as the current latent representation SN j as the guidance.",
"A unique advantage of such modeling is that the content planning and surface realization can be performed simultaneously and incorporated into any pre-trained autoregressive language models, further optimized in an end-to-end fashion .",
"Based on the guidance of latent representations, each sentence word conducts content selection by incorporating keyphrases into decoder hidden states to decide which keyphrases to be reflected during generation.",
"We first feed the keyphrases to the encoder to obtain hidden representations.",
"We then construct a keyphrase memory bank B by gathering the top layer representations of the segment tokens (each keyphrase is represented by the segment token before it).",
"After that, a content selection layer retrieves keyphrase information from the keyphrase bank and integrates the selected information into the decoding process.",
"Content Selection Layer.",
"At each decoding step t , the top layer representation of the Transformer decoder h t attends the keyphrase memory bank via multi-head attention: c t = MH-ATTENTION ( h t , B , B ) , (2) where c t is a context vector that embeds the selected keyphrase information, h t is the query, and B acts as the key and value for multi-head attention.",
"Then we incorporate the keyphrase context c t into the decoder hidden state via a feed-forward layer followed by a residual connection (RC): h dt = RC ( W s tanh ( W h h t + W c c t + b s ) , h t ) .",
"Finally, the enhanced hidden state h dt will be passed to another feed-forward layer with softmax to estimate the probability of each output word:",
"We propose an auxiliary task of sentence-level bag-of-words (BOW) planning to supervise the latent representations.",
"The goal is to ground the meaning of the latent representations with the bag-of-words (Fu et al., 2020) of target sentences to reflect the global semantic plans.",
"Formally, we define the BOW of the j -th target sentence z j as a categorical distribution over the entire vocabulary: p ( z j | SN j ) = softmax ( MLP ( SN j )) , (5) where MLP ( ) is parameterized as a multi-layer feed-forward network.",
"We expect this distribution to capture the overall semantic plan of the corresponding sentence, and enhance SN to guide the surface realization of sentence words by conditioning the probability of each word on the latent representations: p ( y t | y 1: t 1 , SN 1: s jt ) , where s j t denotes the sentence index of the token y t .",
"This conditional probability can be naturally satisfied by the autoregressive decoding process.",
"The loss of the task is to maximize the likelihood of predicting the BOW of each target sentence: LBOW = 1 J (cid:88) j (cid:88) l log p ( z jl | SN j ) , (6) where J is the number of target sentence, and p ( z jl | SN j ) denotes the estimated probability of the l -th element in the bag of words for the j -th target sentence.",
"We further design a contrastive learning (CL)-based training objective to enhance the content planning and drive our model to learn a preference of coherent outputs over incoherent ones.",
"Negative Sample Construction.",
"One challenge for contrastive learning is how to construct negative samples to effectively train the model towards the desired goals.",
"We consider the original target as a positive sample representing a logically coherent output with gold planning, and construct negative samples as incoherent ones.",
"In particular, for a positive target, we create 4 negative samples based on the following strategies: (1) SHUFFLE , where we randomly shuffle the target sentences to encourage the model to learn the correct sentence order; (2) REPLACE , where we randomly replace 50% of the original target sentences with random sentences from the corpus to facilitate the model to learn better content organization; (3) DIFFERENT , where we completely replace the original target sentences with a new set that are annotated as the target of a different input from the corpus; (4) MASK , where we randomly mask 20% of the nonstop target words that are related to any keyphrases from the keyphrase set, and adopt BART to fill the masked tokens since BART is naturally a denoising model.",
"We enforce the filled negative target to be different from the original one.",
"Coherence-based Contrastive Loss.",
"Since we aim to encourage the model to distinguish between coherent and incoherent targets and generate outputs with coherent logical flows, we design a novel coherence-based contrastive learning objective.",
"Given a source-target pair, the model projects the output feature from the content selection layer to a coherence score between 0 and 1. Formally, for the i -th source-target pair, we enforce the score of the original target ( r + i ) to be larger than all corresponding negatives ( { r ik } ) by a fixed margin : LCL ( r + i , { r ik } ) = (cid:88) k max (0 , + r ik r + i ) , (7) r + i = F ( AvgPool ( W cl H d + i + b cl )) , (8) r ik = F ( AvgPool ( W cl H d ik + b cl )) , (9) where F ( ) is a nonlinear transformation with sigmoid, H d + i and H d ik are output features from the content selection layer for the positive and the k -th negative sample, and AvgPool ( ) is the average pooling to compute a fixed-size vector.",
"In this way, we expect the model to assign higher probability to the coherent target than incoherent ones.",
"We jointly optimize our model for content planning and surface realization by combining the objectives for the sentence-level BOW planning ( LBOW ), the word-level generation by cross-entropy loss over the target tokens ( LGEN ) , and the contrastive learning loss ( LCL ): L = LGEN + LBOW + LCL , where and are tuned as hyper-parameters.",
"We conduct experiments on two long-form opinion generation datasets of distinct domains: (1) Argument Generation ( ArgGen ) (Hua et al., 2019), where the model is required to generate a counterargument to refute a given proposition; (2) Opinion Article Generation ( OpinionGen ) (Hua and Wang, 2020), to produce an opinion article given a title.",
"The data statistics are shown in Table 1. Argument Generation.",
"We first apply data from Reddit r/ChangeMyView (CMV) for argument generation.",
"We consider the original poster (OP) title as the statement, and the high-quality argument replies (with community endorsement) as the targets.",
"Note that we consider the full argument replies as targets.",
"The noun phrases and verb phrases that contain at least one topic signature word (Lin and Hovy, 2000) are extracted to form the guidance keyphrases.",
"Opinion Article Generation.",
"For generating opinion articles, we consider samples from the New York Times (NYT) corpus (Sandhaus, 2008), with articles whose taxonomy labels include Top/Opinion .",
"The articles with less than three sentences or more than 10 sentences are discarded.",
"We further exclude articles containing more than 250 tokens considering the limited computing resources.",
"57,600 articles are randomly selected as the final dataset.",
"We apply the same method as in argument generation to extract topical guidance keyphrases.",
"The article title is regarded as the input statement.",
"We compare our model against the following baselines : (1) RETRIEVAL (Stab et al., 2018) which retrieves targets based on TF-IDF weights of words from the training set.",
"We keep the top-ranked results as outputs; (2) HIERPLAN (Hua et al., 2019) which is an end-to-end trained generation model with a hierarchical decoder to perform sentence-level content planning and surface generation; (3) FULLSEQ 2 SEQ (Schiller et al., 2021) where we fine-tune BART with keyphrases concatenated to the input statements; (4) SSPLANER (Kang and Hovy, 2020) is a global planning method which first conducts content prediction and then guides the surface generation with the predicted contents; (5) SEPPLAN is a two-stage planning model similar to Hua and Wang (2020), where we first fine-tune a BART as the planner to generate the ordered keyphrase plans for each target sentence, and then fine-tune another BART as the generator to produce final outputs based on the statement and keyphrase plans.",
"The details of SEPPLAN are in the Appendix A.2.",
"We use the BART-base version in all experiments for both our method and baselines.",
"We truncate both input statement and output target to at most 256 tokens during training.",
"For the BOW planning loss ( LBOW ), we consider the salient content words as the ground-truth bag of words for each target sentence.",
"For the training objective, we set as 0.2 for ArgGen and 0.3 for OpinionGen, and as 0.2 based on the validation performance.",
"The margin for contrastive loss is set as 0.5 for ArgGen and OpinionGen according to the validation performance.",
"We optimize our model with AdamW (Loshchilov and Hutter, 2017).",
"During the decoding time, we apply nucleus sampling (Holtzman et al., 2019) with a cumulative probability threshold of 0.9, and the maximum of generation steps are 150 for ArgGen and 200 OpinionGen.",
"More training and decoding details are in the Appendix A.2.",
"We first evaluate our model with BLEU (Pap-ineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Denkowski and Lavie, 2014).",
"The results are shown in Table 2. Our PLANET w/o CL model (without contrastive loss) consistently outperforms all baseline methods.",
"In particular, compared with FULLSEQ 2 SEQ and SSPLANER which are also fine-tuned based on BART with the same inputs, the substantial improvements underscore the effectiveness of our dynamic content planning to generate better outputs.",
"Meanwhile, the significant lead over HIERPLAN indicates the importance of incorporating 2293 ArgGen OpinionGen System BLEU-2 ROUGE-2 METEOR Len.",
"content planning into pre-trained language models.",
"Furthermore, PLANET w/o CL significantly outperforms SEPPLAN , which confirms that the end-to-end training in our approach can mitigate the disconnection issue of the two-stage generation pipeline and produce superior results.",
"Among our model variants, removing content selection (w/o SEL.) and BOW planning (w/o BOW) both lead to performance decrease.",
"This demonstrates the importance of the components that help the model conduct effective content planning.",
"In addition, we observe that incorporating the contrastive loss (PLANET) brings performance gains on automatic results, especially with significant improvements on BLEU scores.",
"This suggests that our contrastive loss can guide the model to more precisely use keyphrases and reflect the keyphrase information in the outputs .",
"We provide further analysis on the keyphrase usage in Section 5.2.",
"Content Richness.",
"To evaluate content richness, we employ Distinct n -gram (Li et al., 2016) that calculates the number of distinct n -grams per output in Figure 5. RETRIEVAL achieves the highest distinct results on both datasets since it returns top-ranked human-written texts with the most distinct words.",
"Among generative methods, our dynamic Figure 6: Automatic evaluation on output coherence.",
"planning model PLANET w/o CL outperforms all baselines on both datasets.",
"In addition, after applying contrastive loss, our PLANET model generates even more unique n -grams.",
"The results imply our dynamic content planning and contrastive loss can enable the model to generate richer contents.",
"Automatic Evaluation on Coherence.",
"We fine-tune BERT (Devlin et al., 2019) on each dataset to automatically evaluate the output coherence, which predicts a score between 0 and 1 for each output.",
"The higher score indicates a more coherent output.",
"The coherence model details are in Appendix A.3.",
"The results are shown in Figure 6. Among all methods, PLANET achieves the highest coherence scores on both datasets, suggesting that our dynamic planning and contrastive loss are effective to improve the coherence of outputs.",
"In contrast, SEPPLAN has the lowest scores, indicating that decoupling planning and decoding stages may lead to cascading errors.",
"Compared to FULLSEQ 2 SEQ and SSPLANER , our PLANET w/o CL model without contrastive loss also maintains better coherence, which confirms that incorporating dynamic content planning essentially promotes coherence for long text generation.",
"Moreover, we observe that the results on OpinionGen are consistently better than 2294 System OpinionGen (%) ArgGen (%) PLANET 98.03 60.71 w/o SHUFFLE 96.20 59.30 w/o REPLACE 96.02 58.41 w/o DIFFERENT 96.11 59.95 w/o MASK 96.16 59.58 Table 3: Coherence scores for different negative strategies.",
"those on the ArgGen dataset.",
"A possible reason is that arguments in ArgGen are collected from social networks and contain more colloquial and informal expressions, making it harder to learn the implicit logical coherence.",
"We leave this for future work.",
"Ablation on Contrastive Sample Construction.",
"We study the contribution of each negative sample construction strategy for improving the coherence of the outputs.",
"As in Table 3, removing each strategy leads to a performance degradation, indicating the effectiveness of all types of negative samples to enhance the contrastive learning.",
"Among all negatives, removing REPLACE shows the most effects on both datasets.",
"We hypothesize that replacing target sentences breaks the original logical flow and thus is more likely to encourage the model to focus on the global coherence.",
"In contrast, DIFFERENT shows the least effects.",
"One possible explanation is that this strategy focuses more on topical relatedness between the input and output, instead of the logical flow within the output as the negative sample itself is inherently coherent.",
"We hire three proficient English speakers as human judges to evaluate model outputs on a scale of 1 (worst) to 5 (best) for: (1) topic relatedness which measures whether the output is relevant and consistent to the input; (2) coherence which measures the high-level logical flow and transition among sentences; and (3) content richness , measuring the amount of informative talking points and specific details.",
"We also ask judges to select top-ranked results based on the overall quality, and ties are allowed.",
"50 random samples are selected from each task.",
"The detailed guidelines of human evaluations are provided in the Appendix B. The results are shown in Table 4. Both our model variants achieve better results than FULLSEQ 2 SEQ on all aspects, underscoring the effectiveness of our dynamic planning to promote output coherence.",
"Moreover, introducing contrastive objective further improves output quality on the above aspects, and Task Model Rel.",
"the outputs are more likely to be top-ranked.",
"Overall, the human results verify the capability of our dynamic planning and contrastive objective to generate high-quality long-form texts.",
"Appropriateness of Keyphrase Usage.",
"We further study how keyphrases are utilized in outputs.",
"We first compute the percentage of keyphrases mentioned in outputs, as in the left of Figure 7. Among all models, SEPPLAN uses the least keyphrases in final outputs.",
"However, its intermediate planning results cover more than 95% of keyphrases.",
"This confirms that the two-stage method results in a disconnection problem between the planning module and the surface realization module, and the outputs are not guaranteed to reflect the plans.",
"Compared to FULLSEQ 2 SEQ and SSPLANER , our methods cover more keyphrases, suggesting that our dynamic planning and keyphrase selection are useful to help the model better incorporate the guidance signal into outputs.",
"We further select 50 random samples for both tasks and ask the same human judges to score the outputs from 1 (worst) to 5 (best) on the correctness of keyphrase usage: whether the model uses keyphrases adequately as main talking points when generating outputs.",
"Results in Figure 7 (right) indicate that our models tend to use more keyphrases and properly organize them in the out-2295 Statement : Absolute free speech doesnt exist real debate is about where to draw the line PLANET : 1 Absolute free speech exists in a conceptual , legal , and practical way.",
"2 The simple fact that it exists is obvious.",
"3 If you're going to say something in a legal way , at any time, the consequences should be the consequences.",
"4 It's not that absolute free speech doesn't exist, but the conceptual way you could interpret it is much more practical .",
"5 The internet has made it much more obvious that you could use it any way you wanted.",
"BOW : 1 free, speech, concept 2 simple, obvious 3 consequences, legal, illegal 4 freedom, case, practical 5 internet, easy FULLSEQ 2 SEQ : Absolute free speech exists in a conceptual (and probably legal) way.",
"There is the simple fact that absolute free speech exists in a legal way.",
"It's obvious what the consequences are for someone, but it can be done any time.",
"In many cases, it's very practical to do something like this in a legal way because the internet makes it very obvious.",
"Statement : Millions of mothers can't choose not to work PLANET : 1 Single mothers have to decide how to raise their children.",
"2 But the choice mothers have has often been made to work, juggling financial responsibilities that make it all the more difficult.",
"3 While it is true that many families have no such choice , the reality is the same : single mothers have little options .",
"4 And while it is true that a family of two or more lowers pay and is likely to take many times the cost of similar work , the reality is that it may not be that simple (...) BOW : 1 child, parent, children 2 work, choice, mother 3 choice, family, mother 4 work, pay, children, family FULLSEQ 2 SEQ : Crittenden is right about single mothers' choice to choose not to work, in her book \"the choice mothers make\" But the sad reality of working families is that it is the reality that Ms. Crittenden and many others, in juggling financial responsibilities, are forced to choose not to work.",
"If they are lucky enough to be able to keep their jobs, they can be at similar work as nannies.",
"But the sad reality is that the choice mothers make is no longer one wage earner (...) Figure 8: Sample outputs on ArgGen (Upper) and OpinionGen (Lower).",
"puts compared to all baseline methods.",
"Although on OpinionGen our contrastive model mentions fewer keyphrases, human judges rate it with higher scores for keyphrase usage.",
"We speculate that this can be attribute to the MASK strategy for negative sample construction in contrastive learning, which helps to improve the model ability on the appropriate usage of keyphrases.",
"The above results confirm that PLANET can properly utilize the keyphrases and reflect the contents in the outputs.",
"We show two sample outputs on both tasks and highlight the phrases relevant to the guidance",
"keyphrases in Figure 8. We can see that on both tasks, our model effectively leverages guidance keyphrases as main talking points, and properly organizes and reuses the keyphrases to form a coherent output.",
"In contrast, FULLSEQ 2 SEQ suffers from incoherence issues such as repetition (e.g., the first and second argument sentences) and inconsistent stance (e.g., choose not to work in generated opinion article).",
"This indicates that our dynamic planning is effective to guide the model to better leverage keyphrases in the outputs.",
"We also present the predicted BOW of our model for each generated sentence.",
"As can be seen, our model predicts most of the salient content words of the target sentences and effectively reflects the semantic plans in the generated sentences, suggesting that our latent representations are useful to capture the global semantic information of each sentence and conduct content planning during the generation process.",
"However, there is still a large gap compared with human written texts, inspiring the future work on long-form text generation.",
"More sample outputs are provided in Appendix D. 6 Conclusion We present a novel generation framework to dynamically conduct content planning and surface realization in large autoregressive Transformers by leveraging self-attention and high-level latent representations.",
"The latent representations are grounded by bag-of-words that measures the overall semantic plan of each target sentence.",
"We further introduce a novel coherence-based contrastive objective with different negative sample construction strategies to improve output coherence.",
"Experiment results on two opinion text generation tasks demonstrate that our model can generate high-quality outputs with better coherence and content richness.",
"We thank the anonymous reviewers, area chair, and senior area chairs for their constructive suggestions on our work.",
"We also thank Xinyu Hua for the helpful discussions.",
"Hou Pong Chan was supported by the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2), and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST).",
"Lifu Huang also thanks the support from the Amazon Research Awards.",
"We recognize that our method may generate fabricated and potentially harmful contents due to the systematic biases of pre-training using heterogeneous web corpora and the open-ended generation characteristics of the opinion generation tasks.",
"Therefore, we urge the users to carefully examine the ethical influence of the generated outputs and cautiously apply the system in real-world applications."
] | [
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"method",
"method",
"objective",
"method",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"method"
] |
[
"Conversational dialogue systems (CDSs) are hard to evaluate due to the complexity of natural language.",
"Automatic evaluation of dialogues often shows insufficient correlation with human judgements.",
"Human evaluation is reliable but labor-intensive.",
"We introduce a human-machine collaborative framework, HMCEval, that can guarantee reliability of the evaluation outcomes with reduced human effort.",
"HMCEval casts dialogue evaluation as a sample assignment problem, where we need to decide to assign a sample to a human or a machine for evaluation.",
"HMCEval includes a model confidence estimation module to estimate the confidence of the predicted sample assignment, and a human effort estimation module to estimate the human effort should the sample be assigned to human evaluation, as well as a sample assignment execution module that finds the optimum assignment solution based on the estimated confidence and effort.",
"We assess the performance of HMCEval on the task of evaluating malevolence in dialogues.",
"The experimental results show that HMCEval achieves around 99% evaluation accuracy with half of the human effort spared, showing that HMCEval provides reliable evaluation outcomes while reducing human effort by a large amount.",
"Conversational dialogue systems (CDSs) are often trained to generate responses given unstructured, open-domain dialogues.",
"Evaluation of CDS responses has drawn broad attention due to its crucial rule for CDS development (Deriu et al., 2020).",
"Broadly speaking, there are two approaches to perform dialogue evaluation: automatic evaluation and human judgements (Finch and Choi, 2020).",
"Automatic evaluation metrics such as appropriateness (Lowe et al., 2017), engagement (Zhang Corresponding author. et al., 2020), are efficient but have low agreement with human judgements due to the diversity of responses (Liu et al., 2016), especially for word-overlap based metrics, such as BLEU (Pa-pineni et al., 2002) and ROUGE (Lin and Hovy, 2002).",
"More recently, training based methods, e.g., ADEM (Lowe et al., 2017), RUBER (Tao et al., 2018) and contextualized methods, e.g. BERT-based RUBER (Ghazarian et al., 2019), have been shown to have better agreement with human judgements.",
"However, these methods are still not reliable enough: the Pearson correlation with human judgments is 0.44 for appropriateness (Lowe et al., 2017) and 0.55 for relevance (Ghazarian et al., 2019).",
"To guarantee reliability of evaluation outcomes, our current best practice is to use human judgements.",
"In terms of most evaluation aspects, e.g., appropriateness (Young et al., 2018), coherence (Ram et al., 2018) and empathy (Rashkin et al., 2019), human judgements simply show the highest reliability.",
"Obviously, human judgments are more labor-intensive than automatic evaluation (Deriu et al., 2020).",
"The flaws of automatic evaluation and the lack of speed and scalability of human evaluation limits the speed at which the community can develop more intelligent CDSs.",
"For example, as part of the daily research and development cycle of CDSs, we need to change the model design and retrain the model multiple times, on a daily or even hourly basis.",
"Even if there is a minor change, we need to verify its performance again each time.",
"For an-other example, CDS leaderboards are very popular recently as a means to provide platforms for fair comparison (Hou et al., 2019).",
"There are usually dozens of models to evaluate, and new models are introduced everyday.",
"Practical scenarios like the above two call for dialogue evaluation methods that are both reliable and efficient.",
"collaborative evaluation (HMCEval) framework for dialogue evaluation with the aim of balancing reliability and efficiency.",
"HMCEval formulates the dialogue evaluation task as a sample assignment problem, i.e., if the machine can provide accurate outcomes, most evaluation samples should be assigned to the machine; otherwise, we should assign more samples to human evaluators.",
"As shown in Figure 1, automatic evaluation has low reliability although the efficiency is high; human judgement has high reliability but it is labor-intensive; HMCEval beats the previous two methods in balancing reliability and efficiency.",
"Finding a good balance between reliability and efficiency is non-trivial as the two desiderata are often in conflict with each other.",
"Usually, reliability is improved at the expense of efficiency (Chaganty et al., 2018).",
"There are three main modules in human-machine collaborative evaluation (HMCEval), namely the model confidence estimation (MCE) module, the human effort estimation (HEE) module, and the sample assignment execution (SAE) module.",
"First, the MCE module measures the confidence of predicted evaluation for each dialogue response based sample.",
"Our implementation of MCE is based on three estimation methods, namely, BERT based maximum class probability (MCP), trust score (TS) (Jiang et al., 2018), and true class probability (TCP) (Corbi`ere et al., 2019).",
"TS and TCP have originally been introduced for images; we add a BERT layer to expand it to dialogues.",
"Second, the HEE module estimates the effort.",
"Our implementation is based on annotation time cost prediction by dialogue-related and worker-related features.",
"Third, the SAE module decides whether a dialogue response sample should be assigned to a human or a machine for evaluation by maximizing the confidence and minimizing the (human) effort.",
"We implement the module by integer linear programming (ILP).",
"We demonstrate the effectiveness of HMCEval on dialogue malevolence evaluation (Zhang et al., 2021).",
"The main reason we choose this particular task is that dialogue malevolence is highly related to social good (Xu et al., 2020; Shi et al., 2020), which is of vital importance for CDSs, but it is hard to evaluate because of the need of deep semantic understanding (Das et al., 2020).",
"We carry out experiments on the recently introduced malevolent dialogue response detection and classifying (MDRDC) dataset (Zhang et al., 2021).",
"Our results show that the proposed HMCEval framework significantly surpasses machine evaluation and human judgement in terms of balancing reliability and effort.",
"HMCEval achieves around 99% evaluation accuracy (compared to human evaluation) with as much as half of the human effort saved.",
"The results demonstrate that HMCEval can be used for reliable and efficient evaluation of CDSs since the accuracy is high and the effort is significantly reduced compared to fully human evaluation.",
"Automatic evaluation for CDSs includes untrained methods and learning based methods.",
"Early untrained methods, such as perplexity (Chen et al., 1998), and quality metrics BLEU (Papineni et al., 2002) and ROUGE (Lin and Hovy, 2002) are widely used for CDS but the aspects they evaluate are limited.",
"Recent work based on word embed-dings cover more aspects, such as distinct-n for diversity (Li et al., 2016) or average word embedding similarity for coherence (Luo et al., 2018).",
"Most untrained methods have low agreement with human judgements (Liu et al., 2016) because machine responses are highly diversified, although a few metrics have sufficient agreement with human, i.e., a Pearson correlation of 0.69 for coherence (Luo et al., 2018).",
"To address the problem of low agreement with human judgments, learning based methods have been developed (Novikova et al., 2017; Tao et al., 2018).",
"Lowe et al. (2017) propose ADEM to evaluate the appropriateness of responses.",
"Tao et al. (2018) propose RUBER, which shows better agreement with human judgments than ADEM.",
"RUBER is designed for relevance and similarity by blending relevance between the generated response with human ground truth and context.",
"Several methods utilize pretrained language models such as BERT for automatic evaluation.",
"Ghazarian et al. (2019) propose contextualized RUBER, which outperforms RUBER.",
"Similarly, a predictive engagement metric is built by utilizing user engagement score (Ghazarian et al., 2020); quality is evaluated by transformer based language models without reference response (Nedelchev et al., 2020).",
"The above methods cover more aspects and integrate linguistic features (Tao et al., 2018), thus the agreement with human judgement is higher than most word-overlap based methods.",
"However, for most of the metrics, the model performance still has space to improve, for instance, the accuracy of engagement is 0.76 (Ghazarian et al., 2020).",
"Our proposed HMCEval framework could be applied to these metrics and improve general evaluation reliability with an acceptable amount of human effort.",
"Human judgement is applied in common evaluation aspects including fluency, consistence, relevance, appropriateness, coherence, quality for CDSs (Finch and Choi, 2020).",
"It is reliable, yet expensive and time intensive, especially for large scale evaluation (Hou et al., 2019).",
"In order to guarantee reliability, agreement among different workers is needed, which makes the high effort problem more severe (Das et al., 2020).",
"Unlike the methods listed above, the HMCEval framework specifically aims to balance reliability and human effort for the evaluation of CDSs.",
"2.2 Human-machine collaboration Human-machine collaboration hybridizes machine prediction and human judgements.",
"Previous research mostly focuses on using human judgments to help label the low reliability samples (Callaghan et al., 2018; Kyono et al., 2018; Gates et al., 2020).",
"Earlier research gives human the output of an automatic model and lets human decide whether the model prediction is reliable (Lasecki et al., 2012).",
"However, people tend to ignore the predictions of a model if it makes mistakes (Dietvorst et al., 2015) since they are not tolerant to model mistakes.",
"In such cases, predictive results are not fully utilized and human effort increases.",
"At the same time, there is a possibility that human annotators mistakenly follow the outputs of a model with errors (Cum-mings, 2004).",
"Both situations lead to failure of human-machine collaboration.",
"The core problem is to determine when a human annotator should trust a model.",
"Confidence estimation for a model's prediction has been proposed to help improve overall accuracy, correctness etc. for human-machine collaboration.",
"Callaghan et al. (2018) develop a hybrid cardiogram classification human-machine collaborative (HMC) framework, which achieves better performance than a classifier by itself and uses less expert resources compared to expert classification by itself.",
"Kyono et al. (2018) develop a Man and Machine Mammography Oracle that improves overall breast cancer diagnostic accuracy, while reducing the number of radiologist readings.",
"Gates et al. (2020) use Abstrackr based a HMC screening method to screen relevant title and abstract for paper reviews, which could save time of reviewers and have little risk of missing relevant records.",
"However, the above methods select the top-k most unreliable samples and do not consider effort division between human and machine.",
"Chaganty et al. (2018) are the first to combine machine and human evaluation to obtain a reliable estimate at lower cost than human alone on summarizing and open-source question answering, with cost reduction only 713%.",
"Ravindranath et al. (2020) build a highly cost-efficient face recognition HMC framework that outperforms both a machine-based method and a fully manual method, with both reliability and effort considered.",
"However, the methods introduced previously are not suitable for HMC evaluation for dialogue because of focusing on non-dialogue tasks, low cost reduction, or not considering both reliability and effort.",
"Our proposed framework is purpose-built for dialogue evaluation.",
"It leverages both human judgement and machine prediction by assigning low confidence machine-generated samples to human workers, while minimizing overall human effort.",
"Suppose we have a set of M samples { ( C i , x i ) } Mi =1 to be evaluated.",
"Here, C i is the dialogue context and x i is a response generated by a CDS model f g ( C ) x .",
"Below, we propose a method to achieve reliable and efficient evaluation of the M samples under the constraint that a human can annotate at most N (cid:28) M samples.",
"We propose the human-machine collaborative evaluation (HMCEval) framework to solve this task.",
"HMCEval is divided into three modules: sample assignment execution (SAE), model confidence estimation (MCE) and human effort estimation (HEE).",
"The optimization problem of assigning M samples to a human or machine can be solved by tractable integer linear programming, which is NP-complete (Papadimitriou and Steiglitz, 1998).",
"First, we introduce the decision variable z i to denote the sample assignment to a human or machine: z i = (cid:40) 0 , sample i is assigned to a human; 1 , sample i is assigned to machine.",
"(1) Second, we define two ILP objectives that try to maximize the overall confidence and minimize the overall effort, respectively: max M (cid:88) i =1 a i z i + M (cid:88) i =1 b i (1 z i ) , min M (cid:88) i =1 k i z i + M (cid:88) i =1 l i (1 z i ) , (2) where",
"(a) M is the total number of samples to evaluate generated by the generation model f g ( C ) x ;",
"(b) a i [0 , 1] is the model confidence for evaluating sample i ;",
"(c) b i is the human confidence for evaluating sample i ;",
"(d) k i is the machine effort for evaluating sample i ; and",
"(e) l i [0 , 1] is the human effort for evaluating sample i .",
"We use the weighted sum method (Marler and Arora, 2010) to solve Eq.",
"2 so as to get the optimal z i .",
"The objective function in Eq.",
"2 is transformed into: max (cid:34) M (cid:88) i =1 a i z i + M (cid:88) i =1 b i (1 z i ) (cid:32) M (cid:88) i =1 k i z i + M (cid:88) i =1 l i (1 z i ) (cid:33)(cid:35) , (3) subject to M (cid:88) i =1 z i M N b i = 1 for i = 1 , . . . , M k i = 0 for i = 1 , . . . , M 0 .",
"The constraints are motivated as follows:",
"(a) the number of samples assigned to a human is less than or equal to N ;",
"(b) human confidence is assumed to be 1;",
"(c) machine effort is assumed to be 0; and",
"(d) is greater than 0.",
"N and are two parameters that we use to balance reliability and effort; is a trade-off parameter that controls the contribution of two objectives to the overall objective, as shown in Eq.",
"3; and N controls the total samples assigned to a human.",
"As N gets larger or gets smaller, the overall evaluation is more reliable but needs more human effort.",
"As N gets smaller or gets larger, the overall evaluation costs less human effort but gets less reliability.",
"Given a machine evaluation model (usually a classification model (De Mattei et al., 2020)) f c ( C, x ) y , where y is the evaluation result (usually a category, e.g., malevolence or non-malevolence), the MCE module aims to recognize how confident the evaluation y is.",
"In this work, we investigate three confidence estimation methods, namely maximum class probability (MCP), trust score (TS) and true class probability (TCP).",
"MCP is a basic method that directly uses the classification probabilities to measure the confidence.",
"Based on the dataset { ( C (cid:48) j , x j ) , y j } Qj =1 , we build a BERT-based classifier as a machine evaluation model f c .",
"MCP is the softmax probability of the evaluation result y .",
"Formally, MCP( C (cid:48) , x ) = P ( Y = y | w, C (cid:48) , x ) .",
"TS is a confidence measurement that estimates whether the predicted category of a test sample by a classifier can be trusted.",
"It is calculated as the ratio between the Hausdorff distance from the sample to the non-predicted and the predicted categories (Jiang et al., 2018).",
"First, the training data is processed to find k-NN radius based -high-density-set H ( C (cid:48) train , x train ) , where { C (cid:48) train , x train } is the output of feeding training samples { ( C (cid:48) train , x train ) } into the BERT layer of f c .",
"This part is different from the original TS work designed for images (Yu et al., 2019).",
"Then, for a given test sample, we predict the ratio of distances, which is the TS value.",
"Formally, a = d ( C (cid:48) j , x j , H 1 ) /d ( C (cid:48) j , x j , H 2 ) , where H 1 is the high density set of the non-predicted category, H 2 is the high density set of the predicted category.",
"The estimated TS is normalized within 0 and 1 by min-max normalization.",
"As for TCP, the estimation is obtained by a learning-based method.",
"Similar to TS, the original confidence network for TCP estimation is also built for images (Corbi`ere et al., 2019).",
"We expand it into a BERT-based confidence network for CDSs.",
"The TCP estimation part f conf is based on the BERT-classifier f c .",
"Formally, f conf ( C, x, f c , f g ) a [0 , 1] , where f g is the generation model.",
"We pass the features from the BERT layer of f c and feed them into a confidence network implemented by a succession of dense layers with a sigmoid activation to get the confidence scalar.",
"We define an MSE loss to train TCP: L conf = 1 Q (cid:80) Qi =1 ( a ( C (cid:48) i , x i , ) a ( C (cid:48) i , x i , y i )) 2 , where a ( C (cid:48) i , x i , y i ) is the target confidence value.",
"During inference, the ground truth TCP score is calculated based on the BERT-based classifier: TCP( C (cid:48) , x, y ) = P ( Y = y | w, C (cid:48) , x ) , where y is the true category.",
"The HEE module is designed for estimating the human effort e .",
"In this work, we use time cost, i.e., the time spent for each annotation, to represent human effort.",
"We implement the time cost estimation model f l with random forest regression (Liaw et al., 2002): f l ( h ( C, x )) l [0 , 1] , h is the feature extraction function.",
"There are two groups of features, namely dialogue related features and worker related features; see Table 5. The dialogue related features are:",
"(a) total turns': total number of turns in a dialogue;",
"(b) malevolent turns': total number of malevolent turns in a dialogue; for prediction, we use the BERT-classifier results;",
"(c) non-malevolent turns': total number of non-malevolent turns in a dialogue; for prediction, we use the BERT-classifier results.",
"(d) first submission or not': if this is the first time the worker does this task, the value is 1, else 0;",
"(e) paraphrased turns': some turns are paraphrased; we calculate the total number of such turns;",
"(f) total length': total number of tokens in the dialogue;",
"(g) FK score': the result of a readability test, based on (Kincaid et al., 1975);",
"(h) DC score': the result of a readability test, based on (Dale and Chall, 1948);",
"(i) contains malevolent turn or not': if the dialogue contains a malevolent turn, the value is 1, else 0; and",
"(j) perplexity score': we use BERT as a language model to calculate the perplexity (Gamon et al., 2005).",
"The worker related features are:",
"(a) worker test score': this is based on a test designed to test workers' ability to annotate the dialogue according to the gold standard annotation (Zhang et al., 2021); and",
"(b) approval rate ranking': we rank workers by their lifetime approval rate in ascending order, and use the index; lower approval rate workers (i.e., with a smaller index) usually spend less time on annotations.",
"To train the time cost estimation model f l , we need the annotation time spent on each response.",
"However, for each individual response, the time spent is relatively short; as a consequence, the influence of noise such as attention, click time, may be relatively large and make the data unreliable as training data.",
"Therefore, we use the annotation time spent on each dialogue instead of each response as time cost target, and it is normalized within 0 and 1 using min-max normalization.",
"For the SAE module and effort assessment, we use the average time per turn of each dialogue as the time cost l for each response.",
"In addition, there are multiple human annotator submissions for inter-annotator agreement; we filter out the data points that disagree with the agreed annotation; then we choose the data point with a higher annotator test score; if the test scores are same, we randomly choose one.",
"We carry out experiments on the MDRDC dataset which is initially built for malevolent dialogue detection and classification (Zhang et al., 2021).",
"The dataset consists of 6,000 dialogues, with 21,081 non-malevolent utterances and 10,299 malevolent utterances.",
"The dataset also includes MTurk information, e.g., the time spent on each annotation.",
"We follow the original paper to split the dataset into train, validation and test with a ratio of 7:1:2.",
"In terms of the responses by the generation model f g , in our implementation, we use the original responses by a human for evaluation.",
"The MCE module is implemented by a BERT-based classifier and a BERT-based confidence network.",
"First, for the BERT-based classifier, we add a softmax layer on top of the [CLS]' token.",
"It is fine-tuned with 4 epochs since it is already pretrained on a large dataset.",
"The vocabulary size is 30,522.",
"Dialogue context and the current response are concatenated with the [SEP]' delimiter.",
"We consider the previous three dialogue utterances (if any) as context.",
"We set the max sequence length to 128, the batch size to 64, the dropout ratio to 0.1, and the learning rate is 5e-5.",
"Second, the BERT-based confidence network is attached to a BERT-classifier.",
"It is composed of 5 dense layers, following previous work (Corbi`ere et al., 2019).",
"As for max sequence length, batch size, dropout ratio, and learning rate, these are the same as for the classifier.",
"The confidence network is trained with a maximum of 30 epochs, with early stopping if the validation loss does not improve for 10 epochs.",
"The HEE module is implemented by a random forest regression model; the max number of estimators in this study is 100; only the features related to time cost are selected for annotation time cost prediction, with a maximum feature size of 10.",
"We use the MIP package to implement ILP for the SAE module 1 with the Coin-or branch-and-cut solver (Mitchell, 2002).",
"The search stops when it reaches a feasible solution.",
"All the neural models are trained on GeForce GTX TitanX GPUs.",
"We use reliability metrics and effort metrics to assess overall performance.",
"The reliability metrics are precision, recall, F1-score, and accuracy.",
"We calculate the macro score of precision, recall and F1 as the categories are imbalanced (Hossin and Sulaiman, 2015).",
"The effort metrics include human ratio and time cost.",
"Human ratio is the ratio of samples assigned to a human.",
"Time cost is the total time required for a human to annotate the samples.",
"We use AUC, and top-k accuracy to assess the different MCE implementations (Ouni et al., 2017).",
"We rank the confidence in descending order and calculate the accuracy at top-50%.",
"Top-50% accuracy measures how well the MCE predictions work for the top-50% most confident samples.",
"We use mean square error (MSE), rooted mean square error (RMSE), mean absolute error (MAE) and R 2 to assess the HEE module.",
"MSE, RMSE, MAE are calculated between the predicted time cost and real time cost.",
"We also use the Pearson and Spearman correlation scores to analyze the correlation between features and real time cost.",
"To determine how HMCEval compares to human evaluation and machine evaluation in balancing reliability and efficiency, we report the results in Table 1.",
"HMCEval outperforms both human and machine evaluation in balancing reliability and efficiency.",
"More importantly, HMCEval, with half of the human effort spared, achieves reliability that is close to human reliability.",
"First, compared to 1 https://python-mip.com Table 1: Reliability and efficiency of HMCEval w.r.t. human and machine evaluation ( N/M = 0 . 5 ).",
"human evaluation, HMCEval arrives at 98.5% of human accuracy but the human effort decreases by 50.0%.",
"This means that HMCEval is much more efficient than human evaluation, while the reliability is close to human.",
"Second, compared to machine evaluation, the precision, recall, F1-score and accuracy of HMCEval increase by 20.2%, 21.5%, 21.0%, and 14.3%, respectively.",
"This means that HMCEval has higher reliability than machine evaluation.",
"In sum, therefore, HMCEval surpasses both human and machine evaluation in balancing reliability and efficiency.",
"To investigate how N and , two parameters for the SAE module that balance the reliability and effort, influence the performance of HMCEval, we first fix and vary N/M from 0 to 1 with a step size of 0.05, where M is the total number of samples to evaluate.",
"Then, we fix N and vary from 0 to 45 with a step size of 0.1.",
"The results are shown in Figure 2 and 3. Influence of N .",
"Generally, as N increases, HMCEval has better reliability, nevertheless the human effort increases.",
"From Figure 2, we can see that when is fixed, as N gets larger, the precision, recall, F1-score and accuracy increase, but human ratio and time cost also increase.",
"With larger N , more samples are assigned to a human, so the overall evaluation results are more reliable, but this requires a bigger human annotation effort.",
"The marginal reliability benefit of assigning more samples to a human decreases as N gets larger.",
"Figure",
"2(a) shows that as N increases, the reliability increases sharply at the beginning but the increase levels off when N > 2 , 500 .",
"The samples assigned to a human when N < 2 , 500 have lower model confidence, i.e., it is very likely that those samples are given inaccurate evaluation by machine.",
"But when N > 2 , 500 , samples with higher model confidence are also assigned to human which yields a",
"limited return in terms of reliability.",
"Influence of .",
"As increases, HMCEval gets more efficient, while the reliability gets worse.",
"As shown in Figure 3, when increases, the human ratio stays at 0.5, and after a certain pivotal point, it decreases sharply.",
"The time costs keep decreasing.",
"The precision, recall, F1-score and accuracy decreases rapidly.",
"With larger , the SAE objective puts a bigger emphasis on efficiency, so HMCEval gets more efficient but less reliable.",
"Analysis of the SAE module.",
"By adjusting the values, the SAE module can degenerate into a greedy algorithm (Gates et al., 2020).",
"Table 2 shows the results with the human ratio set to a fixed value of N/M , i.e., 0.5.",
"When = 0 , the HEE module has no effect, so it has the worst efficiency and the best reliability.",
"When , i.e., 500, the MCE module contributes little to the objective, so it has the best efficiency but the worst reliability.",
"Analysis of the MCE module.",
"For the MCE module, we analyze the effect of alternative implementations.",
"As shown in Figure 4, TS outperforms MCP and TCP.",
"Specifically, when the human ratio is fixed to 0.5, TS achieves the best accuracy for different time costs.",
"This means that TS has better model confidence estimation for the samples with higher confidence.",
"As shown in Table 3, for the top-50% samples ranked by model confidence, Table 2: Analysis of the SAE module.",
"TS has the best accuracy.",
"MCP has the best AUC score, which means for all the M samples, MCP is the best.",
"But the top-50% samples have more influence on the SAE module.",
"Analysis of the HEE module.",
"For the HEE module, we analyze the effect of different features.",
"Adding worker related features helps to improve accuracy.",
"As shown in Figure 5, SAE with both dialogue and worker related features has better accuracy than SAE with only dialogue related features when the human ratio is fixed to 0.5.",
"Worker based features are useful for time cost estimation.",
"This is confirmed by the results in Table 4. The results with both dialogue and worker related features are the best, with MSE, RMSE and MAE decreasing by 55.6%, 35.9%, 45.9%, and R 2 increasing by 76.2%.",
"The HEE module is sufficient for time cost prediction since R 2 greater than 0.26 is sufficient for behavior related models (Cohen, 1988).",
"A correlation analysis between each feature and the real time cost is shown in Table 5. All the features, except perplexity, have significant Pearson or Spearman scores with the real time cost by workers.",
"Most features show positive correlation.",
"But two features, namely non-malevolent turns' and FK Figure 5: Feature analysis w.r.t. accuracy. (D: Dialogue related features, W: Worker related features.) Table 4: Direct evaluation of the HEE module. (D: Dialogue related features, W: Worker related features.) Metric D D+W MSE 0.009 0.004 RMSE 0.092 0.059 MAE 0.061 0.033 R2 0.433 0.763 score' have a negative correlation with time cost:",
"(a) non-malevolent responses are relatively easy to identify; and",
"(b) a higher FleschKincaid (FK) score means that the dialogue is easier to understand, which requires less time to annotate.",
"We analyze the effectiveness of HMCEval at different dialogue turns in Figure 6. As the dialogue evolves, HMCEval gets more reliable.",
"It gets easier for the MCE module to detect malevolent responses with high confidence when more context information is available.",
"The exception for turn seven and nine might due to the fact that the total number of utterances is small (less than 5% of the whole test set) and thus the results have high variance.",
"The effort is not related to turn.",
"We also look into the 1.5% cases when HMCEval gives inaccurate evaluation, and some cases that require human judgement but are not assigned to a human.",
"We find that these cases mostly have meaning extension, which means an extension of meaning of words with reference.",
"For instance, I've commit 8 treasonous acts today and they still haven't put me in prison', this is actually a non-malevolent joke.",
"However, the MCE module classified it to be malevolent with high confidence.",
"In this work, we have introduced a human-machine collaborative evaluation framework (HMCEval) for reliable and efficient CDS evaluation.",
"Experiments on the task of evaluating malevolence in dialogue responses show that HMCEval can achieve around 99% reliability with half human effort spared.",
"A limitation of HMCEval is that given 50% samples Table 5: Correlation analysis between time cost and different features for HMC module.",
"assigned to a human, 1.11.5% samples are evaluated inaccurately.",
"This is due to contexts that consist of a small number of turns, or high confidence for some dialogues where language is used in a non-literal way.",
"Although HMCEval could be generalized to several evaluation metrics of CDS, e.g., BERT-based RUBER and BERT-based engagement, for score-based metrics, suitable confidence estimation is required.",
"In the future, we seek to improve the model confidence and human effort estimation by considering better neural architectures and more factors; we also plan to conduct a comprehensive and reliable analysis of the performance of current state-of-the-art CDS models by applying HMCEval to various evaluation aspects.",
"This research was funded by the China Scholarship Council and the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, https://hybrid-intelligence-centre.nl .",
"Our code is available at: https://github.com/ repozhang/CaSE_HMCEval ."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other"
] |
[
"Uncertainty estimation (UE) of model predictions is a crucial step for a variety of tasks such as active learning, misclassification detection, adversarial attack detection, out-of-distribution detection, etc.",
"Most of the works on modeling the uncertainty of deep neural networks evaluate these methods on image classification tasks.",
"Little attention has been paid to UE in natural language processing.",
"To fill this gap, we perform a vast empirical investigation of state-of-the-art UE methods for Transformer models on misclassification detection in named entity recognition and text classification tasks and propose two computationally efficient modifications, one of which approaches or even outperforms computationally intensive methods 1 .",
"Machine learning methods are naturally prone to errors as they typically have to deal with ambiguous and incomplete data during both training and inference.",
"Unreliable predictions hinder the application of these methods in domains, where the price of mistakes is very high, such as clinical medicine.",
"Even in more error-tolerant domains and tasks, such as intent recognition in general-purpose chatbots, one would like to achieve a better trade-off between expressiveness of a model and its computational performance during inference.",
"Since mistakes are inevitable, it is crucial to understand whether model predictions can be trusted or not and abstain from unreliable decisions.",
"Uncertainty estimation (UE) of model predictions aims to solve this task.",
"Ideally, uncertain instances should correspond to erroneous 1 The code for experiments is available online at https://github.com/AIRI-Institute/ uncertainty_transformers Equal contribution, corresponding authors objects and help in misclassification detection .",
"Besides misclassification detection, UE is a crucial component for active learning (Settles, 2009), adversarial attack detection (Lee et al., 2018), detection of out-of-distribution (OOD) instances (Van Amersfoort et al., 2020), etc.",
"Some classical machine learning models, e.g. Gaussian processes (Rasmussen, 2003), have built-in UE capabilities.",
"Modern deep neural networks (DNNs) usually take advantage of a softmax layer, which output can be considered as a prediction probability and be used for UE.",
"However, the softmax probabilities are usually unreliable and produce overconfident predictions (Guo et al., 2017).",
"Some previously proposed techniques such as deep ensemble (Lakshminarayanan et al., 2017) are known for producing good UE scores but require a large additional memory footprint for storing several versions of weights and multiply an amount of computation for conducting several forward passes.",
"Reliable UE of DNN predictions that does not introduce high computational overhead is an open research question (Van Amersfoort et al., 2020).",
"In this work, we investigate methods for UE of DNNs based on the Transformer architecture (Vaswani et al., 2017) in misclassification detection.",
"We consider two of the most common NLP tasks: text classification and named entity recognition (NER).",
"The latter has been overlooked in the literature on UE.",
"To our knowledge, this work is the first to consider UE for NER.",
"We propose two novel computationally cheap methods for UE of Transformer predictions.",
"The first method is the modification of the Monte Carlo dropout with determinantal point process sampling of dropout masks (Shelmanov et al., 2021).",
"We introduce an additional step for making masks more diverse, which helps to 8237 achieve substantial improvements and approach the performance of computationally-intensive methods on NER.",
"The second method leverages Mahalanobis distance (Lee et al., 2018) but also adds a spectral normalization of the weight matrix in the classification layer (Liu et al., 2020).",
"This method achieves the best results on most of the datasets and even outperforms computationally-intensive methods.",
"We also investigate recently proposed regularization techniques in combination with other UE methods.",
"The contributions of this paper are the following: We propose two novel computationally cheap modifications of UE methods for Transformer models.",
"The method based on Mahalanobis distance with spectral normalization approaches or even outperforms strong computationally intensive counterparts.",
"This work is the first to investigate UE methods on the NER task.",
"We conduct an extensive empirical evaluation, in which we investigate recently proposed regularization techniques in combination with other UE methods.",
"It is well known that reliable uncertainty scores can be obtained simply by constructing an ensemble of decorrelated neural networks ( deep ensemble ) (Lakshminarayanan et al., 2017).",
"However, such a straightforward approach is coupled with substantial computational and memory overhead during training an ensemble, performing inference of all its components, and storing multiple versions of weights.",
"This overhead is a serious obstacle to deploying ensemble-based uncertainty estimation methods in practice.",
"Uncertainty estimation is a built-in capability of Bayesian neural networks (Blundell et al., 2015).",
"However, such models have similar issues as ensembles and also require special training procedures.",
"Recently, it was shown by Gal and Ghahramani (2016) that dropout, a well-known regularization technique, is formally equivalent to approximate variational inference in a deep Gaussian process if it is activated during prediction.",
"This method, known as Monte Carlo (MC) dropout, uses the approximating variational distribution with Bernoulli variables related to network units.",
"MC dropout does not impose any overhead during training, introduces no additional parameters, and thus does not require any additional memory.",
"The main disadvantage of this method is that it usually requires many forward-pass samplings for approximating predictive posterior, which makes it also computationally expensive.",
"Recently, many works have investigated the approximate Bayesian inference for neural networks using deterministic approaches: Lee et al. (2018); Liu et al. (2020); Van Amersfoort et al. (2020); Mukhoti et al. (2021); Shen et al. (2021), etc.",
"These methods do not introduce notable overhead for inference, storing weights, and usually require compatible training time.",
"However, most of the research in this area is accomplished for computer vision tasks.",
"For text classification, a series of works investigates UE methods for the OOD detection task (Liu et al., 2020; Podolskiy et al., 2021; Zeng et al., 2021; Hu and Khan, 2021).",
"In this work, we focus on a more challenging task misclassification detection.",
"While OOD detection requires to model only the epistemic uncertainty inherent to the model and caused by a lack of training data, misclassification detection also requires to model aleatoric uncertainty caused by noise and ambiguity in data (Mukhoti et al., 2021).",
"We consider recently proposed methods in this area that are evaluated in text processing.",
"Three recent works propose techniques for misclassification detection based on an additive regularization of a training loss function.",
"Zhang et al. (2019) suggest adding a penalty that reduces the Euclidean distance between training instances of the same class and increases the distance between instances of different classes.",
"He et al. (2020) suggest using two components in the loss function that reduce the difference between outputs from two versions of a model initialized with different weights.",
"They also use mix-up (Thulasidasan et al., 2019) to generate additional training instance representations that help to capture aleatoric uncertainty, self-ensembling, MC dropout, and a distinctiveness score to measure the epistemic uncertainty.",
"Xin et al. (2021) introduce a regularizer that penalizes overconfident instances with high loss.",
"In another recent work, Shelmanov et al. (2021) propose to combine MC dropout with a Determinantal Point Process (DPP) to improve the diversity of predictions by considering the correlations between neurons and sampling the 8238 diverse neurons for activation in a dropout layer.",
"In this work, we conduct a systematic empirical investigation of UE methods on NLP tasks.",
"We evaluate combinations of methods that have not been tested before and propose two modifications, one of which achieves the best results among computationally cheap methods.",
"The previous work focuses on text classification tasks, while this work is the first to investigate UE also for NER.",
"In this section, we describe the baselines and propose novel uncertainty estimation techniques.",
"Softmax Response (SR) (Geifman and El-Yaniv, 2017) is a trivial baseline for UE that uses the probabilities generated via the output softmax layer of the neural network.",
"SR is based on the maximum probability p ( y | x ) over classes y = c C .",
"The smaller this probability is, the more uncertain model is: u SR ( x ) = 1 max c C p ( y = c | x ) .",
"Standard Monte Carlo Dropout (MC Dropout) Consider we have conducted T stochastic forward passes with activated dropout.",
"In this work, we use the following ways to quantify uncertainty with methods based on MC dropout: Sampled maximum probability (SMP) is: u SMP = 1 max c C 1 TT (cid:88) t =1 p ct , (2) where p ct is the probability of the class c for the t -th stochastic forward pass.",
"Probability variance (PV; Gal et al. (2017); Smith and Gal (2018)) is: u PV = 1 CC (cid:88) c =1 (cid:32) 1 TT (cid:88) t =1 ( p ct p c ) 2 (cid:33) , (3) where p c = 1 T (cid:80) t p ct is the probability for a class c averaged across T stochastic forward passes.",
"Bayesian active learning by disagreement (BALD; Houlsby et al. (2011)) is: u BALD = C (cid:88) c =1 p c log p c + 1 T (cid:88) c,t p ct log p ct .",
"The two former techniques are specifically designed for estimation of the epistemic (model) uncertainty arising from the lack of knowledge and ignore the aleatoric uncertainty related to ambiguity and noise in the data, while the latter method can be seen as a measure of total uncertainty (Malinin and Gales, 2018).",
"Transformers contain multiple dropout layers (after the embedding layer, in each attention head, and before the last classification layer).",
"It is shown in previous work that the standard MC dropout outperforms the baseline SR only when all dropout layers are activated in a model (Shelmanov et al., 2021).",
"Therefore, we follow this setting for experiments in this work.",
"We note that due to activating all dropout layers, multiple stochastic predictions are required for the whole network, which introduces a large computational overhead.",
"Similar UE scores are used in deep ensemble (Lakshminarayanan et al., 2017), where instead of multiple stochastic predictions we train and infer several model versions with different sets of weights.",
"Diverse Determinantal Point Process Monte Carlo Dropout (DDPP MC dropout) (Ours) Determinantal point processes (DPPs; Kulesza and Taskar (2012)) are used for sampling a subset of diverse objects from a given set.",
"Recently, Shelmanov et al. (2021) have combined the MC dropout with a determinantal point process (DPP) for sampling neurons in a dropout layer and demonstrated that using stochasticity in the last dropout layer (in a classification head of Transformer) only is enough to improve upon SR in misclassification detection.",
"This method is less computationally expensive than the standard MC dropout since it requires multiple stochastic predictions only for the top classification layer of the network with a small number of parameters, while all other layers are inferred only once.",
"Consider the similarity matrix C h between neurons of the h -th hidden layer (in particular, we use a correlation matrix between output values of neurons on the training set).",
"Then one can construct the DPP-based dropout masks M DPPh using C h as a likelihood kernel for the DPP: M DPPh DP P ( C h ) .",
"That gives the following probability to select a set S of activations on the layer h : P (cid:104) M DPPh = S (cid:105) = det( C Sh ) det( C h + I ) , (5) 8239 where C Sh is the square submatrix of C h obtained by keeping only rows and columns indexed by the sample S .",
"In this work, we improve this method by increasing the diversity of the sampled DPP masks.",
"After multiple dropout masks are pre-generated via DPP in the inference step as in the original DPP MC dropout, we make an additional step, in which we select a diverse set of masks from this pre-generated pool using one of two strategies: DDPP (+DPP) : We sample a set of diverse masks that activate different sets of neurons.",
"For this purpose, we apply DPP sampling again to the pool of pre-generated masks.",
"As a similarity kernel in this step, we use an RBF-similarity matrix of mask vectors.",
"DDPP (+OOD) : We sample a set of masks that generate diverse predictions.",
"For this purpose, we select the masks that yield the highest PV scores on the given OOD dataset.",
"After a new set of T masks is selected, we use them as in the standard MC dropout to obtain stochastic predictions.",
"Increasing the diversity of masks in the proposed modification is motivated by the finding of Jain et al. (2020) that improving the diversity of elements in an ensemble leads to better uncertainty estimates.",
"We note that in masks generated with DPP, usually, less than 50% of neurons are activated, which makes predictions poorly calibrated.",
"To mitigate this problem, for each constructed mask, we perform a temperature-scaling calibration (Guo et al., 2017) using a held-out dataset.",
"Spectral-normalized Neural Gaussian Process (SNGP) Liu et al. (2020) suggest replacing the typical dense output layer of a network with a layer that implements a Gaussian process with an RBF kernel, whose posterior variance at a given instance is characterized by its L 2 distance from the training data in the hidden vector space constructed by underlying layers of a network.",
"The authors propose an approximation based on random Fourier feature expansion, which enables end-to-end training and makes the inference feasible.",
"However, this method requires hidden representations to be distance-preserving in order to make it work.",
"While the distance between instances in the hidden space does not always have a meaningful correspondence to the distance in the input space, authors prove that to keep hidden representations distance-preserving, the transformation should satisfy the bi-Lipschitz condition.",
"For ResNets (He et al., 2016), this requirement is satisfied if weight matrices for the nonlinear residual blocks have a spectral norm (i.e., the largest singular value) bounded from above by a constant.",
"Therefore, to enforce the aforementioned Lipschitz constraint, they apply a spectral normalization (SN) on weight matrices.",
"For Transformers, they normalize the matrix of the penultimate classification layer only.",
"Mahalanobis Distance (MD) Mahalanobis distance is a generalisation of the Euclidean distance, which takes into account the spreading of instances in the training set along various directions in a feature space.",
"Lee et al. (2018) suggest estimating uncertainty by measuring the Mahalanobis distance between a test instance and the closest class-conditional Gaussian distribution: u MD = min c C ( h i c ) T 1 ( h i c ) , (6) where h i is a hidden representation of a i -th instance, c is a centroid of a class c , and is a covariance matrix for hidden representations of training instances.",
"Recently, the Mahalanobis distance has been adopted for out-of-distribution detection with Transformer networks by Podolskiy et al. (2021).",
"Mahalanobis Distance with Spectral-normalized Network (MD SN) (Ours) Since the UE method based on the Mahalanobis distance utilizes the idea of a proximity of a tested instance hidden representation to the training distribution, we expect this method to benefit from distance-preserving representations.",
"Therefore, we propose the modification of the method of Lee et al. (2018) and Podolskiy et al. (2021) that enforces the bi-Lipschitz constraints on transformation implemented by the network.",
"We perform spectral normalization of the weight matrix of the linear layer in the classification head of Transformer as it is suggested in SNGP (Liu et al., 2020).",
"At each training step, a spectral norm is estimated using the power iteration method = (cid:107) W (cid:107) 2 , and a normalized weight matrix is obtained: W = W .",
"At the inference step, hidden representations are calculated using the normalized 8240 matrix h ( x ) = W x + b and are used for computing the Mahalanobis distance.",
"Additive regularization is another approach to improving UE of neural network predictions.",
"Usually, the training loss combines the original task-specific loss L task (e.g. cross-entropy) and a regularization component L reg that facilitates producing better calibrated UEs: L = L task + L reg , (7) where is a hyperparameter that controls the regularization strength.",
"The positive side of such techniques is that, besides SR, they can be used to improve other methods like MC dropout and deterministic methods.",
"The drawback is that regularization affects the training procedure and can decrease the model quality.",
"Confident Error Regularizer (CER) Xin et al. (2021) propose a regularizer that adds a penalty for an instance with a bigger loss than other instances and, at the same time, bigger confidence: L reg = k (cid:88) i,j =1 i,j 1 [ e i > e j ] , (8) i,j = max { 0 , max c p ci max c p cj } 2 , (9) where k is the number of instances in a batch and e i is an error of the i -th instance: e i is 1 if the prediction of the classifier matches the true label, and e i is 0 otherwise.",
"The authors evaluate this type of regularization only in conjunction with the SR baseline.",
"Metric Regularizer Zhang et al. (2019) propose a regularizer that aims to shorten the intra-class distance and enlarge the inter-class distance: L reg = C (cid:88) c =1 (cid:110) L intra ( c )+ (cid:88) k (cid:54) = c L inter ( c, k ) (cid:111) , (10) L intra ( c ) = 2 | S c | 2 | S c | (cid:88) i,j S c ,i<j D ( h i , h j ) , (11) L inter ( c,k )= 1 | S c || S k | (cid:88) i S c ,j S k [ D ( h i ,h j )] + , (12) D ( r i , r j ) = 1 d || h i h j || 22 , (13) where h i is a feature representation of an instance i from a penultimate layer of a model with a dimension d , S c is the set of instances from class c , | S c | is the number of elements in S c , and are positive hyperparameters, [ x ] + = max (0 , x ) .",
"(16) 8241 Method Reg.Type UEScore MRPC SST-2 CoLA CoNLL-2003(tokenlevel) CoNLL-2003(seq. level) RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP MC PV 13.97 1.16 1.68 0.09 12.90 1.92 0.82 0.11 44.35 4.90 2.06 0.16 6.32 1.66 0.10 0.02 16.05 3.78 1.93 0.43 MC BALD 14.21 1.04 1.69 0.09 12.98 1.87 0.82 0.10 45.06 4.90 2.08 0.17 6.44 1.86 0.10 0.02 16.28 4.00 1.96 0.45 MC SMP 14.38 2.07 1.76 0.19 14.00 2.20 0.91 0.15 42.95 5.98 2.01 0.15 6.04 1.03 0.09 0.02 15.79 3.34 1.80 0.35 MC CER PV 12.82 1.89 1.60 0.13 12.18 1.20 0.80 0.10 46.84 9.19 2.11 0.23 6.92 1.22 0.10 0.02 17.05 3.14 1.91 0.36 MC CER BALD 12.89 1.89 1.60 0.13 12.39 1.23 0.81 0.09 47.34 8.30 2.14 0.24 7.16 1.15 0.11 0.02 17.25 3.05 1.93 0.35 MC CER SMP 12.91 2.15 1.67 0.15 12.22 1.31 0.82 0.09 46.10 11.07 2.05 0.22 6.69 1.38 0.10 0.02 16.81 1.61 1.81 0.14 MC metric PV 14.21 1.95 1.73 0.23 12.28 1.77 0.80 0.11 42.35 0.69 2.04 0.07 6.69 0.89 0.10 0.01 17.17 1.90 1.93 0.31 MC metric BALD 14.55 2.31 1.73 0.23 12.08 1.79 0.79 0.10 43.76 0.55 2.08 0.07 6.91 1.02 0.10 0.01 17.47 1.85 1.98 0.30 MC metric SMP 13.39 1.19 1.72 0.20 13.55 1.65 0.90 0.14 40.88 1.25 2.01 0.09 6.30 0.98 0.10 0.01 16.81 1.40 1.80 0.23 DDPP(+DPP)(ours) PV 22.30 7.15 2.58 0.65 16.70 1.38 1.12 0.12 49.75 3.96 2.44 0.29 6.12 0.71 0.10 0.01 16.78 2.44 1.93 0.20 DDPP(+DPP)(ours) BALD 23.08 7.00 2.63 0.63 16.08 2.37 1.05 0.18 49.59 5.40 2.48 0.31 6.39 0.64 0.10 0.01 21.53 4.77 2.63 0.45 DDPP(+DPP)(ours) SMP 21.79 7.72 2.57 0.68 17.55 3.03 1.19 0.23 47.86 5.51 2.39 0.31 6.08 0.62 0.10 0.01 17.71 2.77 2.05 0.23 DDPP(+DPP)(ours) CER PV 15.12 2.27 2.03 0.24 13.56 1.37 0.91 0.14 54.51 8.80 2.58 0.22 6.98 0.98 0.11 0.02 19.44 1.15 2.13 0.17 DDPP(+DPP)(ours) CER BALD 15.94 3.77 2.07 0.36 14.87 2.22 0.96 0.13 55.11 7.42 2.61 0.31 7.90 1.95 0.12 0.01 26.20 6.41 3.11 0.56 DDPP(+DPP)(ours) CER SMP 14.75 1.43 2.02 0.16 14.47 1.63 0.99 0.11 54.01 9.79 2.55 0.18 6.91 1.13 0.11 0.02 20.66 1.53 2.31 0.08 DDPP(+DPP)(ours) metric PV 19.51 3.40 2.47 0.28 15.79 1.67 1.07 0.14 43.82 1.82 2.17 0.14 7.33 1.53 0.12 0.02 18.93 2.09 2.11 0.25 DDPP(+DPP)(ours) metric BALD 20.54 4.72 2.52 0.34 15.48 1.81 1.03 0.08 43.95 1.68 2.17 0.12 8.01 2.08 0.13 0.03 22.44 4.78 2.67 0.49 DDPP(+DPP)(ours) metric SMP 18.45 2.88 2.41 0.26 16.78 3.43 1.14 0.26 43.61 1.61 2.16 0.11 6.92 1.32 0.11 0.02 19.11 2.14 2.16 0.22 DDPP(+OOD)(ours) PV 22.73 7.45 2.65 0.59 19.05 2.95 1.29 0.23 51.11 12.03 2.37 0.34 6.32 0.72 0.10 0.01 16.75 2.31 1.94 0.21 DDPP(+OOD)(ours) BALD 23.85 8.39 2.69 0.58 18.27 3.05 1.22 0.23 52.59 12.08 2.42 0.34 6.59 0.69 0.11 0.01 20.56 3.09 2.50 0.26 DDPP(+OOD)(ours) SMP 22.31 7.80 2.60 0.65 19.86 3.83 1.36 0.29 50.14 9.73 2.32 0.30 6.09 0.67 0.10 0.01 17.76 2.75 2.06 0.23 DDPP(+OOD)(ours) CER PV 14.83 1.42 2.05 0.17 14.98 1.36 1.01 0.09 59.14 11.27 2.56 0.24 7.08 1.37 0.11 0.02 19.66 1.25 2.17 0.15 DDPP(+OOD)(ours) CER BALD 15.03 1.85 2.08 0.24 14.37 2.22 0.96 0.14 57.48 9.37 2.54 0.26 7.41 1.29 0.12 0.02 25.30 3.36 3.00 0.24 DDPP(+OOD)(ours) CER SMP 14.34 1.15 1.99 0.16 15.88 1.96 1.08 0.13 59.32 11.86 2.53 0.20 6.88 1.24 0.11 0.02 21.06 1.96 2.35 0.14 DDPP(+OOD)(ours) metric PV 19.03 3.97 2.41 0.34 17.75 5.20 1.10 0.17 48.54 11.38 2.23 0.24 6.92 1.32 0.11 0.02 18.36 1.90 2.05 0.26 DDPP(+OOD)(ours) metric BALD 19.33 4.78 2.41 0.40 16.71 7.13 1.02 0.20 49.31 11.87 2.24 0.25 7.21 1.49 0.11 0.02 21.35 4.47 2.54 0.45 DDPP(+OOD)(ours) metric SMP 18.55 3.06 2.42 0.27 17.08 3.78 1.14 0.26 43.67 1.77 2.15 0.11 6.71 1.18 0.10 0.02 19.01 2.30 2.16 0.25 SR CER MP 14.62 1.62 2.02 0.19 14.56 2.14 
1.00 0.14 56.97 9.69 2.53 0.15 6.84 1.41 0.11 0.02 21.31 1.63 2.49 0.25 SR metric MP 18.39 2.94 2.40 0.27 16.90 3.12 1.16 0.24 44.54 2.11 2.22 0.15 6.51 1.07 0.10 0.02 20.32 1.68 2.32 0.23 SR(baseline) MP 22.32 8.08 2.58 0.65 17.93 3.84 1.22 0.28 49.48 3.71 2.35 0.25 6.08 0.62 0.10 0.01 18.81 3.35 2.21 0.29 Table 1: Results for methods based on MC dropout and regularization techniques (ELECTRA model).",
"In the experiments, we train a model on a given dataset and perform inference on a separate test set to compute both predictions and UE scores u .",
"We are interested in how the scores correlate with the mistakes e of the model on the test set.",
"For text classification, mistakes are computed in the following way: e i = (cid:26) 1 , y i (cid:54) = y i , 0 , y i = y i , (14) where y i is a true label, y i is a predicted label.",
"For NER, we use two evaluation options: token-level and sequence-level.",
"For the token-level evaluation, individual tokens are considered as separate instances as in the text classification.",
"For the sequence-level evaluation, mistakes are computed in the following way: e i = (cid:26) 1 , j { 1 , . . . , n } , y ij (cid:54) = y ij , 0 , j { 1 , . . . , n } , y ij = y ij , (15) where n is a sequence length, y ij is a true label, y ij is a predicted label of a j -th token in a sequence.",
"In the sequence-level evaluation, UE of a sequence is aggregated from UEs of tokens by taking maximum (for MD methods) or by summation (for others).",
"El-Yaniv and Wiener (2010) suggest evaluating the quality of UE using the area under the risk coverage curve (RCC-AUC) .",
"The risk coverage curve demonstrates the cumulative sum of loss due to misclassification (cumulative risk) depending on the uncertainty level used for rejection of predictions.",
"The lower area under this curve indicates better quality of the UE method.",
"Xin et al. (2021) propose a reversed pair proportion (RPP) metric.",
"They note that instances with higher confidence should have a lower loss l .",
"RPP measures how far the uncertainty estimator u is to ideal, given the labeled dataset of size n : RP P = 1 n 2 n (cid:88) i,j =1 1 [ u ( x i ) > u ( x j ) , l i <l j ] .",
"This metric has an upper bound of 1; for convenience, the reported values are multiplied by 100.",
"Similar to Xin et al. (2021), for both metrics, l is an indicator loss function.",
"We conduct each experiment six times with different random seeds, obtaining the corresponding metric values, and report their mean and standard deviation.",
"We also present the results using the accuracy rejection curve .",
"This curve is drawn by varying the rejection uncertainty level (horizontal axis) and presenting the corresponding accuracy obtained when all rejected instances are labeled with an oracle (vertical axis).",
"This emulates the work of a human expert in conjunction with a machine learning system.",
"The higher the curve, the smaller amount of labor is needed to achieve a certain level of performance and the better is the UE method.",
"A similar evaluation approach in a table form is used in (Zhang et al., 2019).",
"A similar curve but without oracle labeling is used in (Lakshminarayanan et al., 2017; Filos et al., 2019).",
"For experiments with text classification, we use three datasets from the GLUE benchmark (Wang et al., 2018) that were previously leveraged by Shelmanov et al. (2021) and Xin et al. (2021) for the same purpose: Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett,",
"2005), Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019), and Stanford Sentiment Treebank (SST-2) (Socher et al., 2013).",
"Similar to (Shelmanov et al., 2021), we randomly subsample SST-2 to 10% to emulate a low-resource setting.",
"The experiments with NER were performed on the widely-used CoNLL-2003 task (Tjong Kim Sang and De Meulder, 2003).",
"For this dataset, we also subsample the training part to 10%.",
"As an out-of-domain dataset for DDPP MC dropout, we use the IMDB binary sentiment classification dataset (Maas et al., 2011).",
"We randomly select 5,000 instances from its test part and use them to select DPP-generated masks.",
"For experiments, we use two modern Transformers: the pre-trained ELECTRA model (Clark et al., 2020) with 110 million parameters and DeBERTa (He et al., 2021) with 138 million parameters.",
"They achieve higher performance on the GLUE benchmark in comparison with previous models, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).",
"The optimal hyperparameter values for each triple <Dataset, Regularization Type, Spectral Normalization Usage> are presented in Table 6 8242 Method Reg.Type UEScore MRPC SST-2 CoLA CoNLL-2003(tokenlevel) CoNLL-2003(seq. level) RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP MD MD 13.69 1.25 1.88 0.13 13.08 2.58 0.86 0.15 41.73 1.45 1.96 0.04 10.33 3.55 0.15 0.04 17.05 5.07 2.05 0.45 MD CER MD 13.61 1.82 1.87 0.22 14.10 2.69 0.96 0.16 42.50 2.65 2.00 0.07 6.82 0.90 0.10 0.01 16.92 2.51 1.87 0.23 MD metric MD 13.91 2.35 1.89 0.29 12.03 2.04 0.85 0.15 40.29 2.09 2.02 0.09 10.01 2.56 0.15 0.03 17.67 3.92 2.09 0.36 MDSN(ours) MD 13.44 1.28 1.85 0.20 11.77 1.33 0.83 0.08 40.07 3.62 1.95 0.16 7.21 1.34 0.11 0.02 17.29 3.58 2.01 0.37 MDSN(ours) CER MD 14.41 1.96 1.94 0.21 12.32 1.37 0.85 0.10 37.82 2.91 1.90 0.12 6.95 1.50 0.11 0.02 17.76 4.00 2.06 0.42 MDSN(ours) metric MD 12.04 1.33 1.56 0.12 12.05 1.42 0.84 0.07 39.37 2.00 1.97 0.15 6.90 1.21 0.11 0.02 17.02 3.39 2.01 0.40 SNGP SNGP 14.52 2.48 2.00 0.35 16.08 4.18 1.02 0.18 51.96 1.89 2.64 0.07 56.43 23.03 0.60 0.22 44.80 11.00 5.06 1.01 SRSN MP 18.83 3.89 2.46 0.46 19.02 6.07 1.21 0.35 81.25 12.56 3.40 0.33 7.46 1.39 0.12 0.02 20.13 3.50 2.30 0.26 SR CER MP 14.62 1.62 2.02 0.19 14.56 2.14 1.00 0.14 56.97 9.69 2.53 0.15 6.84 1.41 0.11 0.02 21.31 1.63 2.49 0.25 SR metric MP 18.39 2.94 2.40 0.27 16.90 3.12 1.16 0.24 44.54 2.11 2.22 0.15 6.51 1.07 0.10 0.02 20.32 1.68 2.32 0.23 SR(baseline) MP 22.32 8.08 2.58 0.65 17.93 3.84 1.22 0.28 49.48 3.71 2.35 0.25 6.08 0.62 0.10 0.01 18.81 3.35 2.21 0.29 Table 2: Results of deterministic methods with different types of regularization (ELECTRA model).",
"in Appendix A. For the optimal hyperparameter search, we split the original training data into training and validation subsets in a ratio of 80 to 20 and apply Bayesian optimization with early stopping.",
"For text classification, we use accuracy as an objective metric, and for sequence tagging, we use span-based F1-score (Tjong Kim Sang and De Meulder, 2003).",
"Sets of pre-defined values for each hyperparameter are given in the caption of Table 6. After the hyperparameter search is completed, we train the model on the original training set using the optimal values.",
"The hyperparameters for UE methods are presented in Table 9 in Appendix A. The values for the DDPP MC dropout and MD SN are chosen using a grid search, while validating on the held-out validation dataset with RCC-AUC as an objective.",
"For deep ensemble, we use random subsampling of the training set with a fixed ratio of 90%.",
"The results of methods based on MC dropout and loss regularization are presented in Table 1 (for ELECTRA).",
"The standard computationally intensive MC dropout achieves big improvements over the SR baseline on all text classification datasets and the sequence-level CoNLL-2003 benchmark.",
"For token-level CoNLL-2003, none of the considered methods substantially outperform the baseline.",
"Uncertainty estimation scores BALD and PV have similar results, outperforming SMP on SST-2, while SMP has a slight advantage over them on CoLA and CoNLL-2003.",
"The DDPP MC dropout method does not outperform the MC dropout.",
"However, DDPP (+DDPP) demonstrates a notable advantage over the SR baseline on text classification datasets SST2 and CoLA, while both DDPP (+DDPP) and DDPP (+OOD) outperform the baseline on the sequence-level CoNLL-2003 benchmark.",
"The main advantage of the proposed DDPP MC dropout method consists in its much faster inference compared to the computationally expensive standard MC dropout.",
"The DDPP MC dropout has the same computational overhead during inference as the original DPP MC dropout, which is only less than 0.5% of the overhead introduced by the standard MC dropout (Shelmanov et al., 2021).",
"We conduct an ablation study of the proposed modifications for the original DPP MC dropout.",
"The experimental results of this study presented in Table 12 in Appendix C demonstrate the benefits of using calibration and introducing diversity in mask generation.",
"Both metric regularization and CER achieve a substantial advantage over the baseline on text classification datasets SST-2 and MRPC.",
"However, regularization appears to be malignant for NER.",
"Adding loss regularization to MC dropout usually helps to achieve better results on text classification.",
"The best results on SST-2 and CoLA are achieved using metric regularization, while the best result for MRPC is obtained using CER.",
"Regularization and DDPP MC dropout usually complement each other, the results of their combination are slightly better than when they are applied individually for all datasets except CoNLL-2003.",
"The results for deterministic methods are presented in Table 2 (for ELECTRA).",
"SNGP gives substantial improvements on the text classification datasets MRPC and SST-2 but significantly falls behind the trivial baseline on CoNLL-2003.",
"The low performance of SNGP for NER can be attributed to the fact that it is initially designed for classification 8243 Method Reg.Type UEScore MRPC SST-2 CoLA CoNLL-2003(tokenlevel) CoNLL-2003(seq. level) RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP RCC-AUC RPP MC SMP 14.38 2.07 1.76 0.19 14.00 2.20 0.91 0.15 42.95 5.98 2.01 0.15 6.04 1.03 0.09 0.02 15.79 3.34 1.80 0.35 MC CER PV 12.82 1.89 1.60 0.13 12.18 1.20 0.80 0.10 46.84 9.19 2.11 0.23 6.92 1.22 0.10 0.02 17.05 3.14 1.91 0.36 MC metric BALD 14.55 2.31 1.73 0.23 12.08 1.79 0.79 0.10 43.76 0.55 2.08 0.07 6.91 1.02 0.10 0.01 17.47 1.85 1.98 0.30 MC metric SMP 13.39 1.19 1.72 0.20 13.55 1.65 0.90 0.14 40.88 1.25 2.01 0.09 6.30 0.98 0.10 0.01 16.81 1.40 1.80 0.23 DeepEnsemble PV 20.70 4.24 2.10 0.35 12.02 1.63 0.71 0.07 50.15 5.57 2.21 0.19 4.02 1.24 0.06 0.02 13.18 4.60 1.54 0.57 DeepEnsemble SMP 13.01 2.57 1.68 0.27 12.13 1.27 0.79 0.08 43.73 4.25 2.05 0.19 4.16 1.37 0.06 0.02 13.93 4.88 1.57 0.58 MSD MSD DS 12.70 1.61 1.74 0.25 11.17 1.03 0.78 0.06 39.21 2.18 1.90 0.12 12.34 4.19 0.18 0.05 16.83 3.92 1.94 0.25 DDPP(+DPP)(ours) PV 22.30 7.15 2.58 0.65 16.70 1.38 1.12 0.12 49.75 3.96 2.44 0.29 6.12 0.71 0.10 0.01 16.78 2.44 1.93 0.20 DDPP(+DPP)(ours) SMP 21.79 7.72 2.57 0.68 17.55 3.03 1.19 0.23 47.86 5.51 2.39 0.31 6.08 0.62 0.10 0.01 17.71 2.77 2.05 0.23 DDPP(+DPP)(ours) CER PV 15.12 2.27 2.03 0.24 13.56 1.37 0.91 0.14 54.51 8.80 2.58 0.22 6.98 0.98 0.11 0.02 19.44 1.15 2.13 0.17 DDPP(+DPP)(ours) CER SMP 14.75 1.43 2.02 0.16 14.47 1.63 0.99 0.11 54.01 9.79 2.55 0.18 6.91 1.13 0.11 0.02 20.66 1.53 2.31 0.08 DDPP(+DPP)(ours) metric SMP 18.45 2.88 2.41 0.26 16.78 3.43 1.14 0.26 43.61 1.61 2.16 0.11 6.92 1.32 0.11 0.02 19.11 2.14 2.16 0.22 DDPP(+OOD)(ours) PV 22.73 7.45 2.65 0.59 19.05 2.95 1.29 0.23 51.11 12.03 2.37 0.34 6.32 0.72 0.10 0.01 16.75 2.31 1.94 0.21 DDPP(+OOD)(ours) SMP 22.31 7.80 2.60 0.65 19.86 3.83 1.36 0.29 50.14 9.73 2.32 0.30 6.09 0.67 0.10 0.01 17.76 2.75 2.06 0.23 DDPP(+OOD)(ours) CER BALD 15.03 1.85 2.08 0.24 14.37 2.22 0.96 0.14 57.48 9.37 2.54 0.26 7.41 1.29 0.12 0.02 25.30 3.36 3.00 0.24 DDPP(+OOD)(ours) CER SMP 14.34 1.15 1.99 0.16 15.88 1.96 1.08 0.13 59.32 11.86 2.53 0.20 6.88 1.24 0.11 0.02 21.06 1.96 2.35 0.14 DDPP(+OOD)(ours) metric SMP 18.55 3.06 2.42 0.27 17.08 3.78 1.14 0.26 43.67 1.77 2.15 0.11 6.71 1.18 0.10 0.02 19.01 2.30 2.16 0.25 MD CER MD 13.61 1.82 1.87 0.22 14.10 2.69 0.96 0.16 42.50 2.65 2.00 0.07 6.82 0.90 0.10 0.01 16.92 2.51 1.87 0.23 MD metric MD 13.91 2.35 1.89 0.29 12.03 2.04 0.85 0.15 40.29 2.09 2.02 0.09 10.01 2.56 0.15 0.03 17.67 3.92 2.09 0.36 MDSN(ours) MD 13.44 1.28 1.85 0.20 11.77 1.33 0.83 0.08 40.07 3.62 1.95 0.16 7.21 1.34 0.11 0.02 17.29 3.58 2.01 0.37 MDSN(ours) CER MD 14.41 1.96 1.94 0.21 12.32 1.37 0.85 0.10 37.82 2.91 1.90 0.12 6.95 1.50 0.11 0.02 17.76 4.00 2.06 0.42 MDSN(ours) metric MD 12.04 1.33 1.56 0.12 12.05 1.42 0.84 0.07 39.37 2.00 1.97 0.15 6.90 1.21 0.11 0.02 17.02 3.39 2.01 0.40 SR CER MP 14.62 1.62 2.02 0.19 14.56 2.14 1.00 0.14 56.97 9.69 2.53 0.15 6.84 1.41 0.11 0.02 21.31 1.63 2.49 0.25 SR metric MP 18.39 2.94 2.40 0.27 16.90 3.12 1.16 0.24 44.54 2.11 2.22 0.15 6.51 1.07 0.10 0.02 20.32 1.68 2.32 0.23 SR(baseline) MP 22.32 8.08 2.58 0.65 17.93 3.84 1.22 0.28 49.48 3.71 2.35 0.25 6.08 0.62 0.10 0.01 18.81 3.35 2.21 0.29 Table 3: Comparison of the best results for all methods (ELECTRA model).",
"tasks rather than sequence tagging.",
"MD yields much bigger improvements over the SR baseline on all datasets and significantly outperforms SNGP.",
"MD SN is able to improve the misclassification detection performance even further for MRPC, SST-2, and CoLA.",
"We also conduct an ablation study (Table 2), in which we use the spectral normalization without MD.",
"We see that SN on its own, as expected, mostly does not improve the UE performance; the results usually are even slightly worse than the baseline.",
"Regularization also helps to improve the results of methods based on the Mahalanobis distance.",
"For both MD and MD SN, regularization helps on CoLA and CoNLL-2003.",
"For MD, it also helps on SST-2, while for MD SN, regularization improves the results on MRPC.",
"We note that regularization reduces the gap between MD and MD SN on text classification datasets and even gives a slight advantage to MD over MD SN on CoNLL-2003.",
"The best results across all deterministic methods for text classification datasets are achieved by MDSN.",
"The biggest improvements are obtained on MRPC, where regularized MD SN reduces RCC-AUC by more than 46% compared to the baseline.",
"Table 3 and Figure 1 compare results of the best methods in each group for ELECTRA.",
"Table 11 and Figure 3 in Appendix B show the best results for DeBERTa.",
"In these tables and figures, we also present the results of deep ensemble (Lakshminarayanan et al., 2017), which is a strong yet computationally intensive baseline (Ashukha et al., 2020), and results of another recently proposed computationally intensive method called MSD (He et al., 2020) that leverage mix-up (Thulasidasan et al., 2019), self-ensembling, MD, 8244 0.2 0.4 0.6 0.8 1.0 Rejection rate 0.90 0.92 0.94 0.96 0.98 1.00 A cc u r a c y s c o r e Deep Ensemble, SMP DDPP (+OOD), CER, SMP (ours) MC, CER, PV MD, CER MD SN, metric (ours) SR (baseline) Figure 2: Median values of accuracy rejection curves for selected methods on MRPC (ELECTRA model).",
"and the MC dropout (all layers are activated).",
"We can see that it is possible to substantially improve misclassification detection performance and achieve even better results than MC dropout, deep ensemble, or MSD almost with no overhead in terms of memory consumption and amount of computation.",
"For text classification and for both models, computationally cheap methods are either better or on par with the expensive counterparts.",
"However, for NER, we see that the latter methods seriously fall behind deep ensemble and MC dropout.",
"On the token-level CoNLL-2003 benchmark, only deep ensemble substantially outperforms the SR baseline.",
"On the sequence-level CoNLL-2003 benchmark, MD with CER, DDPP (+DDP) PV, and DDPP (+OOD) PV improve upon SR, but only approach the performance of computationally intensive methods.",
"The proposed in this work MD SN method outperforms all other computationally efficient alternatives on text classification datasets.",
"For both models, it even substantially outperforms all computationally expensive methods on the CoLA dataset, while on other text classification datasets it is on par with them.",
"Another method proposed in this work, DDPP MC dropout, empowered with regularization techniques, is able to substantially reduce the gap between the SR baseline and computationally intensive UE methods, while introducing only a fraction of their overhead.",
"Figure 2 also presents accuracy rejection curves for selected methods on MRPC.",
"The figure shows that if we reject 20% of instances using UE obtained with MC dropout and ask human experts to label these uncertain objects, the accuracy score of such a human-machine hybrid system will increase from 88.4% to 96.0%, which is 1.3% better than the SR baseline.",
"Such an additional gain over the SR baseline can be crucial for safe-critical applications.",
"Deep ensemble and MD SN are close to each other and achieve 95.6% and 95.2% of accuracy correspondingly.",
"Rejecting 40% of most uncertain instances gives 98.2% of accuracy for the computationally-intensive deep ensemble, while the proposed cheap MD SN method yields even better result with 98.5% of accuracy, which is 1.7% higher than the result of the SR baseline.",
"Our extensive empirical investigation on text classification and NER tasks shows that computationally cheap UE methods are able to substantially improve misclassification detection for Transformers, performing on par or even better than computationally intensive MC dropout and deep ensemble.",
"The proposed in this work method based on the Mahalanobis distance and spectral normalization of a weight matrix (MD SN) achieves the best results among other computationally cheap methods on text classification datasets and is on par with expensive methods.",
"This method does not require seriously modifying a model architecture, extra memory storage, and introduces only a little amount of additional computation during inference.",
"We also show that our modification of DPP MC dropout that leverages the diversity of generated dropout masks, which is also a computationally cheap method, is able to outperform the softmax response baseline and approach the computationally intensive methods on NER.",
"Finally, we find that regularization can slightly improve the results of methods based on MC dropout and the Mahalanobis distance in text classification.",
"The spectral normalization is theoretically proven to ensure bi-Lipschitz constraint on the transformation defined by the standard residual connection network (Liu et al., 2020).",
"However, the self-attention blocks in Transformers have a more complicated architecture than the layers of standard ResNets, which means that the theoretical guarantees for them do not hold in general.",
"In future work, we are looking forward to investigating other techniques to ensure bi-Lipschitz constraint on self-attention blocks, which might further improve deterministic methods for uncertainty estimation of Transformers.",
"We thank anonymous reviewers for their insightful suggestions to improve this paper.",
"The work was supported by a grant for research centers in the field of artificial intelligence (agreement identifier 000000D730321P5Q0002 dated November 2, 2021 No. 70-2021-00142 with ISP RAS)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"objective",
"method",
"method",
"objective",
"objective",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"Knowledge of difficulty level of questions helps a teacher in several ways, such as estimating students' potential quickly by asking carefully selected questions and improving quality of examination by modifying trivial and hard questions.",
"Can we extract such benefits of instance difficulty in Natural Language Processing?",
"To this end, we conduct I nstanceL evel D ifficulty A nalysis of E valuation data (ILDAE) in a large-scale setup of 23 datasets and demonstrate its five novel applications: 1) conducting efficient-yet-accurate evaluations with fewer instances saving computational cost and time, 2) improving quality of existing evaluation datasets by repairing erroneous and trivial instances, 3) selecting the best model based on application requirements, 4) analyzing dataset characteristics for guiding future data creation , 5) estimating Out-of-Domain performance reliably .",
"Comprehensive experiments for these applications lead to several interesting results, such as evaluation using just 5% instances (selected via ILDAE) achieves as high as 0 .",
"93 Kendall correlation with evaluation using complete dataset and computing weighted accuracy using difficulty scores leads to 5 .",
"2% higher correlation with Out-of-Domain performance.",
"We release the difficulty scores 1 and hope our work will encourage research in this important yet understudied field of leveraging instance difficulty in evaluations.",
"Transformer-based language models (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020) have improved state-of-the-art performance on numerous natural language processing benchmarks (Wang et al., 2018, 2019; Talmor et al., 2019); however, recent studies (Zhong et al., 2021; Sagawa et al., 2020) have raised questions regarding whether these models are uniformly better across all instances.",
"This has drawn attention towards instance-1 https://github.com/nrjvarshney/ILDAE Figure 1: Illustrating five applications of Instance-Level Difficulty Analysis of Evaluation data (ILDAE).",
"level analysis of evaluation data (Rodriguez et al., 2021; Vania et al., 2021; Mishra and Arunkumar, 2021) which was previously limited to training data (Swayamdipta et al., 2020; Xu et al., 2020; Mishra and Sachdeva, 2020).",
"Furthermore, it is intuitive that not all instances in a dataset are equally difficult .",
"However, instance-level difficulty analysis of evaluation data (ILDAE) has remained underex-plored in many different ways: what are the potential applications and broad impact associated with ILDAE?",
"In this work, we address the above question by first computing difficulty scores of evaluation instances (section 2) and then demonstrating five novel applications of ILDAE (Figure 1).",
"1. Efficient Evaluations: We propose an approach of conducting efficient-yet-accurate evaluations.",
"Our approach uses as little as 5% evaluation instances (selected via ILDAE) to achieve up to 0 .",
"93 Kendall correlation with evaluations conducted using the complete dataset .",
"Thus, without considerably impacting the effectiveness of evaluations, our approach saves computational cost and time.",
"2. Improving Evaluation Datasets: We first show that trivial' and erroneous' instances can be identified using our difficulty scores and then present a model-and-human-in-the-loop technique to modify/repair such instances resulting in improved quality of the datasets.",
"We instantiate it with SNLI dataset (Bowman et al., 2015) and show that on modifying the trivial instances, the accuracy (averaged over 27 models) drops from 77.58% to 26.49%, and on repairing the erroneous instances, it increases from 13.65% to 69.9%.",
"Thus, improving the dataset quality.",
"3. Model Analysis: We divide evaluation instances into different regions based on difficulty scores and analyze models' performance in each region.",
"We find that a single model does not achieve the highest accuracy in all difficulty regions.",
"This implies that the model that achieves best overall performance may not be the best in each difficulty region.",
"Such analyses could benefit in model selection.",
"For instance, in scenarios where a system is expected to encounter hard instances, the model that performs well in high difficulty regions could be selected.",
"4. Dataset Analysis: ILDAE reveals several important characteristics of datasets that can be leveraged in future data creation processes.",
"For instance, we find that in SNLI and MNLI datasets, contradiction' instances receive lower average difficulty score than en-tailment' and neutral' instances .",
"Thus, more difficult contradiction examples can be created to develop high-quality task-specific datasets.",
"5. OOD Correlation: We compute weighted accuracy leveraging the difficulty scores and show that it leads to 5 .",
"2% higher Kendall correlation with Out-of-Domain (OOD) performance than the standard accuracy that treats all instances equally .",
"Thus, ILDAE helps in getting a more reliable estimation of models' OOD performance.",
"Interpretation: Human perception of difficulty may not always correlate well with machine's interpretation.",
"Thus, difficulty scores must be computed via a model-in-the-loop technique so that they directly reflect machine's interpretation.",
"predictive correctness since a difficult instance less likely to be predicted correctly than a relatively easier instance.",
"We incorporate the above desiderata and consider model's prediction confidence in the ground truth answer (indicated by softmax probability assigned to that answer) as the measure of its predictive correctness.",
"Furthermore, we compile an ensemble of models trained with varying configurations and use their mean predictive correctness to compute the difficulty scores.",
"We do this because model's predictions fluctuate greatly when its training configuration is changed (Zhou et al., 2020; McCoy et al., 2020) and relying on predictive correctness of only one model could result in difficulty scores that show poor generalization.",
"To this end, we use the following three training configurations to compile predictions from an ensemble of models: Data Size: Instances that can be answered correctly even with few training examples are inherently easy and should receive lower difficulty score than the ones that require a large training dataset.",
"To achieve this, we train a model each with 5, 10, 15, 20, 25, 50, and 100 % of the total training examples and include them in our ensemble.",
"Data Corruption: Instances that can be answered correctly even with some level of corrup-tion/noise in the training dataset should receive low difficulty score.",
"To achieve this, we train a model each with different levels of noise (2, 5, 10, 20, 25% of the examples) in the training data, and add them to our ensemble.",
"For creating noisy examples, we randomly change the ground-truth label in case of classification and multiple-choice datasets and change the answer span for extractive QA datasets.",
"Training Steps: Instances that can be consistently answered correctly from the early stages of training should receive low difficulty score.",
"Here, we add a model checkpoint after every epoch during training to our ensemble.",
"This results in a total of N = E (7+5) models in our ensemble where E corresponds to the number of training epochs, and 7 , 5 correspond to the number of data size and data corruption configurations respectively.",
"We infer the evaluation dataset using these N models and calculate the average predictive correctness for each instance.",
"Finally, we compute the difficulty score by subtracting this averaged correctness value from 1 .",
"This ensures that an instance that is answered correctly with high confidence under many training configurations gets assigned a low difficulty score as it corresponds to an easy instance.",
"In contrast, an instance that is often answered incorrectly gets assigned a high difficulty score.",
"Algorithm 1 summarizes this approach.",
"We use RoBERTa-large model (Liu et al., 2019) for this procedure and train each model for E = 10 epochs, resulting in N = 120 predictions for each evaluation instance.",
"Our difficulty computation method is general and can be used with any other model or configurations; we use RoBERTa-large as it has been shown to achieve high performance across diverse NLP tasks (Liu et al., 2019).",
"In addition, we show that difficulty scores computed using our procedure also generalize for other models (3.5.1).",
"We note that difficulty computation is not our primary contribution.",
"Prior work (Swayamdipta et al., 2020; Xu et al., 2020) has explored different ways to achieve this.",
"However, our approach uses 120 predictions from models trained with different configurations for its computation and hence is more reliable.",
"Equipped with difficulty scores of evaluation instances, we now demonstrate five applications of ILDAE in the following sections.",
"Success of BERT (Devlin et al., 2019) has fostered development of several other pre-trained language models such as RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019b), DistilBERT (Sanh et al., 2019), ALBERT (Lan et al., 2020).",
"Though, it has resulted in the availability of numerous model options for a task, comparing the performance of such a large number of models has become computationally expensive and time-consuming.",
"For example, in real-world applications like online competitions, the naive approach that evaluates candidate models on the entire test dataset would be too expensive because they receive thousands of model submissions and contain a sizable number of evaluation instances.",
"Moreover, some applications also require additional evaluations to measure Out-of-Domain generalization and robustness making it even more expensive.",
"Can we make the evaluations efficient ?",
"We address the above question and explore if the performance of candidate models can be accurately compared with a carefully selected smaller subset of the evaluation dataset.",
"Reducing the number of instances would save computational cost and make the evaluations efficient.",
"To this end, we propose an approach that selects evaluation instances based on their difficulty scores.",
"We compare performance of candidate models only on these selected instances and show that without considerably impacting the result of evaluations, our approach saves computational cost and time.",
"Instance Selection: We argue that the instances with extreme difficulty scores (very low and very high scores) would not be very effective in distinguishing between the candidate models .",
"This is because the former instances are trivial and would be answered correctly by many/all candidate models, while the latter ones are hard and would be answered correctly by only a few/none models.",
"Therefore, given a budget on the number of evaluation instances, we select a majority of them with moderate difficulty scores.",
"However, to distinguish amongst very weak and amongst very strong candidates, we also include a small number of instances with extreme difficulty scores.",
"Figure 2 illustrates our approach.",
"ficulty scores are pre-computed.",
"Furthermore, we do not compute separate difficulty scores for each candidate model as it would defy the sole purpose of efficient' evaluations.",
"Instead, we compute difficulty scores using only one model (RoBERTa-large) and exclude it from the list of candidate models for a fair evaluation of our approach.",
"For our instance selection approach to work in this setting, the difficulty scores should generalize for other models.",
"We empirically prove this generalization capability and demonstrate the efficacy of our efficient evaluations approach in 3.5.",
"Performance Metric: We measure the efficacy of an instance selection technique by computing accuracies of candidate models on the selected instances and calculating their Kendall's correlation (Kendall, 1938) with accuracies obtained on the full evaluation dataset.",
"High correlation implies that the performance scores obtained using the selected instances display the same behavior as the performance scores obtained using the complete dataset.",
"Hence, high correlations values are preferred.",
"Datasets: We experiment with a total of 23 datasets across Natural Language Inference, Duplicate Detection, Sentiment Analysis, Question Answering, Commonsense Reasoning, and several other tasks.",
"Refer to Appendix section B for an exhaustive list of datasets for each task.",
"Candidate Models: We use BERT (Devlin et al., 2019), DistilBERT (Sanh et al., 2019), ConvBERT (Jiang et al., 2020) , XLNET (Yang et al., 2019a), SqueezeBERT (Iandola et al., 2020), ELECTRA (Clark et al., 2020) in our experiments.",
"We also use different variants of ConvBert (small, medium-small, base) and ELECTRA (small, base) models.",
"For comprehensive experiments, we train each of the above models with training data of three different sizes ( 2 k , 5 k , and 10 k examples) resulting in 27 candidate models for each dataset.",
"We intentionally exclude RoBERTa from this list as we use it for computing the difficulty scores.",
"the proposed instance selection approach with the following baselines:",
"Random Selection : Select a random subset of instances from the evaluation dataset.",
"Heuristic Selection : Select instances based on the length heuristic (number of characters in the instance text) instead of the difficulty scores.",
"Adaptive evaluation (Weiss, 1982) is used in educational settings for evaluating performance of students.",
"It uses Item Response Theory (IRT) (Baker 3415 % Instances 0.5% 1% 2% 5% 10% 20% Dataset Random Heuristic Proposed Random Heuristic Proposed Proposed Proposed Proposed Proposed SNLI 0 . 55 0 . 09 0 . 38 0 . 17 0.68 0 . 13 0 . 68 0 . 05 0 . 58 0 . 08 0.78 0 . 08 0 . 83 0 . 04 0 . 88 0 . 04 0 . 91 0 . 01 0 . 93 0 . 02 PAWS Wiki 0 . 67 0 . 07 0 . 68 0 . 04 0.78 0 . 06 0 . 73 0 . 05 0 . 78 0 . 02 0.86 0 . 05 0 . 89 0 . 02 0 . 91 0 . 03 0 . 95 0 . 01 0 . 96 0 . 01 AgNews 0 . 12 0 . 26 0 . 14 0 . 27 0.47 0 . 05 0 . 25 0 . 34 0 . 41 0 . 14 0.52 0 . 1 0 . 65 0 . 07 0 . 75 0 . 06 0 . 8 0 . 04 0 . 89 0 . 03 QNLI 0 . 41 0 . 1 0 . 44 0 . 04 0.48 0 . 13 0 . 57 0 . 04 0 . 55 0 . 1 0.57 0 . 07 0 . 7 0 . 06 0 . 78 0 . 06 0 . 85 0 . 03 0 . 91 0 . 03 MRPC 0 . 04 0 . 09 0 . 03 0 . 18 0.21 0 . 16 0 . 02 0 . 09 0 . 05 0 . 2 0.29 0 . 21 0 . 36 0 . 15 0 . 45 0 . 08 0 . 58 0 . 12 0 . 65 0 . 14 SocialIQA 0 . 19 0 . 09 0 . 15 0 . 29 0.37 0 . 17 0 . 34 0 . 07 0 . 28 0 . 21 0.4 0 . 09 0 . 58 0 . 1 0 . 67 0 . 04 0 . 75 0 . 08 0 . 81 0 . 05 QQP 0 . 63 0 . 06 0 . 64 0 . 05 0.65 0 . 05 0 . 74 0 . 03 0 . 74 0 . 01 0.77 0 . 06 0 . 84 0 . 04 0 . 9 0 . 04 0 . 94 0 . 04 0 . 95 0 . 01 DNLI 0 . 58 0 . 05 0 . 59 0 . 1 0.58 0 . 11 0 . 68 0 . 1 0 . 71 0 . 04 0.76 0 . 07 0 . 84 0 . 04 0 . 92 0 . 05 0 . 94 0 . 03 0 . 96 0 . 01 COLA 0 . 01 0 . 18 0.25 0 . 26 0 . 24 0 . 45 0 . 41 0 . 41 0 . 63 0 . 23 0 . 75 0 . 08 0 . 78 0 . 02 SWAG 0 . 72 0 . 04 0 . 66 0 . 02 0.75 0 . 06 0 . 79 0 . 03 0 . 77 0 . 03 0.78 0 . 05 0 . 86 0 . 03 0 . 89 0 . 02 0 . 93 0 . 01 0 . 95 0 . 01 PAWS QQP 0 . 13 0 . 24 0 . 36 0 . 05 0.34 0 . 13 0 . 55 0 . 19 0 . 8 0 . 05 0 . 84 0 . 03 0 . 87 0 . 04 MNLI 0 . 7 0 . 04 0 . 71 0 . 03 0.73 0 . 07 0 . 8 0 . 02 0 . 8 0 . 04 0.82 0 . 08 0 . 89 0 . 03 0 . 93 0 . 02 0 . 95 0 . 02 0 . 96 0 . 01 Adv. NLI R1 0 . 0 0 . 08 0 . 07 0 . 06 0.17 0 . 27 0 . 02 0 . 13 0.09 0 . 11 0 . 08 0 . 2 0 . 13 0 . 18 0 . 3 0 . 18 0 . 47 0 . 05 0 . 59 0 . 05 Adv. NLI R2 0 . 08 0 . 04 -0.01 0 . 06 0 . 08 0 . 16 0 . 08 0 . 07 0.02 0 . 03 0 . 03 0 . 21 0 . 0 0 . 12 0 . 17 0 . 03 0 . 26 0 . 11 0 . 42 0 . 15 Adv. NLI R3 0 . 15 0 . 12 0.15 0 . 1 0 . 1 0 . 21 0 . 03 0 . 06 0 . 07 0 . 1 0.1 0 . 11 0 . 18 0 . 16 0 . 12 0 . 17 0 . 31 0 . 15 0 . 58 0 . 05 SST-2 0 . 08 0 . 15 0 . 16 0 . 35 0.29 0 . 25 0 . 4 0 . 2 0 . 52 0 . 16 0 . 65 0 . 13 0 . 81 0 . 08 ARC Easy 0 . 0 0 . 2 0 . 03 0 . 12 0.42 0 . 19 0 . 47 0 . 19 0 . 59 0 . 13 0 . 6 0 . 14 0 . 74 0 . 11 ARC Diff 0 . 15 0 . 29 0 . 28 0 . 13 0 . 33 0 . 31 0 . 3 0 . 26 Abductive NLI 0 . 08 0 . 26 0.17 0 . 05 0 . 16 0 . 09 0 . 19 0 . 19 0 . 26 0 . 08 0.3 0 . 07 0 . 42 0 . 13 0 . 57 0 . 08 0 . 61 0 . 07 0 . 68 0 . 07 Winogrande 0 . 19 0 . 11 0 . 03 0 . 06 0.0 0 . 17 0 . 11 0 . 09 0 . 05 0 . 12 0.11 0 . 15 0 . 09 0 . 14 0 . 03 0 . 1 0 . 14 0 . 1 0 . 21 0 . 14 CSQA 0 . 29 0 . 11 0 . 28 0 . 1 0.31 0 . 07 0 . 36 0 . 14 0 . 37 0 . 08 0.39 0 . 09 0 . 49 0 . 09 0 . 69 0 . 08 0 . 78 0 . 04 0 . 83 0 . 05 QuaRel 0 . 32 0 . 26 0 . 33 0 . 25 0 . 39 0 . 07 0 . 51 0 . 1 QuaRTz 0 . 34 0 . 19 0 . 36 0 . 04 0 . 34 0 . 12 0 . 37 0 . 08 Average 0 . 28 0 . 1 0 . 3 0 . 11 0.39 0 . 13 0 . 31 0 . 11 0 . 35 0 . 11 0.43 0 . 14 0 . 46 0 . 17 0 . 58 0 . 11 0 . 66 0 . 08 0 . 72 0 . 07 Table 1: Kendall correlation with full evaluation dataset achieved by various instance selection approaches for different percentage of instances. Each cell shows the mean and standard deviation obtained from 5 different runs. cell indicates 0 selected instances. We show the expanded version of this table in supplementary. 
and Kim, 2004) from psychometrics that requires a large number of subjects and items to estimate system parameters (Lalor et al., 2016, 2018).",
"Moreover, adaptive evaluation is computationally very expensive as it requires calculating performance after each response to select the next instance based on the previous responses of the subject.",
"Thus, it is not fit for our setting as we intend to improve the computational efficiency.",
"In contrast, our approach is much simpler and efficient as it does not incur any additional cost during the evaluation.",
"We first study generalization of our computed difficulty scores and then show the efficacy of the proposed instance selection approach in conducting efficient evaluations.",
"In Figure 3, we plot accuracy (averaged over all 27 candidate models) against difficulty scores (com-puted using RoBERTa-large).",
"We find that with the increase in difficulty score, the accuracy consistently decreases for all datasets.",
"We also study this behavior for each individual candidate model and find results supporting the above observation 2 (Fig-2 Further details are in appendix ure 6).",
"This proves that the difficulty scores follow the desiderata mentioned in Section 2.1 for other models also and our intuitions behind instance selection for conducting efficient evaluations hold true.",
"Note that these difficulty scores are computed using a specific model but our approach is general and will replicate this generalization capability if used with any other model.",
"Table 1 shows Kendall correlation with full dataset evaluation achieved by various instance selection approaches for different percentages of instances.",
"Proposed Approach Outperforms Baselines: Our proposed approach is consistently better than the Random and Heuristic approaches.",
"For instance, with just 0 .",
"5% and 1% evaluation instances, our approach outperforms the baseline methods by 30% and 22 .",
"8% respectively.",
"We show the expanded version of this table and performance of other instance selection techniques in Appendix.",
"Correlation Change with % of Instances: As expected, Kendall correlation consistently increases as a higher percentage of instances are selected for evaluation.",
"In case of SNLI, PAWS Wiki, QQP, DNLI, SWAG, and MNLI, just 2% instances 3416 are sufficient to achieve correlation of > 0 .",
"8 .",
"For most datasets, with just 20% of the evaluation instances, our approach achieves Kendall correlation of > 0 .",
"8 .",
"This suggests that the evaluations can be conducted with fewer instances without significantly compromising the accuracy of comparison.",
"We further analyze performance of our approach for higher percentage of instances in Table 7.",
"Thus, for practical settings where candidate models can't be compared on the entire dataset due to computational and time constraints, evaluating only on the selected instances can result in fairly accurate performance comparison.",
"Performance on Multiple-Choice QA datasets: Though, we perform better than the baselines approaches on almost all datasets, we achieve a lower correlation value for multiple-choice question answering datasets such as QuaRel, QuaRTz, and Winogrande.",
"We attribute this behavior to the close scores (accuracies) achieved by many candidate models even in case of full dataset evaluation.",
"Thus, it is difficult to differentiate such models as they achieve nearly the same performance.",
"Furthermore, in some difficult datasets such as Adversarial NLI (R1, R2, and R3), ARC Difficult, and Winogrande, many candidate models achieve accuracies very close to the random baseline ( 33% for NLI, 50% for Winogrande).",
"So, comparing their performance even with full dataset does not provide any significant insights.",
"Recent years have seen a rapid increase in the number and size of NLP datasets.",
"Crowd-sourcing is a prominent way of collecting these datasets.",
"Prior work (Gururangan et al., 2018; Tan et al., 2019; Mishra et al., 2020) has shown that crowd-sourced datasets can contain:",
"(a) erroneous instances that have annotation mistakes or ambiguity,",
"(b) too many trivial instances that are very easy to answer.",
"This hampers the quality of the dataset and makes it less reliable for drawing conclusions.",
"Can difficulty scores aid in improving the quality of evaluation datasets?",
"We first show that erroneous and trivial instances can be identified using the difficulty scores and then present a human-and-model-in-the-loop tech-Dataset",
"Identifying Erroneous and Trivial Instances: We inspect 50 instances each with very high and very low difficulty scores and find that a significant percentage of the former are either mislabeled or contain ambiguity and the latter are too easy to be answered.",
"Table 2 shows examples of erroneous instances from SNLI, Winogrande, CSQA, and Abductive NLI.",
"We find 72% of the inspected SNLI instances to be erroneous.",
"Furthermore, we find that some high difficulty score instances are actually difficult even for humans because they require abilities such as commonsense reasoning.",
"Table 4 (appendix) shows such instances.",
"We also provide examples of trivial instances (Table 6) and note that such instances are trivial from model's perspective as they can be answered correctly (with high confidence) by simply latching on to some statistical cues present in the training data.",
"Technique: Since the trivial instances are too easy to be answered, we propose to modify them in an adversarial way such that they no longer remain trivial.",
"Specifically, we include a human-in-the-loop who needs to modify a trivial instance in a label-preserving manner such that the modified version fools the model into making an incorrect prediction.",
"For adversarial attack, we use the strongest model from our ensemble of 120 models.",
"It has two key differences with the standard adversarial data creation approach presented in (Nie et al., 2020; Kiela et al., 2021):",
"(a) it requires modifying an already existing instance instead of creating a new instance from scratch.",
"(b) it does not increase the size of the evaluation dataset as we replace an already saturated instance (trivial) with its improved not-trivial version.",
"We use a human instead of leveraging automated ways to modify the trivial instances because our objective is to improve the quality of instances and prior work has shown that these automated techniques often result in unnatural and noisy instances.",
"Therefore, such techniques could be cost-efficient but might not solve the sole purpose of improving quality.",
"To further improve the quality, we provide instances with very high difficulty score (potentially erroneous) and ask a human to repair them such that the repaired versions follow the task definition.",
"The human can either change the instance text or its answer to achieve the goal.",
"Note that this scenario is model-independent.",
"Table 3 shows original and modified instances from SNLI.",
"Top two examples correspond to the trivial instances where the human modified the hypothesis in a label-preserving manner such that it fooled the model into making incorrect prediction.",
"The bottom two correspond to the mislabeled instances where the human rectified the label.",
"Figure 4 compares the performance of models on the original instances and the their modified/repaired versions.",
"As expected, the performance drops on the previously trivial instances as they are no longer trivial and improves on the previously erroneous instances.",
"We release the improved version of the dataset compiled via our technique.",
"ILDAE reveals several useful characteristics of datasets such as which class label has the easiest instances.",
"We study this for NLI datasets: SNLI, MNLI, DNLI, and Adversarial NLI (Figure 5).",
"For SNLI and MNLI, we find that the contradiction instances receive lower average difficulty score than entailment and neutral instances.",
"For Adversarial NLI, the order is reversed.",
"For DNLI, all the labels get assigned nearly the same average difficulty.",
"Such analysis can serve as a guide for future data creation as it indicates for which type of instances more data collection effort needs to be invested.",
"It can also be used to compare average difficulty at dataset level.",
"Furthermore, a new harder task-specific benchmark can be created by combining high difficulty instances from all the datasets of that task.",
"We divide the evaluation instances into different regions based on the difficulty scores and analyze models' performance in each region.",
"We find that a single model does not achieve the highest accuracy across all regions.",
"Figure 6 illustrates this pattern for SNLI dataset.",
"This implies that the model that achieves the highest performance on easy instances may not necessarily achieve the highest performance on difficult instances.",
"The similar pattern is observed for other datasets (refer appendix).",
"Such analysis would benefit in model selection.",
"For instance, in scenarios where a system is expected to encounter hard instances, we can select the model that has the highest accuracy on instances of difficult regions.",
"Whereas, for scenarios containing easy instances, the model that has the highest accuracy on instances of easy regions.",
"Large pre-trained language models can achieve high In-Domain performance on numerous tasks.",
"However, it does not correlate well with OOD performance (Hendrycks and Dietterich, 2019; Hendrycks et al., 2020).",
"To this end, we present an approach to compute a weighted accuracy that shifts away from treating all the evaluations instances equally and assigns weight based on their difficulty scores.",
"We define the weight w i of an Figure 7: Comparing Kendall correlation of standard unweighted accuracy and weighted accuracy with OOD accuracy.",
"instance i with difficulty score d i as: w i = 1 + d i N + (cid:80) N j =1 d j where N corresponds to the total number of evaluation instances, and is a hyper-parameter that controls influence of difficulty score on the weight.",
"Then, weighted accuracy W is simply: W = N (cid:88) i =1 w i v i where v i is 1 when the model's prediction is correct else 0.",
"This implies that high accuracy may not always translate to high weighted accuracy.",
"We take SNLI as the in-domain dataset and MNLI, DNLI, and HANS (McCoy et al., 2019) (Constituent, Lexical Overlap, Subsequence) as OOD datasets.",
"We calculate unweighted and weighted accuracy of the 27 models (described in Section 3.3) and compare their Kendall correlation with the accuracy on OOD datasets.",
"Figure 7 shows this comparison.",
"It can be observed that weighted accuracy shows 5 .",
"2% higher correlation with OOD performance that the standard accuracy.",
"Most improvement is observed in hard datasets i.e. HANS.",
"Thus, weighting instances based on their difficulty score is more informative than the standard accuracy that treats all instances equally.",
"We conducted Instance-Level Difficulty Analysis of Evaluation data (ILDAE) in a large-scale setup of 23 datasets and presented its five novel applications.",
"With these applications, we demonstrated 3419 ILDAE's impact in several important areas, such as conducting efficient evaluations with fewer instances, improving dataset quality, and estimating out-of-domain performance reliably.",
"We release our computed difficulty scores and hope that our encourage research in this important yet understudied field of leveraging instance difficulty in evaluations.",
"We use existing public-domain text datasets, such as SNLI, Winogrande, and ARC, and follow the protocol to use and adapt research data to compute instance-level difficulty scores.",
"We will release the computed difficulty scores, but will not share the original source data.",
"We recommend readers to refer to the original source research papers.",
"Any bias observed in difficulty scores computed using our methods can be attributed to the source data and our computation functions.",
"However, no particular socio-political bias is emphasized or reduced specifically by our methods.",
"We thank the anonymous reviewers for their insightful feedback.",
"This research was supported by DARPA SAIL-ON and DARPA CHESS programs."
] | [
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"objective",
"result",
"objective",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"result",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Entity set expansion, aiming at expanding a small seed entity set with new entities belonging to the same semantic class, is a critical task that benefits many downstream NLP and IR applications, such as question answering, query understanding, and taxonomy construction.",
"Existing set expansion methods bootstrap the seed entity set by adaptively selecting context features and extracting new entities.",
"A key challenge for entity set expansion is to avoid selecting ambiguous context features which will shift the class semantics and lead to accumulative errors in later iterations.",
"In this study, we propose a novel iterative set expansion framework that leverages automatically generated class names to address the semantic drift issue.",
"In each iteration, we select one positive and several negative class names by probing a pre-trained language model, and further score each candidate entity based on selected class names.",
"Experiments on two datasets show that our framework generates high-quality class names and outperforms previous state-of-the-art methods significantly.",
"Entity set expansion aims to expand a small set of seed entities (e.g., { United States , China , Canada }) with new entities (e.g., United Kingdom , Australia ) belonging to the same semantic class (i.e., Country ).",
"The entities so discovered may benefit a variety of NLP and IR applications, such as question answering (Wang et al., 2008), query understanding (Hua et al., 2017), taxonomy construction (Shen et al., 2018a), and semantic search (Xiong et al., 2017; Shen et al., 2018b).",
"Most existing entity set expansion methods bootstrap the initial seed set by iteratively selecting context features (e.g., co-occurrence words (Pantel et al., 2009), unary patterns (Rong et al., 2016), and coordinational patterns (Mamou et al., 2018)), Entities HearstPattern [NP 0 ]suchas[NP 1 ],[NP 2 ],and[NP 3 ] {USA,China, Canada} [MASK]such asUSA,China,andCanada Class-probingQuery ClassName Entity HearstPattern [NP 0 ],[NP 1 ],orother[NP 2 ] countries Canada Entity-probingQuery Canada, [MASK],orothercountries RetrievedClassNames countries states largecountries cities RetrievedEntities Japan UnitedKingdom Mexico Toronto LanguageModel(e.g.BERT/XLNet) Figure 1: Examples of class-probing and entity-probing queries generated based on Hearst patterns.",
"while extracting and ranking new entities.",
"A key challenge to set expansion is to avoid selecting ambiguous patterns that may introduce erroneous entities from other non-target semantic classes.",
"Take the above class Country as an example, we may find some ambiguous patterns like * located at (which will match more general Location entities) and match against * (which may be associated with entities in the Sports Club class).",
"Furthermore, as bootstrapping is an iterative process, those erroneous entities added at early iterations may shift the class semantics, leading to inferior expansion quality at later iterations.",
"Addressing such semantic drift issue without requiring additional user inputs (e.g., mutually exclusive classes (Curran et al., 2007) and negative example entities (Jindal and Roth, 2011)) remains an open research problem.",
"In this study, we propose to empower entity set expansion with class names automatically generated from pre-trained language models (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019).",
"Intuitively, knowing the class name is country, instead of state or city, can help us identify unambiguous patterns and eliminate erroneous entities like Europe and New York .",
"Moreover, we can acquire such knowledge (i.e., positive and negative class names) by probing a pre-trained language model automatically without relying on human annotated data.",
"Motivated by the above intuition, we propose a new iterative framework for entity set expansion that consists of three modules: (1) The first, class name generation module, constructs and submits class-probing queries (e.g., [M ASK ] such as USA, China, and Canada. in Fig. 1) to a language model for retrieving a set of candidate class names.",
"(2) The second, class name ranking module, builds an entity-probing query for each candidate class name and retrieves a set of entities.",
"The similarity between this retrieved set and the current entity set serves as a proxy for the class name quality, based on which we rank all candidate class names.",
"An unsupervised ensemble technique (Shen et al., 2017) is further used to improve the quality of final ranked list from which we select one best class name and several negative class names.",
"(3) The third, class-guided entity selection module, scores each entity conditioned on the above selected class names and adds top-ranked entities into the currently expanded set.",
"As better class names may emerge in later iterations, we score and rank all entities (including those already in the expanded set) at each iteration, which helps alleviate the semantic drift issue.",
"Contributions.",
"In summary, this study makes the following contributions: (1) We propose a new set expansion framework that leverages class names to guide the expansion process and enables filtra-tion of the entire set in each iteration to resolve the semantic drift issue; (2) we design an automatic class name generation algorithm that outputs high-quality class names by dynamically probing pre-trained language models; and (3) experiments on two public datasets from different domains demonstrate the superior performance of our approach compared with state-of-the-art methods.",
"In this section, we provide background on language models and define the entity set expansion problem.",
"A standard language model (LM) inputs a word sequence w = [ w 1 , w 2 , . . . , w n ] and assigns a probability P ( w ) to the whole sequence.",
"Recent studies (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019) found that language models, simply trained for next word or missing word prediction, can generate high quality contextualized word representations which benefit many downstream applications.",
"Specifically, these language models will output an embedding vector for each word appearance in a specific context that is usually the entire sentence where the target word occurs, rather than just words appearing before the target word.",
"Therefore, we can also view a LM as a model that inputs a word sequence w and outputs a probability P ( w i ) = P ( w i | w 1 , . . . , w i 1 , w i +1 , . . . , w n ) to any position 1 i n .",
"Currently, Devlin et al. (2019) propose BERT and train the language model with two objectives: (1) a cloze-filling objective which randomly substitutes some words with a special [MASK ] token in the input sentence and forces LM to recover masked words, and (2) a binary classification objective that guides LM to predict whether one sentence directly follows another (sentence).",
"BERT leverages Transformer (Vaswani et al., 2017) architecture and is learned on English Wikipedia as well as BookCorpus.",
"More LM architectures are described in Section",
"5. 2.2 Problem Formulation We first define some key concepts and then present our problem formulation.",
"Entity.",
"An entity is a word or a phrase that refers to a real-world instance.",
"For example, U.S. refers to the country: United States.",
"Class Name.",
"A class name is a text representation of a semantic class.",
"For instance, country could be a class name for the semantic class that includes entities like United States and China .",
"Probing Query.",
"A probing query is a word sequence containing one [MASK ] token.",
"In this work, we utilize Hearst patterns (Hearst, 1992) to construct two types of probing queries: (1) A class-probing query aims to predict the class name of some given entities (e.g., [M ASK ] such as United States and China), and (2) an entity-probing query aims to retrieve entities that fit into the mask token (e.g., countries such as [MASK ] and Japan).",
"Problem Formulation.",
"Given a text corpus D and a seed set of user-provided entities, we aim to output a ranked list of entities that belong to the same semantic class.",
"Example",
"1. Given a seed set of three countries { United States , China , Canada } , we aim to return a ranked list of entities belonging to the same country class such as United Kingdom , Japan , and Mexico .",
"We introduce our class-guided entity set expansion framework in this section.",
"First, we present our class name generation and ranking modules in Sections 3.1 and 3.2, respectively.",
"Then, we discuss how to leverage class names to guide the iterative expansion process in Section 3.3.",
"The class name generation module inputs a small collection of entities and generates a set of candidate class names for these entities.",
"We build this module by automatically constructing class-probing queries and iteratively querying a pre-trained LM to obtain multi-gram class names.",
"First, we notice that the class name generation goal is similar to the hypernymy detection task which aims to find a general hypernym (e.g., mammal) for a given specific hyponym (e.g., panda).",
"Therefore, we leverage the six Hearst patterns (Hearst, 1992) 1 , widely used for hypernymy detection, to construct the class-probing query.",
"More specifically, we randomly select three entities in the current set as well as one Hearst pattern (out of six choices) to construct one query.",
"For example, we may choose entities { China , India , Japan } and pattern NP y such as NP a , NP b , and NP c to construct the query [M ASK ] such as China, India, and Japan.",
"By repeating such a random selection process, we can construct a set of queries and feed them into pre-trained language models to obtain predicted masked tokens which are viewed as possible class names.",
"The above procedure has one limitationit can only generate unigram class names.",
"To obtain multi-gram class names, we design a modified beam search algorithm to iteratively query a pre-trained LM.",
"Specifically, after we query a LM for the first time and retrieve top K most likely words (for the masked token), we construct K new queries by adding each retrieved word after the masked token.",
"Taking the former query [M ASK ] such as China, India, and Japan as an example, we may first obtain words like countries, nations, and then construct a new query [M ASK ] countries such as China, India, and Japan.",
"Probing the LM again with this new query, we can get words like Asian or large, and obtain more fine-grained class names like Asian countries or large coun-1 For example, the pattern NP y such as NP a indicates that noun phrase y is a hypernym of noun phrase a .",
"tries.",
"We repeat this process for maximum three times and keep all generated class names that are noun phrases 2 .",
"As a result, for each Hearst pattern and randomly selected three entities from the current set, we will obtain a set of candidate class names.",
"Finally, we use the union of all these sets as our candidate class name pool, denoted as C .",
"Note that in this module, we focus on the recall of candidate class name pool C , without considering its precision, since the next module will further rank and select these class names based on the provided text corpus.",
"In this module, we rank the above generated candidate class names to select one best class name that represents the whole entity set and some negative class names used in the next module to filter out wrong entities.",
"A simple strategy is to rank these class names based on the number of times it has been generated in the previous module.",
"However, such a strategy is sub-optimal because short unigram class names always appear more frequently than longer multi-gram class names.",
"Therefore, we propose a new method below to measure how well each candidate class name represents the entity set.",
"First, we introduce a corpus-based similarity measure between an entity e and a class name c .",
"Given the class name c , we first construct 6 entity-probing queries by masking the hyponym term in six Hearst patterns 3 , and query a pre-trained LM to obtain the set of six [MASK ] token embeddings, denoted as X c .",
"Moreover, we use X e to denote the set of all contextualized representations of the entity e in the given corpus.",
"Then, we define the similarity between e and c , as: M k ( e, c ) = 1 k max X X e , | X | = k (cid:88) x X max x (cid:48) X c cos ( x , x (cid:48) ) , (1) where cos ( x , x (cid:48) ) is the cosine similarity between two vectors x and x (cid:48) .",
"The inner max operator finds the maximum similarity between each occurrence of e and the set of entity-probing queries constructed based on c .",
"The outer max operator identifies the topk most similar occurrences of e with the queries and then we take their average as the final similarity between the entity e and the class name c .",
"This measure is analogous to finding 2 Therefore, class names likes and countries and , countries are filtered out.",
"3 For example, a query for class name countries is countries such as [MASK ].",
"k best occurrences of entity e that matches to any of the probing queries of class c , and therefore it improves the previous similarity measures that utilize only the context-free representations of entities and class names (e.g., Word2Vec).",
"After we define the entity-class similarity score, we can choose one entity in the current set and obtain a ranked list of candidate class names based on their similarities with this chosen entity.",
"Then, given an entity set E , we can obtain | E | ranked lists, L 1 , L 2 , . . . , L | E | , one for each entity in E .",
"Finally, we follow (Shen et al., 2017) and aggregate all these lists to a final ranked list of class names based on the score s ( c ) = (cid:80) | E | i =1 1 r ic , where r ic indicates the rank position of class name c in ranked list L i .",
"This final ranked list shows the order of how well each class name can represent the current entity set.",
"Therefore, we choose the best one that ranks in the first position as the positive class , denoted as c p .",
"Aside from choosing the positive class name c p , we also select a set of negative class names for the target semantic class to help bound its semantics.",
"To achieve this goal, we assume that entities in the initial user-provided seed set E 0 definitely belong to the target class.",
"Then, we choose those class names that rank lower than c p in all lists corresponding to entities in E 0 , namely { L i | e i E 0 }, and treat them as the negative class names.",
"We refer to this negative set of class names as CN and use them to guide the set expansion process below.",
"In this module, we leverage the above selected positive and negative class names to help select new entities to add to the set.",
"We first introduce two entity scoring functions and then present a new rank ensemble algorithm for entity selection.",
"where M k is defined in Eq.",
"(1).",
"We refer to this score as a local score because it only looks at topk best occurrences in the corpus where the contextualized representation of entity e i is most similar to the representation of class name c q .",
"The second scoring function calculates the similarity between each candidate entity and existing entities in the current set, based on their context-free representations.",
"For each entity e , we use the average of all its contextualized embedding vectors as its context-free representation, denoted as v e .",
"Given the current entity set E , we first sample several entities from E , denoted as E s , and calculate the score for each candidate entity e i as: score glbi = 1 | E s | (cid:88) e E s cos ( v e i , v e ) .",
"Note here we sample a small set E s (typically of size 3), rather than using the entire set E .",
"Since the current entity set E may contain wrong entities introduced in previous steps, we do not use all the entities in E and compute the candidate entity score only once.",
"Instead, we randomly select multiple subsets of entities from the current set E , namely E s , obtain a ranked list of candidate entities for each sampled subset, and aggregate all ranked lists to select the final entities.",
"Such a sampling strategy can reduce the effect of using wrong entities in E , as they are unlikely to be sampled multiple times, and thus can alleviate potential errors that are introduced in previous iterations.",
"We refer to this score as a global score because it utilizes context-free representations which better reflect entities' overall positions in the embedding space and measure the entity-entity similarity in a more global sense.",
"Such a global score complements the above local score and we use their geometric mean to finally rank all candidate entities: score i = (cid:113) score loci score glbi .",
"As the expansion process iterates, wrong entities",
"may be included in the set and cause semantic drifting.",
"We develop a novel rank ensemble algorithm that leverages those selected class names to improve the quality and robustness of entity selection.",
"First, we repeatedly sample E s (used for calculating score glbi in Eq.",
"(3)) T times from current entity set E , and obtain T entity ranked lists { R m } Tm =1 .",
"Second, we follow the class name ranking procedure in Section 3.2 to obtain | E | class ranked lists { L n } | E | n =1 , one for each entity e i E .",
"Note here each L n is actually a ranked list over { c p } CN , namely the set of selected one positive class name and all negative class names.",
"Intuitively, an entity belonging to our target semantic class should satisfy two criteria: (1) it appears at the top positions in multiple entity ranked lists, and (2) within its corresponding class ranked list, the selected best class name c p should be ranked above any one of the negative class name in CN .",
"Combining these two criteria, we define a new rank aggregation score as follows: S ( e i ) = T (cid:88) t =1 (cid:0) 1 ( e i E ) + s t ( e i ) (cid:1) 1 ( r ic p < min c (cid:48) CN r ic (cid:48) ) , (5) where 1 ( ) is an indicator function, r ic is the rank of class name c in entity e i 's ranked list L ic , and s t ( e i ) the individual aggregation score of e i deduced from the ranked list R t , for which we test two aggregation methods: (1) mean reciprocal rank, where s t ( e i ) = 1 r ti (6) and r ti is the rank of entity e i in the t -th ranked list R t ; and (2) the combination of scores (CombSUM), where s t ( e i ) = score t i min e j R t score t j max e j R t score tj min e j R t score tj (7) is the ranking score of e i in the ranked list R t after min-max feature scaling.",
"To interpret Eq.",
"5, the first summation term reflects our criterion (1) and its inner indicator function ensuring an entity in the current set E prone to have a large rank aggregation score if not been filtered out below.",
"The second term reflects our criterion (2) by using an indicator function that filters out all entities which are more similar to a negative class name than the positive class name.",
"Note here we calculate the aggregation score for all entities in Dataset # Test Queries # Entities # Sentences Wiki 40 33K 1.50M APR 15 76K 1.01M Table 1: Datasets statistics the vocabulary list, including those already in the current set E , and it is possible that some entity in E will be filtered out because it has 0 value in the second term.",
"This makes a huge difference comparing with previous iterative set expansion algorithms which all assume that once an entity is included in the set, it will stay in the set forever.",
"Consequently, our method is more robust to the semantic drifting issue than previous studies.",
"Summary.",
"Starting with a small seed entity set, we iteratively apply the above three modules to obtain an entity ranked list and add top-ranked entities into the set.",
"We repeat the whole process until either (1) the expanded set reaches a pre-defined target size or (2) the size of the set does not increase for three consecutive iterations.",
"Notice that, by setting a large target size, more true entities belonging to the target semantic class will be selected to expand the set, which increases the recall, but wrong entities are also more likely to be included, which decreases the precision.",
"However, as the output of the set expansion framework is a ranked list, the most confident high-quality entities will still be ranked high in the list.",
"Datasets.",
"We conduct our experiments on two public benchmark datasets widely used in previous studies (Shen et al., 2017; Yan et al., 2019): (1) Wiki , which is a subset of English Wikipedia articles, and (2) APR , which contains all news articles published by Associated Press and Reuters in 2015.",
"Following the previous work, we adopt a phrase mining tool, AutoPhrase (Shang et al., 2018), to construct the entity vocabulary list from the corpus, and select the same 8 semantic classes for the Wiki dataset as well as 3 semantic classes for the APR dataset.",
"Each semantic class has 5 seed sets and each seed set contains 3 entities.",
"Table 1 summarizes the statistics for these datasets.",
"Compared methods.",
"We compare the following corpus-based entity set expansion methods.",
"1. Egoset (Rong et al., 2016): This is a multifaceted set expansion system using context features and Word2Vec embeddings.",
"The original Methods Wiki APR MAP@10 MAP@20 MAP@50 MAP@10 MAP@20 MAP@50 Egoset (Rong et al., 2016) 0.904 0.877 0.745 0.758 0.710 0.570 SetExpan (Shen et al., 2017) 0.944 0.921 0.720 0.789 0.763 0.639 SetExpander (Mamou et al., 2018) 0.499 0.439 0.321 0.287 0.208 0.120 CaSE (Yu et al., 2019b) 0.897 0.806 0.588 0.619 0.494 0.330 MCTS (Yan et al., 2019) 0.980 0.930 0.790 0.960 0.900 0.810 CGExpan-NoCN 0.968 0.945 0.859 0.909 0.902 0.787 CGExpan-NoFilter 0.990 0.975 0.890 0.979 0.962 0.892 CGExpan-Comb 0.991 0.974 0.895 0.983 0.984 0.937 CGExpan-MRR 0.995 0.978 0.902 0.992 0.990 0.955 Table 2: Mean Average Precision on Wiki and APR.",
"framework aims to expand the set in multiple facets.",
"Here we treat all expanded entities as in one semantic class due to little ambiguity in the seed set.",
"2. SetExpan (Shen et al., 2017): This method iteratively selects skip-gram context features from the corpus and develops a rank ensemble mechanism to score and select entities.",
"3. SetExpander (Mamou et al., 2018): This method trains different embeddings based on different types of context features and leverages additional human-annotated sets to build a classifier on top of learned embeddings to predict whether an entity belongs to the set.",
"4. CaSE (Yu et al., 2019b): This method combines entity skip-gram context feature and embedding features to score and rank entities once from the corpus.",
"The original paper has three variants and we use the CaSE-W2V variant since it is the best model claimed in the paper.",
"5. MCTS (Yan et al., 2019): This method bootstraps the initial seed set by combing the Monte Carlo Tree Search algorithm with a deep similarity network to estimate delayed feedback for pattern evaluation and to score entities given selected patterns.",
"6. CGExpan: This method is our proposed Class-Guided Set Expansion framework, using BERT (Devlin et al., 2019) as the pre-trained language model.",
"We include two versions of our full model, namely CGExpan-Comb and CGExpan-MRR, that use the combination of score and mean reciprocal rank for rank aggregation, respectively.",
"7. CGExpan-NoCN: An ablation of CGExpan that excludes the class name guidance.",
"Therefore, it only incorporates the average BERT representation to select entities.",
"8. CGExpan-NoFilter: An ablation of CGExpan CGExpan vs. Other MAP@10 MAP@20 MAP@50 vs. SetExpan 100% 94.5% 87.3% vs. CGExpan-NoFilter 100% 94.5% 58.2% vs. CGExpan-NoCN 100% 94.5% 70.9% Table 3: Ratio of seed entity set queries on which the first method reaches better or the same performance as the second method.",
"Evaluation Metric.",
"We follow previous studies and evaluate set expansion results using Mean Average Precision at different top K positions (MAP@ K ) as below: MAP@ K = 1 | Q | (cid:88) q QAPK ( L q , S q ) , where Q is the set of all seed queries and for each query q , we use APK ( L q , S q ) to denote the traditional average precision at position K given a ranked list of entities L q and a ground-truth set S q .",
"Implementation Details.",
"For CGExpan, we use BERT-base-uncased 4 as our pre-trained LM.",
"For parameter setting, in the class name generation module (Sec. 3.1), we take top-3 predicted tokens in each level of beam search and set the maximum length of generated class names up to",
"3. When calculating the similarity between an entity and a class name (Eq. 1), we choose k = 5 , and will later provide a parameter study on k in the experiment.",
"Also, since MAP@ K for K = 10 , 20 , 50 are typically used for set expansion evaluations, we follow the convention and choose 50 as the target set size in our experiments.",
"5 4 In principle, other masked LMs such as RoBERTa and XLNet can also be used in our framework.",
"5 The code and data are available at https://github.",
"com/yzhan238/CGExpan Methods Wiki APR MAP@{10/20/50} MAP@{10/20/50} Oracle-Full 0.991/0.976/0.891 1.000/1.000/0.964 Oracle-NoFilter 0.994/0.983/0.887 0.988/0.966/0.894 CGExpan 0.995/0.978/0.902 0.992/0.990/0.955 Table 4: Compared to oracle models knowing ground truth class names, CGExpan automatically generates class names and achieves comparative performances.",
"Overall Performance.",
"Table 2 shows the overall performance of different entity set expansion methods.",
"We can see that CGExpan along with its ablations in general outperform all the baselines by a large margin.",
"Comparing with SetExpan, the full model CGExpan achieves 24% improvement in MAP@50 on the Wiki dataset and 49% improvement in MAP@50 on the APR dataset, which verifies that our class-guided model can re-fine the expansion process and reduce the effect of erroneous entities on later iterations.",
"In addition, CGExpan-NoCN outperforms most baseline models, meaning that the pre-trained LM itself is powerful to capture entity similarities.",
"However, it still cannot beat CGExpan-NoFilter model, which shows that we can properly guide the set expansion process by incorporating generated class names.",
"Moreover, by comparing our full model with CGExpan-NoFilter, we can see that negative class names indeed help the expansion process by estimating a clear boundary for the target class and filtering out erroneous entities.",
"Such an improvement is particularly obvious on the APR dataset.",
"The two versions of our full model overall have comparable performance, but CGExpan-MRR consistently outperforms CGExpan-Comb.",
"To explain such a difference, empirically we observe that high-quality entities tend to rank high in most of the ranked lists.",
"Therefore, we use the MRR version for the rest of our experiment, denoted as CGExpan.",
"Fine-grained Performance Analysis.",
"Table 3 reports more fine-grained comparison results between two methods.",
"Specifically, we calculate the ratio of seed entity set queries (out of total 55 queries) on which one method achieves better or the same performance as the other method.",
"We can see that CGExpan clearly outperforms SetExpan and its two variants on the majority of queries.",
"In Table 4, we further compare CGExpan with two oracle models that have the access to ground truth class names.",
"Results show that CGExpan can \u0000\u0014 \u0000\u0015 \u0000\u0016 \u0000\u0017 \u0000\u0018 \u0000\u0019 \u0000\u001a \u0000\u001b \u0000\u001c k \u0000\u0013\u0000\u0011\u0000\u001b\u0000\u0019 \u0000\u0013\u0000\u0011\u0000\u001b\u0000\u001b \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0015 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0017 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0019 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u001b \u0000\u0014\u0000\u0011\u0000\u0013\u0000\u0013 \u00000 \u0000$\u00003 \u0000\u0003 \u0000V\u0000F \u0000R \u0000U \u0000H \u0000V \u00000\u0000$\u00003\u0000#\u0000\u0014\u0000\u0013\u00000\u0000$\u00003\u0000#\u0000\u0015\u0000\u0013\u00000\u0000$\u00003\u0000#\u0000\u0018\u0000\u0013 \u0000\u0014 \u0000\u0015 \u0000\u0016 \u0000\u0017 \u0000\u0018 \u0000\u0019 \u0000\u001a \u0000\u001b \u0000\u001c k \u0000\u0013\u0000\u0011\u0000\u001b\u0000\u0019 \u0000\u0013\u0000\u0011\u0000\u001b\u0000\u001b \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0015 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0017 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u0019 \u0000\u0013\u0000\u0011\u0000\u001c\u0000\u001b \u0000\u0014\u0000\u0011\u0000\u0013\u0000\u0013 \u00000 \u0000$\u00003 \u0000\u0003 \u0000V\u0000F \u0000R \u0000U \u0000H \u0000V \u00000\u0000$\u00003\u0000#\u0000\u0014\u0000\u0013\u00000\u0000$\u00003\u0000#\u0000\u0015\u0000\u0013\u00000\u0000$\u00003\u0000#\u0000\u0018\u0000\u0013 Figure 3: Performance for different k values on Wiki (left) and APR (right).",
"achieve comparative results as those oracle models, which indicates the high quality of generated class names and effectiveness of CGExpan.",
"Parameter Study.",
"In CGExpan, we calculate the similarity between an entity and a class name based on its k occurrences that are most similar to the class name (cf.",
"Eq.",
"(1)).",
"Figure 3 studies how this parameter k would affect the overall performance.",
"We find that the model performance first increases when k increases from 1 to 5 and then becomes stable (in terms of MAP@10 and MAP@20) when k further increases to 10.",
"Overall, we find k = 5 is enough for calculating entity-class similarity and CGExpan is insensitive to k as long as its value is larger than",
"5. 4.3 Case Studies Class Name Selection.",
"Table 5 shows some results of our class name ranking module for several queries from different semantic classes in the Wiki dataset.",
"We see that CGExpan is able to select the correct class name and thus injects the correct semantics in later entity selection module.",
"Moreover, as shown in the last column, CGExpan can identify several negative class names that provide a tight boundary for the target semantic class, including sports and competition for sport league class, as well as city and country for Chinese province class.",
"These negative class names help CGExpan avoid adding those related but erroneous entities into the set.",
"From Table 5 we can see that it happens when the predicted positive class name is not exactly the ground true class name in the original dataset.",
"However, since we use both the generated class names and currently expanded entities as guidance and select new entities according to the context features in the provided corpus, those imperfect class names can still guide the set expansion process and perform well empirically.",
"Also, in principle, synonyms of the positive class name can be wrongly selected as negative class names, which also happens but very rarely in our experiments.",
"However, since these synonyms con-Seed Entity Set Ground True Class Name Positive Class Name Negative Class Names { Intel , Microsoft , Dell } company company product , system , bank , ... { United States , China , Canada } country country state , territory , island , ... { ESPNews , ESPN Classic , ABC } tv channel television network program , sport , show , ... { NHL , NFL , American league } sports league professional league sport , competition , ... { democratic , labor , tories } party political party organization , candidate , ... { Hebei , Shandong , Shanxi } Chinese province chinese province city , country , state , ... { tuberculossi , Parkinson's disease , esophageal cancer } disease chronic disease symptom , condition , ... { Illinois , Arizona , California } US state state county , country , ...",
"sistently rank lower than the positive one for the initial seeds based on the given corpus, they are indeed not good class names for this specific corpus.",
"Thus, misclassifying them will not have much influence on the performance of our model.",
"Entity Selection.",
"Table 6 shows expanded entity sets for two sample queries.",
"After correctly predicting true positive class names and selecting relevant negative class names, CGExpan utilizes them to filter out those related but erroneous entities, including two TV shows in television network class and three entities in political party class.",
"As a result, CGExpan can outperform CGExpan-NoFilter.",
"Entity Set Expansion.",
"Traditional entity set expansion systems such as Google Sets (Tong and Dean, 2008) and SEAL (Wang and Cohen, 2007, 2008) typically submit a query consisting of seed entities to a general-domain search engine and extract new entities from retrieved web pages.",
"These methods require an external search engine for online seed-oriented data collection, which can be costly.",
"Therefore, more recent studies propose to expand the seed set by offline processing a corpus.",
"These corpus-based set expansion methods can be categorized into two general approaches: (1) onetime entity ranking which calculates entity distributional similarities and ranks all entities once without back and forth refinement (Mamou et al., 2018; Yu et al., 2019b), and (2) iterative bootstrapping which aims to bootstrap the seed entity set by iteratively selecting context features and ranking new entities (Rong et al., 2016; Shen et al., 2017; Yan et al., 2019; Zhu et al., 2019; Huang et al., 2020).",
"Our method in general belongs to the later category.",
"Finally, there are some studies that incorporate extra knowledge to expand the entity set, including negative examples (Curran et al., 2007; McIntosh and Curran, 2008; Jindal and Roth, 2011), semi-structured web table (Wang et al., 2015), and external knowledge base (Yu et al., 2019a).",
"Particularly, Wang et al. (2015) also propose to use a class name to help expand the target set.",
"However, their method requires a user-provided class name and utilizes web tables as additional knowledge, while our method can automatically generate both positive and negative class names and utilize them to guide the set expansion process.",
"Language Model Probing.",
"Traditional language models aim at assigning a probability for an input word sequence.",
"Recent studies have shown that by training on next word or missing word prediction task, language models are able to generate contextualized word representations that benefit many downstream applications.",
"ELMo (Peters et al., 2018) proposes to learn a BiLSTM model that captures both forward and backward contexts.",
"BERT (Devlin et al., 2019) leverages the Transformer architecture and learns to predict randomly masked tokens in the input word sequence and to classify the neighboring relation between pair of input sentences.",
"Based on BERT's philosophy, RoBERTa (Liu et al., 2019) conducts more careful hyper-parameter tuning to improve the performance on downstream tasks.",
"XLNet (Yang et al., 2019) further combines the ideas from ELMo and BERT and develops an autoregressive model that learns contextualized representation by maximizing the expected likelihood over permutations of the input sequence.",
"Aside from generating contextualized representations, pre-trained language models can also serve as knowledge bases when being queried appropriately.",
"Petroni et al. (2019) introduce the language model analysis probe and manually define probing queries for each relation type.",
"By submitting those probing queries to a pre-trained LM, they show that we can retrieve relational knowledge and achieve competitive performance on various NLP tasks.",
"More recently, Bouraoui et al. (2020) further analyze BERT's ability to store relational knowledge by using BERT to automatically select high-Seed Entity Set CGExpan CGExpan-NoCN CGExpan-NoFilter 1 Pb 1 NBC 1 Pb 2 ABC 2 CBS 2 Mtv 3 CBS 3 Disney Channel 3 ABC ... ... ... 35 Telemundo 35 ESPN Radio * 35 MyNetworkTV 36 Fox Sports Net 36 BBC America 36 ESPN2 37 Dateline NBC 37 G4 37 the Today Show * 38 Channel 4 38 Sirius Satellite Radio * 38 Access Hollywood * 39 The History Channel 39 TNT 39 Cartoon Network { ESPN , Discovery Channel , Comedy Central } ... ... ... 1 republican 1 national party 1 republican 2 likud 2 labour party 2 likud 3 liberal democrats 3 gop establishment * 3 liberal democrats ... ... ... 40 komeito 40 republican jewish coalition * 40 young voters * 41 centrist liberal democrats 41 british parliament * 41 bjp 42 aipac * 42 tea party patriots * 42 religious * 43 aam aadmi party 43 centrist liberal democrats 43 congress * 44 ennahda 44 federal government * 44 lib dem { democratic party , republican party , labor party } ... ...",
"quality templates from text corpus for new relation prediction.",
"Comparing with previous work, in this paper, we show that probing pre-trained language model works for entity set expansion task, and we propose a new entity set expansion framework that combines corpus-independent LM probing with corpus-specific context information for better expansion performance.",
"In this paper, we propose a new entity set expansion framework that can use a pre-trained LM to generate candidate class names for the seed set, rank them according to the provided text corpus, and guide the entity selection process with the selected class names.",
"Extensive experiments on the Wiki and APR datasets demonstrate the effectiveness of our framework on both class name prediction and entity set expansion.",
"In the future, we plan to expand the method scope from expanding concrete entity sets to more abstract concept sets.",
"For example, we may expand the set { machine translation , information extraction , syntactic parsing } to acquire more NLP task concepts.",
"Another interesting direction is to generate a class name hierarchy via language model probing.",
"Research was sponsored in part by US DARPA KAIROS Program No.",
"FA8750-19-2-1004 and SocialSim Program No.",
"W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HD-TRA11810026.",
"Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon.",
"We thank anonymous reviewers for valuable feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Table-to-text generation aims at automatically generating natural text to help people conveniently obtain salient information in tables.",
"Although neural models for table-to-text have achieved remarkable progress, some problems are still overlooked.",
"Previous methods cannot deduce the factual results from the entity's (player or team) performance and the relations between entities.",
"To solve this issue, we first build an entity graph from the input tables and introduce a reasoning module to perform reasoning on the graph.",
"Moreover, there are different relations (e.g., the numeric size relation and the importance relation) between records in different dimensions.",
"And these relations may contribute to the data-to-text generation.",
"However, it is hard for a vanilla encoder to capture these.",
"Consequently, we propose to utilize two auxiliary tasks, Number Ranking (NR) and Importance Ranking (IR), to supervise the encoder to capture the different relations.",
"Experimental results on ROTOWIRE and RW-FG show that our method not only has a good generalization but also outperforms previous methods on several metrics: BLEU, Content Selection, Content Ordering.",
"Table-to-text generation is an essential task for text generation from structured data.",
"It aims at automatically producing descriptive natural language text to help people obtain the salient information from the tables.",
"Over the past several years, neural text generation methods have made significant progress on this task.",
"Lebret et al. (2016); Wiseman et al. (2017); Bao et al. (2018) view the input table as a record sequence and model it as a machine translation task.",
"To generate text containing more salient and well-organized facts, Sha et al. (2018); Moryossef et al. (2019); Trisedya et al. (2020); Corresponding authors: Can Ma, Yinliang Yue The Boston Celtics dominated the visiting New York Knicks, 115 -87, on Friday night at TD Garden Isaiah Thomas was huge for Boston ( 4 -4 ) as he led the way offensively with 29 points on 9-of-17 shooting, in only 28 minutes...",
"Bai et al. (2020) explicitly model content selection and planning.",
"To better represent tables, Liu et al. (2018); Nema et al. (2018); Gong et al. (2019) explicitly model the structure of a table from multiple levels or different dimensions.",
"Figure 1",
"(a) contains basketball game statistical tables from ROTOWIRE (Wiseman et al., 2017), a benchmark of NBA basketball games.",
"As can be seen, each entity (player or team) takes one row in the corresponding table.",
"Moreover, each row comprises several records of different types, which describe the entity's performance in different aspects.",
"In terms of generating a summary from these tables, it is necessary to make reasoning to obtain some factual results from the entities' performance and the relationships between entities.",
"For instance, when humans describe the tables in Figure 1",
"(a), they usually give some factual results, such as The Boston Celtics dominated the visiting New York Knicks or Isaiah Thomas was huge for Boston....",
"These results need to be reasoned from the entities' performance and the relationships between entities.",
"Therefore, it is necessary to give the model the reasoning ability.",
"However, previous methods do not explicitly model this ability.",
"Numerical tables mean most records in these tables are numerical and are very common.",
"For instance, 86 .",
"82% of the records and almost 86 .",
"49% of the column types are numeric in ROTOWIRE.",
"We observe that there are different relations between records in different dimensions.",
"For example, there are two kinds of relations in numerical tables.",
"The first one is numerical size relation in the column dimension, i.e., in the same type column.",
"The other is the relative importance relation in the row dimension.",
"It refers to the relative importance of different types of records, which are in the same row, to the entity that they belong to.",
"On the one hand, these relations may contribute to table-to-text generation.",
"Let us take Figure 1",
"(a) as an example.",
"I.Thomas's score is 29 , which is higher than other records in the column PTS.",
"And he has three rebounds, which is lower than most other records in the column REB.",
"Therefore, humans are more likely to describe his scores rather than his rebounds when summarizing his performance.",
"On the other hand, a vanilla encoder may not effectively capture the relations existing in different dimensions without any auxiliary supervision.",
"We employ a hierarchical encoder, which comprises a Record Encoder and a Reasoning Module, to encode the input tables from record level and row level.",
"Specifically, inspired by Gong et al. (2019), the Record Encoder utilizes two cascaded self-attention modules to encode the table from the column and the row dimension, respectively.",
"Moreover, to endow the model with the reasoning ability, we first build an entity graph on the row level according to the relations between players and teams.",
"And then, we introduce a reasoning module to perform reasoning on the graph.",
"Furthermore, we utilize different auxiliary tasks to help the encoder capture the different relations among records.",
"More specifically, two auxiliary tasks named Number Ranking (NR) and Importance Ranking (IR) are proposed to supervise the learning of the different parts of the Record Encoder, respectively.",
"We conducted experiments on ROTOWIRE and RW-FG(Wang, 2019) to verify the effectiveness of the proposed approach.",
"The experimental results demonstrate that it is necessary to enable the model the reasoning ability.",
"Moreover, the proposed two auxiliary tasks can improve the data-to-text model's performance without introducing extra parameters.",
"Furthermore, the results also show our method not only has a good generalization but also outperforms previous methods on BLEU, Content Selection, and Content Ordering metrics.",
"Recently, neural models have been the mainstream for table-to-text generation and obtained impressive results.",
"Early works on table-to-text generation regard it as a distinct machine translation task and view a structured table as a record sequence (Lebret et al., 2016; Wiseman et al., 2017; Bao et al., 2018).",
"Most recent works are inspired by the traditional methods for data-to-text generation and introduce explicit content selection and planning to improve the results (Sha et al., 2018; Puduppully et al., 2019b; Moryossef et al., 2019; Trisedya et al., 2020; Bai et al., 2020), and they obtain training labels by aligning the input tables with related summaries.",
"However, this alignment may introduce additional errors.",
"Some works attempt to use additional knowledge to improve the quality of the generated text.",
"Nie et al. (2018) utilize pre-executed symbolic operations on the input table in a sequence-to-sequence model to improve the fidelity of neural table-to-text generation.",
"Chen et al. (2019) introduce the background knowledge of the entity in the table to improve results.",
"In addition to introducing external knowledge, some works learn better representation for the table by explicitly modeling the table's structure.",
"Liu et al. (2018) propose a structure-aware seq2seq architecture, which incorporates the filed information as the additional inputs to the table encoder.",
"Some works (Bao et al., 2018; Nema et al., 2018; Jain et al., 2018) model the table's representation from the row and column levels, and utilize the dual attention decoder to generate text.",
"Gong et al. (2019) introduce the historical data for each table and utilize a self-attention-based hierarchical encoder on three dimensions (row, column, and time) to enrich the table's representation.",
"Furthermore, Liu et al. (2019) propose three auxiliary supervision tasks (sequence labeling, text auto-encoding, and multi-label classification) to help the encoder capture a more accurate semantic representation of the tables.",
"Gong et al. (2020) also explicitly model the relations between the numeric records.",
"They pretrain a multi-layer transformer encoder to obtain records' contextual numerical value representations.",
"Moreover, when training the data-to-text model, they replace the record's token embedding with its con-Number Ranking Importance Ranking RELCERERFGENI Row-wise Encoder Column-wiseEncoder Name MIN PTS AST REB STL Marcus Smart 34 12 10 6 3 Amir Johnson 22 2 1 3 1 Kelly Olynyk 30 19 3 7 2 Avery Bradley 32 15 2 10 3 Isaiah Thomas 28 29 4 3 1 I. Thomas ImportanceScores Isaiah Thomas 4 1 2 5 5 Number Ranking PTS 12 2 19 15 29 Self Attention S e l f A tt e n t i o n",
"textual representation from the pre-trained model.",
"Differently, our Number Ranking task is trained with the data-to-text model and can supervise the model actively to capture the numeric size relation without introducing extra parameters.",
"Each input instance consists of three different tables T 1 , T 2 , T 3 , containing records about players' performance in the home team, players' performance in the visiting team, and the team's overall performance.",
"Each cell in the table is regarded as a record.",
"Inspired by Gong et al. (2019), we utilize two self-attention modules to model each record's contexts from the column and the row dimension, respectively.",
"After that, we obtain the fusion representation for records by the record fusion gate.",
"Record Embedding Following previous work (Wiseman et al., 2017), we utilize four tuples to represent each record r .",
"The four tuples include: entity r.e (the name of team or player, such as Carmelo Anthony), type r.t (e.g., PTS) and value r.v as well as feature r.f (e.g., home or visiting) which indicates whether a player or a team compete in home court or not.",
"And we utilize 1-layer MLP to encode the embeddings of each record's four types of information into a dense vector r embi,j , r embi,j = Relu ( W e [ r i,j .e ; r i,j .t ; r i,j .v ; r i,j .f ] + b e ) , where i , j denote a record in the table of i -th row and j -th column, [; ] denotes the vector concatenation, W e and b e are trainable parameters.",
"Column-wise Encoder To capture the numeric size relation between records, we adopt a self-attention module to model record in the context of other records in the same column and obtain the column dimension representation vector r coli,j as: coli,j,i (cid:48) exp( W col 2 tanh( W col 1 [ r embi,j ; r embi (cid:48) ,j ]) (1) r coli,j = R (cid:88) i (cid:48) =1 ,i (cid:48) (cid:54) = i coli,j,i (cid:48) r embi (cid:48) ,j (2) r coli,j = W col 3 [ r coli,j ; r embi,j ] (3) where W col 1 , W col 2 and W col 3 are trainable parameters, R represents the number of rows in the table.",
"Row-wise Encoder Considering the size relation captured by the Column-wise Encoder (CE) may help the learning of importance relation on row level, we have the Column-wise Encoder and the Row-wise Encoder (RE) in series (as shown in Figure 2).",
"In other words, the input of RE is r coli,j rather than r embi,j .",
"We use another self-attention module, similar to the CE, to obtain the row dimension representation r rowi,j for records.",
"reflecting the record's information.",
"Therefore, we utilize a fusion gate to combine the two dimension representations adaptively(Gong et al., 2019).",
"First, we concatenate the two dimension representations of a record and utilize an MLP to obtain a general representation for it as r geni,j .",
"Then, we compare the column dimension representation with r geni,j to obtain its important score: s coli,j exp( W f 2 tanh( W f 1 [ r geni ; r coli,j ])) (4) where W f 1 and W f 2 are trainable parameters.",
"Equally, we obtain the important score s rowi,j for the row dimension representation r rowi,j .",
"Finally, we obtain the fused record representation r fi,j by weighted sum s coli,j r coli,j + s rowi,j r rowi,j .",
"The fused record representations { r fi,j } R,Ci =1 ,j =1 will be used as the input of the text decoder.",
"As mentioned in Section 1, we observe some factual results in text that require reasoning from the entities' performance and the relationships between them.",
"Therefore, it is necessary to enable model the reasoning ability.",
"To achieve this, we primarily build an entity graph according to the entities' relationships in input tables, as shown in Figure 1",
"(c).",
"And then, we leverage Graph Neural Networks (GNN) to perform reasoning.",
"Following, we describe the details of the reasoning process.",
"Primarily, we obtain the initialized representation for each entity in tables by the Entity Node Initialization module (ENI).",
"Considering that different records in the same row may not contribute the same, we combine them dynamically by attention mechanism.",
"We first compute a general representation vector e geni for the entity e i , which is given by mean-pooling over the same row records r fi, 1 , r fi, 2 , ..., r fi,C .",
"Then we compare each record in the i -th row with e geni and obtain the initialized entity representation e 0 i by weighted sum: ri,j exp( W r 2 tanh( W r 1 [ e geni ; r fi,j ])) (5) e 0 i = j = C (cid:88) j =1 ri,j r fi,j (6) After obtaining the initial representations of entities, we adopt graph neural networks to propagate entity node information to their neighbors.",
"Inspired by GAT(Velickovic et al., 2018), we use multi-head attention to measure the relatedness between target entity node e i and its neighbor nodes at layer l : li,j = MultiHeadAttention ( e l 1 i , e l 1 j ) (7) where j N i and N i means the neighbor nodes set of target entity e i .",
"The neighbor entities include information that is not relevant to the target entity.",
"Therefore, we modify the way the information flow in GAT.",
"Explicitly, we incorporate gate mechanisms into information aggregation to filter out noises from neighbor nodes and extract useful information, which we name GatedGAT.",
"The representation e li of e i at layer l is calculated as follows: e li = gate li e l 1 i + (1 gate li ) e li (8) e li = ELU ( (cid:88) j N i li,j e l 1 j ) (9) gate l i = sigmoid ( W l [ e l 1 i ; e l i ]) (10) where W l is a learnable parameter.",
"The entities' representations { e Li } Ri =1 at the last layer L are employed in text decoder.",
"To make use of record-level and row-level semantics information, we adopt the dual attention mechanism.",
"Specifically, at decoding step t , the input of the LSTM unit is the embedding of the previously predicted word y t 1 .",
"And given the decoder state d t , we first calculate the row-level attention t,i , which is based on the similarity between the decoder state d t and the entities' representations { e Li } Ri =1 .",
"Then we compute the record-level attention t,i over all the record representations { r fi,j } R,Ci,j which are normalized among records in the same row.",
"Finally, we fuse these two-level attention and obtain the context representation as: (cid:48) t,i,j = t,i t,i,j (11) c dt = R (cid:88) i =1 C (cid:88) j =1 (cid:48) t,i,j r i,j (12) Given a reference output { y i } Ti =1 , we use the cross-entropy loss as the objective function of table-to-text generation: L lm = T (cid:88) i =1 p ( y t | y 1: t 1 ; c dt ) (13) 3.4 Auxiliary Supervision Task Liu et al. (2019) have shown that a single encoder without any auxiliary assistant may not be effective to capture the accurate semantic representation.",
"Inspired by this, we propose two auxiliary tasks, Number Ranking (NR) and Importance Ranking (IR), to help the Column-wise Encoder and the Row-wise Encoder capture the size relation and the relative importance relation among records respectively.",
"Number Ranking In practice, many tables mainly comprise numeric records.",
"Different from text-type content, the numerical content contains less semantic information but the size relation.",
"The size relation means the value of a record is larger or smaller than others, and it plays an essential role in records selection.",
"For example, humans tend to focus on the highest scores or the fewest faults in a basketball game table.",
"Therefore, it is necessary to incorporate size relation into record representation.",
"To achieve this, we propose an auxiliary supervision task named Number Ranking (NR) to supervise the learning of the Column-wise Encoder.",
"As shown in Figure 2 top, we take a list of records in column PTS to illustrate how it works.",
"Specifi-cally, we regard the PTS column of the table as an out-of-order set of records C = r 1 , r 2 , ..., r R , and the goal is to generate a sequence of record pointers in descending order according to their value.",
"We adopt the Pointer Networks (Vinyals et al., 2015) to solve this problem and the output of Columnwise Encoder r coli (we omitted the indices on the column dimension) as its input.",
"Let z = z 1 , ..., z R denote the sequence of the ranked records' indices.",
"Each z k points to an input record and is between 1 and R .",
"As shown in Figure 2, we use an LSTM as the decoder.",
"The MeanP ooling ( { r i } Ri =1 ) is used as the initialization of the first hidden state of the decoder.",
"At each decoding step t , we calculate a distribution over the input records: h t = LST M ( h t 1 , r colz t 1 ) (14) p nt,i exp ( W nr [ h t ; r coli ]) (15) where W nr is a trainable parameter, and p nt,i denotes the probability that the output points to the record r i at step t .",
"We take the cross-entropy loss for this task: L nr = C (cid:88) j =1 R (cid:88) i =1 log p ni,z i (16) Importance Ranking When people describe a player's performance in a basketball game, they tend to focus on his relatively important record and describe these firstly.",
"Consequently, we introduce the Importance Ranking task (IR) to supervise the Row-wise Encoder to capture the relative importance relations between records in the same row.",
"This task's input is a sequence record in the same row, and the output is a sequence of records in descending order of the records' importance.",
"We employ a pointer network similar to the one used in the Number Ranking task to model this task.",
"However, different from the records in the same column, these in the same row cannot be directly compared as they represent different meanings.",
"To address this issue, we take the rank of each record in the column as an importance indicator.",
"Figure 2 left bottom shows an example of calculating the importance scores for records in the last row of the table.",
"The input of the decoder is the output of the Row-wise Encoder { r rowj } Rj =1 .",
"And the output is the ascending order of the input, according to the records' importance scores.",
"Let p st,j denote the probability of pointing to record r j at decoding step t , the loss function for this task is: L ir = R (cid:88) i =1 C (cid:88) j =1 log p sj,z j (17) 3.5 Loss Function and Training These two tasks are trained together with the table-to-text task, and the overall objective function consists of three parts: L = L lm + 1 L nr + 2 L ir (18) where 1 and 2 are tunable hyper-parameters.",
"We conduct experiments on both ROTOWIRE and RW-FG datasets.",
"They all comprise pairs of NBA basketball game statistics and summaries.",
"There are two main differences between ROTOWIRE and RW-FG.",
"The first is the team statistic table in later containing more numeric records.",
"The other is RW-FG removes the unsupported sentences by the input tables.",
"We use the official training, development, and test splits for both datasets, which are 3,398/727/728 and 5,232/1,125/1,119, respectively.",
"Following previous works, we use BLEU and three extractive evaluation metrics, Relation Generation (RG), Content Selection (CS), and Content Ordering (CO) (Wiseman et al., 2017) to evaluate the table-to-text results.",
"More specifically, RG measures the content fidelity of generated text, CS measures how well the generated text matches the reference in selecting which records to generate, and CO measures the ability on context planning.",
"We refer the readers to Wiseman et al. (2017)'s paper for more detailed information on these extractive metrics.",
"We apply Accuracy (Acc) and normalized Dam-erau Levenshtein Distance (DLD) (Brill and Moore, 2000) to evaluate the two auxiliary supervision tasks.",
"Accuracy measures the percentage of record sequences for which their absolute positions are correctly predicted (Logeswaran et al., 2018).",
"To make a fair comparison, we follow the config-urations in (Puduppully et al., 2019a; Gong et al., 2019).",
"For the table-to-text model, we set word embedding and LSTM decoder hidden size as 600.",
"We set GatedGat's layer as 2 and the numbers of heads as 2 .",
"We employ a two-layer LSTM decoder with Input feeding during text generation.",
"We apply dropout at a rate 0.3.",
"For text decoding, we use BPTT and set the truncate size to 100.",
"We set the beam size to 5 during inference.",
"For the two auxiliary tasks, we employ two one-layer LSTM as the decoder and set the LSTM decoder hidden size as 600, respectively.",
"We adjust 1 between 0 .",
"8 and 1 .",
"0 , 2 between 0 .",
"2 0 .",
"4 .",
"Finally, we set them to 0 .",
"9 and 0 .",
"25 on ROTOWIRE, 1 .",
"0 and 0 .",
"4 on RW-FG.",
"For inferring, we use the greedy search algorithm.",
"All experiments are conducted on an NVIDIA Tesla V100.",
"Code of our model can be found at https://github.com/liang8qi/ Data2TextWithAuxiliarySupervision .",
"TEMP (Wiseman et al., 2017) is a template-based method.",
"We refer the readers to this paper for more detailed information on templates.",
"CC (Wiseman et al., 2017) is a standard encoder-decoder system with conditional copy mechanism.",
"NCP (Puduppully et al., 2019a) and NCP + TR (Wang, 2019) are two Conditional Copy models with the explicit content planning.",
"The latter improves NCP by introducing a table restructure loss.",
"ENT (Puduppully et al., 2019b) is a method that creates entity-specific representations and generates text using hierarchical attention over the input table and entity memory.",
"HETD (Gong et al., 2019) is a method modeling table from three different dimensions (Row, Column and, Time).",
"DU & DUV (Gong et al., 2020): the DU brings the sense of value comparison into content planning.",
"Furthermore, DUV introduces content plan verification into DU.",
"Automatic Evaluation Our results on the two test datasets are summarized in Table 1.",
"For ROTOWIRE, compared with previous neural models, our method achieves state-of-the-art results on Content Selection (CS), Content Ordering (CO), and BLEU.",
"More specifically, compared with the previous best neural models, we obtain more than 4 improvement on CS-P and achieve the best results on CS-R.",
"This implies our method can generate text that contains more salient records.",
"Compared with NCP , DU , and DUV , our method scores the highest on CO, even without explicitly modeling content selection and planning.",
"This indicates that our model can better organize the records when generating a summary for the input tables.",
"We consider there are two main reasons.",
"The first is that our Reasoning Module can learn a better entity representation on row level.",
"The other is that our proposed two auxiliary tasks can supervise the Record Encoder to learn a number-aware and relative importance-aware record representation.",
"As a result, the data-to-text model can make good con-Model RG CS CO BLEU # P% F1% DLD% Our Model 34.37 90.03 44.34 23.64 17.31 Series 32.74 91.56 41.42 21.52 17.19 RM 33.91 89.58 43.71 23.04 16.98 + NE 38.41 92.28 44.22 23.16 16.23 + NE & IE 32.85 92.68 45.33 24.49 16.81 + NR 32.47 93.76 45.93 24.29 18.56 + IR 35.30 92.65 43.34 22.04 17.47 + NR & IR 33.93 92.40 46.13 25.28 17.68 Table 3: Ablation results for evaluating each compo-nent's contribution on ROTOWIRE development set.",
"tent planning by considering the entity's performance and the relative importance of the record.",
"As shown in Table 1, the results on RW-FG follow a pattern similar to ROTOWIRE.",
"We notice that all models perform better on RW-FG than on ROTOWIRE.",
"We consider that the improvement comes from the purification of data in RW-FG.",
"Wang (2019) removes the sentences that are not supported by the input tables, which reduces the noise in the text and improves the dataset's quality.",
"Due to this, we can obtain more accurate content planning labels from the dataset to train the models (NCP, NCP+TR) that explicitly model content planning and lead to better performance.",
"Therefore, NCP outperforms ENT on RW-FG.",
"However, the purification may make the task easier because some sentences that do not be supported by the tables directly but can be obtained by reasoning may also be removed.",
"This may weaken the Reasoning Module of our model.",
"Nevertheless, we still outperform the compared baselines.",
"Table 2 shows our model's performance, which is trained together with the two auxiliary tasks on the two auxiliary tasks.",
"We compare it with two baselines.",
"The first is Original , which denotes a method that takes the input record sequence as the outputs.",
"Moreover, we separately train our model on the two auxiliary tasks, denoted as Separate .",
"As a result, our model achieves comparable performance to Separate and is much better than Original , even only using the greedy search at testing.",
"The results indicate that the two auxiliary tasks can help the Record Encoder capture the size relation and relative importance relation among records.",
"Ablation Study First, we examine the effect of changes in the model structure on the results.",
"From Table 3, Our Model means our data-to-text model without two auxiliary tasks.",
"We change the connection mode between the Column-wise Encoder Model RG CS CO BLEU P% # F1% DLD% NCP 86.67 31.46 40.02 18.73 15.61 NCP +HEnc 87.22 27.36 43.55 22.42 15.83 + NR 89.41 28.54 44.56 23.50 16.17 + NR&IR 90.96 27.71 46.29 24.23 16.29 Table 4: Generalization study on ROTOWIRE development set.",
"( CE ) and the Row-wise Encoder ( RE ) to parallel from series ( Series ).",
"Moreover, we replace the Reasoning Module with a row-level encoder with the content selection gate ( RM ), which is proposed by Puduppully et al. (2019a).",
"According to the results, the serial connection and the Reasoning Module contribute to the overall performance because BLEU, CS, and CO drop significantly after subtracting them from the full model.",
"Furthermore, we investigate the impact of the two auxiliary tasks on table-to-text generation.",
"Table 3 shows that both Number Ranking (NR) and Importance Ranking (IR) tasks can improve our basic model.",
"This indicates that it is necessary to explicitly model the size relation and relative importance relation between records.",
"We notice that the model's performance is degraded on CS-F1 and CO when only the IR task is introduced.",
"On the one hand, we believe this is because the modeling of relative importance relation in the row dimension between records depends heavily on its size relation in the column dimension.",
"On the other hand, the CE cannot accurately capture the size relation between records without direct supervision.",
"Finally, we compare the method that introduces additional feature vectors of the ranking of number and relative importance to Record Embedding with the two auxiliary tasks.",
"Specifically, we first introduce the embedding of ranking of the number ( + NE ) and further add the embedding of the relative importance of records ( + IE ).",
"As shown in the third section in Table 3, the NE only improves the model on RG.",
"Moreover when the IE is incorporated, the model achieves better performance on almost all metrics.",
"However, the improvement is not as significant as the auxiliary tasks.",
"We believe it may be a better way to effectively capture the accurate semantic representation by introducing auxiliary supervision tasks than adding feature vectors directly.",
"Generalization Study Our method can be applied to the existing works, especially those that explicitly model content selection and planning (NCP, DUV), to improve their performance.",
"To exam our method's generalization, we combine our method with NCP and conduct experiments on the ROTOWIRE development set.",
"The results are summarized in Table",
"4. First, we use the released code to retrain the NCP model.",
"And then, we replace the NCP's content selection encoder with our hierarchical encoder.",
"As can be seen, our hierarchical encoder with the Reasoning Module improves the NCP model on almost all evaluation metrics.",
"Moreover, we train the model with the proposed two auxiliary supervision tasks.",
"The performance of the model is further improved.",
"This indicates that our method has a good generalization, as it can be easily adapted to other methods and improve their performance.",
"Human Evaluation To examine whether human judgments corroborate improvements in automatic evaluation metrics, we conducted a human evaluation.",
"Three graduate students with basketball background knowledge and good English reading ability were invited to conduct the evaluation.",
"We compared our best performing model against Gold, NCP, ENT, and HETD.",
"Specifically, we randomly selected 30 games from the test set, and each game is rated by three workers.",
"For each game, we arranged every 5-tuple of summaries into ten pairs.",
"Given each pair, the participants were asked to choose which one is better according to five criteria: Supporting (does the summary contain more supported facts?), Contradicting (does the summary contain more contradicting facts?), Grammaticality (is the summary fluent and grammatical?), Coherence (do the sentences, in summary, follow a coherent discourse?), and Conciseness (does the summary contain less redundant information and repetitions?).",
"Following previous work (Pudup-pully et al., 2019a), we calculated a model's score for each criterion as the difference between the percentage of times when the model is chosen as the best and the percentage of times when the model is chosen as the worst.",
"The results are summarized in Table",
"5. As can be seen, the gold texts have significant advantages in contradicting, grammaticality, coherence, and conciseness.",
"Compared with other neural methods, our method receives the highest scores in coherence and grammaticality.",
"This implies that our method can generate texts that contain well-organized facts.",
"Though the ENT model outperforms our model in contradicting and conciseness, our method can be easily applied to it, which we leave for future work.",
"In this work, we mainly make two contributions.",
"The first one is we introduce a reasoning module into a hierarchical table encoder, which enables the model reasoning ability.",
"Moreover, we present to utilize the different auxiliary supervision tasks to help the encoder capture the different relations between records.",
"In detail, the Number Ranking (NR) task is proposed to supervise the Column-wise Encoder to model the numeric size relation between records in the same column.",
"And the Importance Ranking (IR) task helps the Row-wise Encoder capture the relative importance between records in the same row.",
"Experimental results conducted on ROTOWIRE and RW-FG datasets demonstrate the effectiveness of our method.",
"Furthermore, we migrate our method to the NCP model and significantly improve its performance on ROWTOWIRE.",
"This indicates that our proposed method has a good generalization."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"abstain",
"objective",
"result",
"objective"
] |
[
"Neural Machine Translation (NMT) systems are known to degrade when confronted with noisy data, especially when the system is trained only on clean data.",
"In this paper, we show that augmenting training data with sentences containing artificially-introduced grammatical errors can make the system more robust to such errors.",
"In combination with an automatic grammar error correction system, we can recover 1 .",
"0 BLEU out of 2 .",
"4 BLEU lost due to grammatical errors.",
"We also present a set of Spanish translations of the JFLEG grammar error correction corpus, which allows for testing NMT robustness to real grammatical errors.",
"Neural Machine Translation (NMT) is undeniably a success story: public benchmarks (Bojar et al., 2016) are dominated by neural systems, and neural approaches are the de facto option for industrial systems (Wu et al., 2016; Hassan Awadalla et al., 2018; Crego et al., 2016; Hieber et al., 2018).",
"Even under low-resource conditions, neural models were recently shown to outperform traditional statistical approaches (Nguyen and Chiang, 2018).",
"However, there are still several shortcomings of NMT that need to be addressed: a (non-exhaustive) list of six challenges is discussed by Koehn and Knowles (2017), including out-of-domain testing, rare word handling, the wide-beam problem, and the large amount of data needed for learning.",
"An additional challenge is robustness to noise, both during training and at inference time.",
"In this paper, we study the e ect of a specific type of noise in NMT: grammatical errors.",
"We primarily focus on errors that are made by non-native Equal contribution.",
"Work performed at the University of Notre Dame.",
"source-language speakers (as opposed to dialectal language, SMS or Twitter language).",
"Not only is this linguistically important, but we believe that it would potentially have great social impact.",
"Our contributions are three-fold.",
"First, we con-firm that NMT is vulnerable to source-side noise when trained on clean data, losing up to 3 .",
"6 BLEU on our test set.",
"This is consistent with previous work, yet orthogonal to it, since we use more realistic noise for our experiments.",
"Second, we explore training methods that can deal with noise, and show that including noisy synthetic data in the training data makes NMT more robust to handling similar types of errors in test data.",
"Combining this simple method with an automatic grammar correction system, we find that we can recover 1 .",
"5 BLEU.",
"Third, we release Spanish translations of the JFLEG corpus, 1 a standard benchmark for English Grammar Error Correction (GEC) systems.",
"We also release all other data and code used in this paper.",
"Our additional annotations on both the JFLEG corpus and the English WMT data will enable the evaluation of the robustness of NMT systems on realistic, natural noise: a robust system would ideally produce the same output when presented with either the original or the noisy source sentence.",
"We hope that our datasets will become a benchmark for noise-robust NMT, because we believe that deployed systems should also be able to handle source-side noise.",
"We focus on NMT from English to Spanish.",
"We choose English to be our source-side language because there exist English corpora annotated with grammar corrections, which we can use as a 1 Freely available at https://bitbucket.com/ antonis/nmt-grammar-noise source of natural noise.",
"Moreover, since English is probably the most commonly spoken non-native language (Lewis et al., 2009), our work could be directly applicable to several translation applications.",
"Our choice of Spanish as a target language enables us to have access to existing parallel data and easily create new parallel corpora (see below, 2.3).",
"For all experiments, we use the Europarl English-Spanish dataset (Koehn, 2005) as our training set.",
"In the synthetic experiments of Section 2.2, we use the newstest2012 and new-stest2013 as dev and test sets, respectively.",
"Furthermore, to test our translation methods on real grammatical errors, we introduce a new collection of Spanish translations of the JFLEG corpus ( 2.3).",
"To our knowledge, there are five publicly available corpora of non-native English that are annotated with corrections, which have been widely used for research in Grammar Error Correction (GEC).",
"The NUS Corpus of Learner English (NUCLE) contains essays written by students at the National University of Singapore, corrected by two annotators using 27 error codes (Dahlmeier et al., 2013).",
"It has become the main benchmark for GEC, as it was used in the CoNLL GEC Shared Tasks (Ng et al., 2013, 2014).",
"Other corpora include the Cambridge Learner Corpus First Certificate in English FCE corpus (Yannakoudakis et al., 2011), which is only partially public, the L ang -8 corpus (Tajiri et al., 2012), which was harvested from online corrections, and the AESW 2016 Shared Task corpus, which contains corrections on texts from scientific journals.",
"The last corpus is the JHU FLuency-Extended GUG corpus (JFLEG) (Napoles et al., 2017).",
"This corpus covers a wider range of English proficiency levels on the source side, and its correction annotations include extended fluency edits rather than just minimal grammatical ones.",
"That way, the corrected sentence is not just grammatical, but also guaranteed to be fluent.",
"Ideally, we would train a translation model to translate grammatically noisy language by training it on parallel data with grammatically noisy language.",
"Since, to our knowledge, no such data exist in the quantities that would be needed, an al-Error Type Confusion Set art { a, an, the, } prep { on, in, at, from, for, under, over, with, into, during, until, against, among, throughout,of, to, by, about, like, before, after, since, across, behind, but, out, up, down, o , } nn { SG, PL } sva { 3SG, not 3SG, 2SG-Past, not 2SG-Past } Table 1: Confusion sets for each grammar error type.",
"ternative is to add synthetic grammatical noise to clean data.",
"An advantage of this approach is that controlled introduction of errors allows for fine-grained analysis.",
"This is a two-step process, similar to the methods used in the GEC literature for creating synthetic data based on confusion matrices (Ro-zovskaya et al., 2014; Rozovskaya and Roth, 2010; Xie et al., 2016; Sperber et al., 2017).",
"First, we mimic the distribution of errors found in real data, and then introduce errors by applying rule-based transformations on automatic parse trees.",
"The first step involves collecting error statistics on real data.",
"Conveniently, the NUCLE corpus has all corrections annotated with 27 error codes.",
"We focus on five types of errors, with the last four being the most common in the NUCLE corpus: drop : randomly deleting one character from the sentence.",
"2 art : article / determiner errors prep : preposition errors nn : noun number errors sva : subject-verb agreement errors Using the annotated training set of the NUCLE corpus, we compute error distribution statistics, resulting in confusion matrices for the cases outlined in Table",
"1. For art and prep errors, we obtain probability distributions that an article, determiner, or preposition is deleted, substituted with another member of the confusion set, or inserted in the beginning of a noun phrase.",
"For nn errors, 2 This error is not part of the NUCLE error list.",
"we obtain the probability of a noun being replaced with its singular or plural form.",
"For sva errors, the probability that a present tense verb is replaced with its third-person-singular (3SG) or not-3SG form.",
"An additional sva error that we included is the confusion between the appropriate form for the verb to be' in the past tense (was' and were').",
"The second step involves applying the noise-inducing transformations using our collected statistics as a prior.",
"We obtained parses for each sentence using the Berkeley parser (Petrov et al., 2006).",
"The parse tree allows us to identify candidate error positions in each sentence (for example, the beginning of a noun phrase without a determiner, were one could be inserted).",
"For each error type we introduced exactly one error per sentence, wherever possible, which we believe matches more realistic scenarios than previous work.",
"It also allows for controlled analysis of the behaviour of the NMT system (see Section 4).",
"For each error and each sentence, we first identify candidate positions (based on the error type and the parse tree) and sample one of them based on the specific error distribution statistics.",
"Then, we sample and introduce a specific error using the corresponding probability distribution from the confusion matrix.",
"(In the case of drop , nn , and sva errors, we only need to sample the position and only insert / substitute the corresponding error.)",
"If no candidate positions are found (for example, a sentence doesn't have a verb that can be substituted to produce a sva error) then the sentence remains unchanged.",
"Following the above procedure, we added errors in our training, dev, and test set (henceforth referred to as [ error ]).",
"Basic statistics on our produced datasets can be found in Table 2, while example sentences are shown in Table 3.",
"Furthermore, we created training and dev sets that mix clean and noisy data.",
"The clean+ [ error ] training sets are the concatenation of each [ error ] with the clean data, e ectively including a clean and a noisy version of each sentence pair.",
"We also created a training and dev dataset with mixed error types, in our attempt to study the effect of including all noise types during training.",
"The mix all dataset includes each training pair six times: once with the original (clean) sentence as the source, and once for every possible error.",
"We experimented with a mixed dataset that included each training sentence once, with the number of noisy sentences being proportional to the real error distributions of the NUCLE dataset, but obtained results similar to the [ error ] datasets.",
"The JFLEG corpus consists of a dev and test set (no training set), with 747 and 754 English sentences, respectively, collected from non-native English speakers.",
"Each sentence is annotated with four di erent corrections, resulting in four (fluent and grammatical) reference sentences.",
"About 14% of the sentences do not include any type of error, with the source and references being equivalent.",
"We created translations of the JFLEG corpus that allow us to evaluate how well NMT fares compared to a human translator, when presented with noisy input.",
"We will refer to the augmented JFLEG corpus as JFLEGes .",
"Two professional translators were tasked with producing translations for the dev and the test set, respectively.",
"The translators were presented only with the original erroneous sentences; they did not have access to the correction annotations.",
"They were asked to produce fluent, grammatical translations in European Spanish (to match the Error Type Example art In October , Tymoshenko was sentenced to seven years in prison for entering into what was reported to be a / * disadvantageous gas deal with Russia. Its ratification would require / *the 226 votes. It is a / *the good result, which nevertheless involves a certain risk. prep [. . . ] the motion to revoke an article based on / *in which the opposition leader , Yulia Tymoshenko , was sentenced. Its ratification would require / *for 226 votes. nn Its ratification would require 226 votes / *vote . The verdict / *verdicts is not yet final ; the court will hear Tymoshenko 's appeal in December. sva As a rule, Islamists win / *wins in the country; the question is whether they are the moderate or the radical ones. This cultural signature accompanies / *accompany the development of Moleskine; Table 3: Example grammatical errors that were introduced in the En-Es WMT test set. Spanish used in the Europarl corpus).",
"There exist cases where a translator might choose to preserve a source-side error when producing the translation, such as translation of literary works where it's possible that grammar or fluency errors are intentional; however, our translators were explicitly asked not to do that.",
"The exact instructions were as follows: Please translate the following sentences.",
"Note that some sentences will have grammatical errors or typos in English.",
"Don't try to translate the sentences word for word (e.g. replicate the error in Spanish).",
"Instead, try to translate it as if it was a grammatical sentence, and produce a fluent grammatical Spanish sentence that captures its meaning.",
"In this section, we provide implementation details and the results of our NMT experiments.",
"For convenience, we will refer to each model with the same name as the dataset it was trained on; e.g. the mix all model will refer to the model trained on the mix all dataset.",
"All data are tokenized, truecased, and split into subwords using Byte Pair Encoding (BPE) with 32 , 000 operations (Sennrich et al., 2016).",
"We filter the training set to only contain sentences up to 80 words.",
"Our LSTM models are implemented using DyNet (Neubig et al., 2017), and our transformer models using PyTorch (Paszke et al., 2017).",
"The transformer model uses 6 layers, 8 attention heads, the dimension for embeddings and positional feed-forward are 512 and 2048 respectively .",
"The sublayer computation sequence follows the guidelines from Chen et al. (2018).",
"Dropout probability is set to 0.2 (also in the source embeddings, following Sperber et al. (2017)).",
"We use the learning rate schedule in Vaswani et al. (2017) with warm-up steps of 24000 but only decay the learning rate until it reaches 10 5 as inspired by Chen et al. (2018).",
"For testing, we select the model with the best performance on the dev set corresponding to the test set.",
"At inference time, we use a beam size of 4 with length normalization (Wu et al., 2016) with a weight of 0 .",
"6.",
"We report the results obtained with the transformer model, as they were consistently better than the LSTM one.",
"All the result tables for the LSTM models can be found in the Appendix.",
"The performance of our systems on the synthetic WMT test sets, as measured by detokenized BLEU (Papineni et al., 2002), is summarized in Table",
"4. When the system is trained only on clean data (first row) and tested on noisy data, it unsurprisingly exhibits degraded performance.",
"We observe significant drops in the range of 1 .",
"03 .",
"6 BLEU.",
"The largest drop (more than 3 . 5 BLEU) is observed with nn errors in the source sentence.",
"This is not unreasonable: nouns almost always carry content significant for translation.",
"Especially when translating into Spanish, a noun number change can, and apparently does, also a ect the rest of the sentence significantly, for example, by influencing the conjugation of a subsequent verb.",
"The second-largest drop (more than 3 . 0 BLEU points) is observed in the case of drop errors.",
"This is also to be expected; typos produce out-of-vocabulary (OOV) words, which in the case of BPE are usually segmented to a most likely rarer subword sequence than the original correct word.",
"We find that a training regime that includes both clean and noisy sentences ([ clean+error ) results in better systems across the board. Importantly, these models manage to perform en par with the clean model on the clean test set. Since the original training set is part of the [ clean+error training sets, this behavior is expected. We conclude, thus, that including the full clean dataset during training is important for performance on clean data one cannot just train on noisy data. The [ clean+error ] systems exhibit a notable pattern: their BLEU scores are generally similar to the clean system on all test sets, except for the test set that matches their training set errors (high-lighted in Table 4), where they generally obtain the best performance.",
"The mix all model is our best system on all test sets (except drop ) and on average.",
"Unlike the [ clean+error ] systems, it outperforms the clean model on all noisy test sets and not only on a specific one.",
"On average, using the mix all training set leads to an improvement of 0 .",
"4 BLEU over the clean model and 0 .",
"1 0 .",
"7 BLEU over the [ clean+error ] models.",
"Furthermore, the mix all model exchibits the smallest performance standard deviation of all models, averaging over all test sets.",
"This is another indication that our system is more robust to multiple source-side variations.",
"We further explore this intuition in Section",
"4. On the more realistic JFLEGes dev and test sets, we observe same trends but at a smaller scale, as shown in Table",
"5. Our mix all model generally achieves comparable results when presented with each of the four reference corrections of the test set ( cor X columns).",
"However, when we use the noisy source sentence as input (N o corr column) our mix all model obtains 1 .",
"4 BLEU improvements over the clean model.",
"The di erence between the performance of the models when presented with clean and noisy input is another indicator for robustness.",
"On the JFLEGes test set, the noisy source results in a 3 .",
"1 BLEU point drop for the clean model, while the drop for our mix all model is smaller, at 1 .",
"7 BLEU points.",
"In addition, we experimented with using an automatic error-corrected source as input to our sys-JFLEGes Dev Training Manual correction No Auto cor 0 cor 1 cor 2 cor 3 avg.",
"tem (column A uto corr of Table 5).",
"We used the publicly available JFLEG outputs of the (al-most) state-of-the-art model of Junczys-Dowmunt and Grundkiewicz (2016) as inputs to our NMT system.",
"3 This experiment envisions a pipeline where the noisy source is first automatically corrected and then translated.",
"As expected, this helps the clean model (by + 1 . 1 BLEU), but our mix all training helps even further (by another + 0 . 8 BLEU).",
"Interestingly, the automatic GEC system only helps in the test set, while there are no improvements in the dev set.",
"Naturally, since automatic GEC systems are imperfect, the performance of this pipeline still lags behind translating on clean data.",
"We attempt an in-depth analysis of the impact of the di erent source-side error types on the behavior of our NMT system, when trained on clean data and tested on the artificial noisy data that we created.",
"A rt Errors Table 6 shows the di erence of the BLEU scores obtained on the sentences, broken down by the type of article error that was introduced.",
"The first observation is that in all cases the di erence is negative, meaning that we get higher BLEU scores when testing on clean data.",
"Encouragingly, there is practically no di erence when we substitute a' with an' or an' with a'; the model 3 This model has been recently surpassed by other systems, e.g. (Junczys-Dowmunt et al., 2018), but their outputs are not available online.",
"seems to have learned very similar representations for the two indefinite articles, and as a result such an error has no impact on the produced output.",
"However, we observe larger performance drops when substituting indefinite articles with the definite one and vice versa; since the target language makes the same article distinction as the source language, any article source error is propagated to the produced translation.",
"P rep Errors Due to the large number of prepositions, we cannot present a full analysis of preposition errors, but highlights are shown in Table",
"7. Deleting a correct preposition or inserting a wrong one leads to performance drops of 1 .",
"2 and 0 .",
"8 BLEU points for the clean model, but drops of 0 .",
"4 and 0 .",
"7 for the mix all model.",
"N n and S va Errors We found no significant performance di erence between the di erent nn errors.",
"Incorrectly pluralizing a noun has the same adverse e ect as singularizing it, leading to performance reductions of over 4 .",
"0 and 3 .",
"5 BLEU points respectively.",
"We observe a similar behavior with sva errors: each error type leads to roughly the same performance degradation.",
"The e ect of noise in NMT was recently studied by Khayrallah and Koehn (2018), who explored noisy situations during training due to web-crawled data.",
"This type of noise includes misaligned, mistranslated, or untranslated sentences which, when used during training, significantly degrades the performance of NMT.",
"Unlike our Correct Substituted article article a an the all a 0 2 .",
"work, they primarily focus on a setting where the training set is noisy but the test set is clean.",
"In addition, Heigold et al. (2018) evaluated the robustness of word embeddings against word scrambling noise, and showed that performance in downstream tasks like POS-tagging and MT is especially hurt.",
"Sakaguchi et al. (2017a) studied word scrambling and the Cmabrigde Uin-ervtisy (Cambridge University) e ect , where humans are able to understand the meaning of sentences with scrambled words, performing word recognition (word level spelling correction) with a semi-character RNN system.",
"Focusing only on character-level NMT models, Belinkov and Bisk (2018) showed that they exhibit degraded performance when presented with noisy test examples (both artificial and natural occurring noise).",
"In line with our findings, they also showed that slightly better performance can be achieved by training on data artificially induced with the same kind of noise as the test set.",
"Sperber et al. (2017) proposed a noise-introduction system reminiscent of WER, based on insertions, deletions, and substitutions.",
"An NMT system tested on correct transcriptions achieves a BLEU score of 55 (4 references), but tested on the ASR transcriptions it only achieves a BLEU score of 35",
".",
"7. By introducing similar noise in the training data, they were able to make the NMT system slightly more robust.",
"Interestingly, they found that the optimal amount of noise on the training data is smaller than the amount of noise on the test data.",
"The notion of linguistically plausible corruption is also explored by Li et al. (2017), who created adversarial examples with syntactic and semantic noise (reordering and word substitutions respec-Substitution model BLEU di erence clean mix all in with 6 .",
"tively).",
"When training with these noisy datasets, they obtained better performance on several text classification tasks.",
"Furthermore, in accordance with our results, their best system is the one that combines di erent types of noise.",
"We present a summary of relevant previous work in Table",
"8. Synthetic errors refer to noise introduced according an artificially created distribution, and natural errors refer to actual errorful text produced by humans.",
"As for semi-natural , it refers to either noise introduced according to a distribution learned from data (as in our work), or to errors that are learned from data but introduced according to an artificial distribution (as is part of the work of Belinkov and Bisk (2018)).",
"We consider our work to be complementary to the works of Heigold et al. (2018); Belinkov and Bisk (2018), and Sperber et al. (2017).",
"However, there are several important di erences:",
"1. Belinkov and Bisk (2018) and Sperber et al. (2017) train their NMT systems on fairly small datasets: 235K (Fr-En), 210K (De-En), 122K (Cz-En), and 138K sentences (Es-En) respectively.",
"Even though they use systems like Nematus (Sennrich et al., 2017) or XNMT (Neubig et al., 2018) which generally achieve nearly SOTA results, it is unclear whether their results generalize to larger training data.",
"In contrast, we train our system on almost 2M sentences.",
"2. All three systems introduce somewhat unrealistic amounts of noise in the data.",
"The natural noise of Belinkov and Bisk (2018) consists of word substitutions based on Wikipedia errors or corrected essays (in the Work Errors Noise Types NMT level Languages (Heigold et al., 2018) synthetic character swaps, character flips, word scrambling char, BPE De En (Sperber et al., 2017) synthetic ASR errors word Es En (Belinkov and Bisk, 2018) synthetic character swap, middle scramble, full scramble, keyboard typo char, BPE Fr,De,Cz En semi-natural word substitutions this work semi-natural grammar errors: article, preposition, noun number, verb agreement BPE En Es natural JFLEG corpus Table 8: Previous work on evaluating the e ect of noise in NMT systems.",
"Czech case) but they substitute all possible correct words with their erroneous version, ending up with datasets with more than 40% of the tokens being noisy.",
"For that reason, we refer to it as semi-natural noise in Table",
"8. Meanwhile, Sperber et al. (2017) test on the outputs of an ASR system that has a WER of 41 .",
"3%.",
"For comparison, in the JFLEG datasets, we calculated that only about 3 .",
"5%5% of the tokens are noisy the average Levenshtein distance of a corrected reference and its noisy source is 13 characters.",
"3.",
"The word scrambling noise, albeit interesting, could not be claimed to be applicable to realistic scenarios, especially when applied to all words in a sentence.",
"The solution Belinkov and Bisk (2018) suggested and Sperber et al. (2017) discussed is a characteror spelling-aware model for producing wordor subword-level embeddings.",
"We suspect that such a solution would indeed be appropriate for dealing with typos and other character-level noise, but not for more general grammatical noise.",
"Our method could potentially be combined with GloVe (Pennington et al., 2014) or fastText (Bojanowski et al., 2017) embeddings that can deal with slight spelling variations, but we leave this for future work.",
"On the other side, Grammar Error Correction has been extensively studied, with significant incremental advances made recently by treating GEC as an MT task: among others, Junczys-Dowmunt and Grundkiewicz (2016) used phrased-based MT, Ji et al. (2017) used hybrid character-word neural sequence-to-sequence systems, Sakaguchi et al. (2017b) used reinforcement learning, and Junczys-Dowmunt et al. (2018) combined several techniques with NMT to achieve the current state-of-the-art.",
"Synthetic errors for training GEC systems have also been studied and applied with mixed success (Rozovskaya and Roth, 2010; Rozovskaya et al., 2014; Xie et al., 2016), while more recently Xie et al. (2018) used backtranslation techniques to add synthetic noise for GEC.",
"In this work, we studied the e ect of grammatical errors in NMT.",
"We not only confirmed previous findings, but also expanded on them, showing that realistic human-like noise in the form of specific grammatical errors also leads to degraded performance.",
"We added synthetic errors on the English WMT training, dev, and test data (including dev and test sets for all WMT 18 evaluation pairs), and have released them along with the scripts necessary for reproducing them.",
"We also produced Spanish translations of the JFLEG corpus, so that future NMT systems can be properly evaluated on real noisy data.",
"This material is based upon work generously supported by the National Science Foundation under grants 1464553 and 1761548.",
"We are grateful to the anonymous reviewers for their useful comments."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"abstain",
"method",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"method",
"result",
"method",
"method",
"other",
"other"
] |
[
"Identifying argument components from unstructured texts and predicting the relationships expressed among them are two primary steps of argument mining.",
"The intrinsic complexity of these tasks demands powerful learning models.",
"While pretrained Transformer-based Language Models (LM) have been shown to provide state-of-the-art results over different NLP tasks, the scarcity of manually annotated data and the highly domain-dependent nature of argumentation restrict the capabilities of such models.",
"In this work, we propose a novel transfer learning strategy to overcome these challenges.",
"We utilize argumentation-rich social discussions from the ChangeMyView subreddit as a source of unsupervised, argumentative discourse-aware knowledge by finetuning pretrained LMs on a selectively masked language modeling task.",
"Furthermore, we introduce a novel prompt-based strategy for inter-component relation prediction that compliments our proposed finetuning method while leveraging on the discourse context.",
"Exhaustive experiments show the generalization capability of our method on these two tasks over within-domain as well as out-of-domain datasets, outperforming several existing and employed strong baselines.",
"1 1 Introduction Computational argument mining from texts is the fine-grained process of understanding opinion dynamics.",
"In the most fundamental sense, argument understanding requires the identification of the opinions posed and justifications provided to support or falsify them.",
"Generally, automated argument mining is a multi-stage pipeline identified with three general steps (Lippi and Torroni, 2015; Stab and Gurevych, 2017) separating argumentative spans from non-argumentative ones, classi*Equal contribution 1 We release all code, models and data used at https: //github.com/Jeevesh8/arg_mining Figure 1: Token-level claim (red) and premise (blue) annotation of a discussion thread formed by consecutive posts from two users.",
"fying argument components, and inducing a structure among them (support, attack, etc.).",
"While different argumentation models define different taxonomies for argument components, popular approaches broadly categorize them as claims' and premises' (Stab and Gurevych, 2017; Egawa et al., 2019; Mayer et al., 2020).",
"As these components are not necessarily aligned to sentence-level segments and can be reflected within clausal levels, the task of argument component identification requires a token-level boundary detection of components and component type classification .",
"Context of argumentation in online discussions.",
"Online discussions originating from back-and-forth posts from users reflect a rich interaction of opinion dynamics on large scale.",
"In Figure 1, we show a sample argument component annotation of consecutive posts from two users.",
"The token-level granularity of components ensures that a single sentence may contain multiple components of the same (in 1st post) or different kinds (in 2nd and 4th posts).",
"Moreover, two adjacent spans of texts, even with the same argumentative role, can be defined as two separate components (see the 4th post for example).",
"It is trivial to say that the meaning of any post (as well as its argumentative role) is de-7774 pendent on the context.",
"To be specific, the third post can be identified as argumentative (a premise in this case) only when its predecessor post and its components are taken as the context.",
"Similarly, a certain span of the first post is quoted in the second one signaling a concrete manifestation of dialogic continuity.",
"One may even observe the user-specific argumentation styles: 1st user (author of the first and third posts) usually keeps claims and premises in separate sentences, while the 2nd user prefers to use multi-component, complex sentences.",
"Existing studies on argumentation formalism recognize such continuity and define inter-post component relations (Ghosh et al., 2014; Hidey et al., 2017).",
"However, the previous approaches for automated extraction, classification and relating argumentative components work on individual posts only and define the inter-post discourse in the later stages of relation prediction.",
"This is trivially counter-intuitive for two major reasons:",
"(i) if we consider two text spans from separate comments to be linked by some argumentative relation, then there exists a continuity of discourse between these spans and a model is likely to ben-efit if it decides the boundaries and types of these two components conditioned on that continuous information;",
"(ii) users carry their style of argumentation (simple consecutive sentences vs. long complex ones, usage of particular markers like I think that ' etc.), and if the model is informed about these while observing the complete conversation with back-and-forth posts, it is more likely to extract correct components easily.",
"Scarcity of labeled data.",
"Irrespective of the domain, argument annotation is a resource-intensive process.",
"A few previous studies (Habernal and Gurevych, 2015; Al-Khatib et al., 2016) attempted to exploit a large amount of unlabeled data in a semi-supervised fashion.",
"However, such methods require the components to be defined at sentence-level (and thereby adding redundant spans into the predictions) as they perform some sentence similarity matching to generate pseudo-labels.",
"Pretrained language models like BERT (Devlin et al., 2019) provide a workaround to handle the scarcity of task-specific annotated data.",
"A parameter-intensive model is initially trained in a self-supervised manner on a large bulk of text; this pretraining enables the model to learn general language representation, which is then finetuned on task-specific labeled data.",
"However, the amount of the latter still determines the expressive power of such models (Wang et al., 2020).",
"Present work.",
"Considering these challenges, we formulate a novel transfer learning method using Transformer-based language models.",
"We use large amount of unlabelled discussion threads from Reddit's r/ChangeMyView (CMV) community as the source of argumentative knowledge.",
"Pretrained, Transformer-based language models are finetuned on this dataset using a Masked Language Modelling task.",
"Instead of randomly masking tokens to predict, we select several markers in the text that are shown to signal argumentative discourse in previous works (Chakrabarty et al., 2019; Eckle-Kohler et al., 2015).",
"The language models are then made to predict these markers in the MLM task, thereby learning to relate different components of text according to their role in the argumentation presented.",
"We call this novel finetuning method Selective Masked Language Modeling ( sMLM ).",
"Furthermore, to explore the role of context in argument mining, we use sMLM to finetune a post-level language model based on BERT (De-vlin et al., 2019) and RoBERTa (Liu et al., 2019) and a thread-level language model based on Longformer (Beltagy et al., 2020).",
"We present efficient incorporation of several Reddit-specific structural cues into the Longformer architecture.",
"These finetuned language models are then used for two fundamental components of argument mining: token-level argument component identification ( ACI ) and inter-component relation type prediction ( RTP ).",
"To further utilize the sMLM -based training of the language models, we propose a novel prompt-based approach to predict relations among argument components.",
"We perform exhaustive experiments to explore the efficacy of our proposed methods for argument mining in both in-domain and out-of-domain benchmark datasets: manually annotated Reddit discussions and scientific papers.",
"Our experiments show clear improvements achieved by our methods ( 0 . 59 and 0 . 69 F1 for ACI and RTP, respectively) over several state-of-the-art baselines.",
"2 2 Related Work A general overview of argument mining can be found in the survey articles by Lytos et al. (2019) and Lawrence and Reed (2019).",
"In the current scope, we look into three major areas of research 2 The source codes and datasets have been submitted separately.",
"Argument component detection and classification.",
"Previous studies have sought to address argument boundary detection and component type prediction either as separate, successive tasks in the pipeline (Stab and Gurevych, 2017) or jointly in a single computational pass (Eger et al., 2017).",
"Studies also explored classical machine learning frameworks like SVM-HMM (Habernal and Gurevych, 2017), CRF (Stab and Gurevych, 2017), etc. with rich manual feature engineering.",
"With the development of neural network-based algorithms, BiLSTM-CNN-CRF models emerged as a popular choice (Schulz et al., 2018; Eger et al., 2017; Chern-odub et al., 2019).",
"Very recently, large pretrained language models like BERT have also been utilized (Mayer et al., 2020; Chakrabarty et al., 2019).",
"representation.",
"Similar to our sMLM finetuning strategy, Nie et al. (2019) proposed an unsupervised sentence representation learning strategy where a neural model is trained to predict the appropriate discourse marker connecting two input sentences.",
"Using a set of 15 markers, they showed that such a finetuning can help models in downstream NLI tasks.",
"Chakrabarty et al. (2019) used a distant supervision approach using a single marker In my honest opinion to finetune BERT on a large collection of ChangeMyView threads and then performed argument component classification.",
"However, they did not deal with the component identification task and performed classification of already identified components at sentence-level.",
"Opitz and Frank (2019) suggested that while identifying the relation between two components, these models often rely more on the context and not the content of the components; discourse markers present within the context provide strong signals for the relation prediction task.",
"Argument mining over Reddit.",
"A few recent studies explored argumentation over Reddit.",
"Hidey et al. (2017) proposed a two-tier annotation scheme of claim-premise components and their relations, defining five different semantic roles of premises, using ChangeMyView discussion data.",
"Egawa et al. (2019) also analyzed semantic roles of argument components over ChangeMyView threads; however, their primary focus remained on the dynamics of persuasion, similar to Dutta et al. (2020).",
"Though pretrained language models are developed to overcome the problem of small annotated data on different language processing tasks, they still require task-specific finetuning for better results (Wang et al., 2020).",
"In the specific domain of argument mining, annotated data is scarce, and attempting to finetune a massive language model with very small training data comes with the risk of overfitting.",
"Moreover, different datasets follow different strategies for annotation.",
"We seek to devise a novel transfer learning strategy where a given Transformer-based pretrained language model is directed to focus on argumentative discourse using large-scale, unlabelled data.",
"We choose the ChangeMyView (CMV) community as the source of this transfer for two specific reasons:",
"(i) it provides us with a large, readily available resource of interactions strictly focused on debates around versatile topics, and",
"(ii) discussions in CMV contain a mixture of dialogic continuity over successive turns along with elaborate argumentation presented in a single turn.",
"We hypothesize that such a versatile combination of discourse can make the language model more generalizable over dialogic as well as monologic argument mining tasks.",
"Discussion forums like Reddit facilitate users to begin a discussion with an initial post ( submissions , in the case of Reddit) and then comments under that post to instantiate a discussion.",
"Users may post a comment in reply to the submission as well as the already posted comments.",
"A typical discussion over Reddit forms a tree-like structure rooted at the submission.",
"Any path from the root to a leaf comment can be perceived as an independent dialogic discourse among two or multiple users; henceforth, we will call such paths as threads .",
"Formally, a thread T is an ordered sequence { ( u i , P j ) | i, j N , u i UT } , where P j is a text object (a submission when j = 1 and a comment, otherwise), u i is the author of P j , and UT is the set of all unique users engaged in the thread T .",
"For brevity, we indicate P j as a post in general.",
"The dialogic nature of discussions naturally assumes this context to be the whole thread T .",
"However, if we consider any two successive posts P j and P j +1 in T , they manifest the interests and styles of two separate participants along with the 7776 discourse continuity of the overall thread, which must be distinguished within the definition of the context.",
"To take into account the complete dialogic context of the thread, we represent a thread as a single contiguous sequence of tokens with each post P j from user u i being preceded by a special token [ USER i ] with i { 0 , , | UT | 1 } , to encode which post is written by which user.",
"Reddit also offers users a quoting facility: users can quote a segment from the previous post (one to which they are replying) within their posts and emphasize that their opinions are specifically focused on that segment.",
"We delimit such quoted segments with special tokens [STARTQ] and [ENDQ] in the quoting post to demarcate the dialogic discourse.",
"Chakrabarty et al. (2019) also used quoting as signals for following premises.",
"Additionally, we replace URLs with the special token [URL] to inform the presence of external references that often act as justifications of subjective opinions.",
"Masked Language Modeling is a common strategy of training large language models; a certain frac-tion of the input tokens are masked and the model is trained to predict them, consequently learning a generalized language representation.",
"Instead of randomly selecting tokens to mask, we select specific markers that might signal argumentative discourse.",
"While the model is trained to predict these markers, it learns the roles and relationships of the text spans preceding and following them.",
"Following the work by Eckle-Kohler et al. (2015), we select multiple markers signaling Opinion , Causation , Rebuttal , Fact presentation , Assumption , Summary , and some additional words, which serve multiple purposes depending on the context.",
"As shown in Figure 2, to predict the marker I think in the first post, the model needs to learn that the following text span that most Jewish people expresses the user's opinion on the topic.",
"Similarly, in the second post, for the input segment (cid:104) span 0 (cid:105) So (cid:104) span 1 (cid:105) if (cid:104) span 2 (cid:105) , to correctly predict the masked markers as So and if , a language model needs to learn the fact that the truth value of the statement expressed in (cid:104) span 1 (cid:105) is conditioned upon (cid:104) span 2 (cid:105) , and this dependence is inferred from (cid:104) span 0 (cid:105) .",
"a natural segmentation of the discourse context into comment/post-level vs. thread-level.",
"We seek to exFigure 2: Example of selective masking in a sample CMV thread; sMLM finetuning requires a pretrained language model to predict the masked (highlighted in red) tokens (or all the subwords constituting them) based on the context.",
"plore the effect of the context size at different modules of argument mining (i.e., argument component detection and relation type prediction).",
"For this, we use our proposed selective MLM approach to finetune a pretrained RoBERTa/BERT-base model in the comment/post-level regime, and train Longformer models in the thread-level regime.",
"Longformer uses sparse, global attention (i.e., some tokens attend to all the tokens in the input sequence) to capture the long-range dependencies.",
"We use the special tokens indicating the users (c.f. Section 3.1) as the globally attending tokens for Longformer.",
"After finetuning the language model on the selective MLM task, we proceed to our first task of identifying argument components in threads.",
"Since the detection is done in token-level, we use the standard BIO tagging scheme: for a component class (cid:104) type (cid:105) , the beginning and the continuation of that component are marked as B(cid:104) type (cid:105) and I(cid:104) type (cid:105) , respectively, while any non-component token is labeled as O .",
"Therefore, if one uses the usual claim-premise model of argumentation, the label set becomes { B -claim , I -claim , B -premise , I -premise , O } .",
"While identifying the relation between two given related argument components, it is important to understand the role of those text segments within the context of the discourse.",
"Furthermore, we seek 7777 USER-1 CMV: I feel skill is largely determined by experience.",
"to utilize the knowledge acquired by a language model in the sMLM finetuning step as well.",
"Keeping these two factors in mind, we propose a novel, prompt-based identification of argument components.",
"This approach is inspired by recent popularity of prompt-based fine-tuning methods in the community (Liu et al., 2021).",
"At its core, these methods involve directly prompting the model for the required knowledge, rather than fine-tuning [CLS] or mean-pooled embeddings.",
"For example, to directly use a model to summarise a text, we can append \" TL;DR: \" to the text (Radford et al., 2019), and let the model generate tokens following it; we expect the next few tokens to constitute a summary of all the previous text.",
"Since the underlying Transformer LMs have been trained using some Cloze task (i.e., filling the blanks from the context) previously, it is more natural for it to predict a token given a context.",
"However, there are two challenges:",
"(i) one needs to design a suitable prompt, and",
"(ii) in case of classification tasks like RTP , it is challenging to perform Answer Mapping , i.e., to map all the possible tokens to some particular relation class.",
"To tackle these challenges, we design our proposed relation prediction method in the following manner (see Figure 3) For each pair of related components, say, component-1 and component-2, said by user-i and user-j, respectively, where component-2 refers to component-1, we append to the thread, a prompt with the template: \"[USER-i] said <component1> [MASK] [MASK] [MASK] [USER-j] said <com-ponent2>\" (we used three mask tokens since that is the upper bound of the marker size used for sMLM ).",
"We expect that the words predicted at the masked position such as because, in spite of what etc. would be indicative of the relation of the two components.",
"For the example thread shown in Figure 3, in a zero-shot prediction, sMLM -finetuned Longformer predicts \"I\", \"disagree\", \"I\" at the three masked positions.",
"This disagree\" clearly corresponds to the undercutter relation between the two components. In fact, the base Longformer without sMLM finetuning predicts a space, a full stop and another space at the three masked positions. This additionally proves the efficacy of the sMLM finetuning. Instead of engineering a token-to-relation type mapping, the predicted token embeddings at the masked positions are concatenated and fed into a linear layer to predict probabilities over the set of relation types. This way, we allow the model to learn and map from the token space to the relation type space. 4 Experiment Setup 4.1 Dataset For the sMLM finetuning, we use the subset of Winning Args (ChangeMyView) (CMV) dataset (Tan et al., 2016) provided in ConvoKit (Chang et al., 2020). We use 99% of this data for training, and reserve 1% for checking accuracy on the sMLM task. The entire data consists of 3 , 051 submissions and 293 , 297 comments posted in the ChangeMyView subreddit by 34 , 911 unique users. We extract the threads from these posts following the reply structure and end up with 120 , 031 threads in total. To train and evaluate all the models for ACI and RTP , we use the manually annotated Reddit discussion threads provided by Hidey et al. (2017) and further extended by Chakrabarty et al. (2019) for training and evaluation. The extended version of this dataset contains 113 CMV discussion threads manually annotated with argument components following the standard claim-premise model. Additionally, we use the argument annotated Dr. Inventor Corpus (Lauscher et al., 2018) which consists of 40 scientific publications from the field of computer graphics. There are three types of argumentative components here: Background Claims 7778 ( BC ), consisting of claims from previous works in the paper, Own Claim ( OC ) consisting of the new claims made by the authors of the paper, and Data . The Data class mainly consists of citations, references to figures, etc. This dataset has three relation types, viz., support , contradicts and semantically same . Additional dataset details are provided in Appendix A. 4.2 Baseline methods For ACI , we consider two state-of-the-art token-level argument identification models: (cid:3) LSTM-MTL. Eger et al. (2017) proposed an end-to-end argument mining architecture which uses a BiLSTM-CNN-CRF sequence tagger to jointly learn component detection, classification, and relation parsing tasks. (cid:3) LSTM-MData. Schulz et al. (2018) proposed a BiLSTM-CNN-CRF based model which aims to generalize argument mining using multi-domain training data in an MTL setting. We augment our data with their original set of 6 datasets. For RTP , as no prior work exists to the best of our knowledge, we consider our own baselines. First, we consider (cid:3) Context-less RoBERTa , a pretrained RoBERTa model, which takes the two components with a [SEP] token between them and predicts the relation using [CLS] token's embedding. It is context-less as only two components without the surrounding context are used to predict the label. Second, we consider (cid:3) Contextless QR-Bert. This uses the same fine-tuning methodology as Contextless RoBERTa and is initialized from the pre-trained Quote-Response relation prediction model of Chakrabarty et al. (2019). For RTP , we try another traditional strategy, instead of prompting, for our models: (cid:3) Mean Pooling . 
The mean pooling approach first finds an embedding of each of the two related components by averaging the Transformer embeddings at all token positions within a component. These embeddings are concatenated and passed into a linear layer for predicting the type of relation between the two related components. To further evaluate the efficacy of our sMLM training strategy, we finetune a pretrained Longformer on the Winning Args Corpus, with the usual MLM, i.e., masking 15% of tokens at random, instead of selective masking. We call this the domain-adapted Longformer, DA-LF . Model Claim Premise F1 Acc P R F1 P R F1 sMLM-LF 0 . 49 0 . 57 0 . 53 0 . 61 0 . 67 0 . 64 0 . 59 0 . 74 Base-LF 0 . 50 0 . 50 0 . 50 0 . 58 0 . 64 0 . 61 0 . 56 0 . 74 sMLM-RoBERTa 0 . 49 0 . 60 0 . 53 0 . 55 0 . 57 0 . 55 0 . 55 0 . 72 RoBERTa 0 . 49 0 . 55 0 . 51 0 . 56 0 . 62 0 . 59 0 . 56 0 . 73 BERT 0 . 21 0 . 25 0 . 23 0 . 19 0 . 26 0 . 22 0 . 22 0 . 62 LSTM-MData 0 . 19 0 . 18 0 . 18 0 . 26 0 . 23 0 . 24 0 . 22 0 . 54 LSTM-MTL 0 . 19 0 . 18 0 . 18 0 . 24 0 . 25 0 . 24 0 . 21 Table 1: Performance of different models on ACI -task on CMV Modes dataset (P: Precision, R: Recall, F1: F1 score). The F1 and Acc. in the rightmost columns denote the micro-averaged F1 score over claims and premises and the token level accuracy of predicting argumentative tags, respectively. Model BC OC Data P R F1 P R F1 P R F1 sMLM-LF 0 . 45 0 . 52 0 . 48 0 . 39 0 . 45 0 . 42 0 . 50 0 . 48 0 . 48 Base-LF 0 . 49 0 . 51 0 . 50 0 . 38 0 . 50 0 . 43 0 . 44 0 . 44 0 . 44 Table 2: Results on Dr. Inventor dataset for argument component identification using sMLM -finetuned and base Longformer models. 4.3 Implementation details We use the pretrained base version of Longformer ( 12 layers, 768 model size). The size of the local attention window was set to the default 512 . The maximum sequence length was fixed at 4096 . Following the suggestions in Reimers and Gurevych (2017), we repeat our experiments on the 5 different data splits. The scores reported in the tables for various models correspond to the average value of the mean of 5 runs, over the last 5 epochs for that particular metric. We provide additional implementation details in Appendix B. 5 Evaluation We evaluate the models based on precision, recall, and F1 scores for predicting claims and premises. For a more rigorous setting, we use exact match of the whole span between gold and predicted labels, i.e., if the gold label is [ O , B -claim, I -claim, I -claim, I -claim, O ] then only the predictions [ O , B -claim, I -claim, I -claim, I -claim, O ], or [ O , I claim, I -claim, I -claim, I -claim, O ] can be considered as true positives. We use the popular SeqE-val (Nakayama, 2018) framework. 5.1 Argument component identification Table 1 shows the results for argument component identification on the CMV Modes dataset. We compare models based on their micro-averaged F1 over the two component types (claims, premises), and token level accuracy. Firstly, we observe huge difference in token-level accuracy scores as we move from the existing best performing LSTM based methods with accuracy of 0.54 to BERT, 7779 Model Support Agreement Direct Attack Undercutter Partial OverallF1 P R F1 P R F1 P R F1 P R F1 P R F1 80-20 split sMLM-LF-prompt 0 . 88 0 . 93 0 . 91 0 . 51 0 . 46 0 . 48 0 . 32 0 . 35 0 . 33 0 . 43 0 . 51 0 . 46 0 . 28 0 . 12 0 . 16 0 . 67 DA-LF-prompt 0 . 78 0 . 84 0 . 81 0 . 44 0 . 45 0 . 43 0 . 22 0 . 19 0 . 19 0 . 30 0 . 32 0 . 30 0 . 27 0 . 11 0 . 15 0 . 61 sMLM-LF-mp 0 . 73 0 . 
87 0 . 79 0 . 49 0 . 36 0 . 38 0 . 32 0 . 24 0 . 26 0 . 32 0 . 33 0 . 41 0 . 35 0 . 21 0 . 25 0 . 59 Base-LF-prompt 0 . 79 0 . 88 0 . 84 0 . 48 0 . 44 0 . 46 0 . 30 0 . 23 0 . 25 0 . 31 0 . 39 0 . 34 0 . 37 0 . 12 0 . 17 0 . 62 Base-LF-mp 0 . 71 0 . 87 0 . 78 0 . 47 0 . 33 0 . 37 0 . 24 0 . 17 0 . 18 0 . 27 0 . 26 0 . 26 0 . 35 0 . 20 0 . 24 0 . 56 RoBERTa 0 . 78 0 . 83 0 . 80 0 . 46 0 . 34 0 . 37 0 . 29 0 . 29 0 . 28 0 . 15 0 . 24 0 . 18 0 . 36 0 . 15 0 . 20 0 . 60 QR-Bert 0 . 76 0 . 85 0 . 80 0 . 46 0 . 27 0 . 34 0 . 21 0 . 13 0 . 16 0 . 19 0 . 25 0 . 20 0 . 32 0 . 16 0 . 20 0 . 59 50-50 split sMLM-LF-prompt 0 . 87 0 . 92 0 . 89 0 . 53 0 . 47 0 . 49 0 . 30 0 . 28 0 . 28 0 . 45 0 . 58 0 . 50 0 . 35 0 . 09 0 . 14 0 . 69 DA-LF-prompt 0 . 85 0 . 89 0 . 87 0 . 47 0 . 47 0 . 44 0 . 32 0 . 20 0 . 24 0 . 39 0 . 55 0 . 44 0 . 32 0 . 13 0 . 16 0 . 66 sMLM-LF-mp 0 . 70 0 . 90 0 . 79 0 . 426 0 . 22 0 . 28 0 . 28 0 . 20 0 . 22 0 . 32 0 . 26 0 . 28 0 . 38 0 . 18 0 . 24 0 . 56 Base-LF-prompt 0 . 78 0 . 87 0 . 82 0 . 49 0 . 44 0 . 46 0 . 30 0 . 19 0 . 22 0 . 32 0 . 40 0 . 35 0 . 32 0 . 13 0 . 18 0 . 62 Base-LF-mp 0 . 73 0 . 86 0 . 79 0 . 36 0 . 21 0 . 26 0 . 25 0 . 18 0 . 21 0 . 23 0 . 28 0 . 25 0 . 43 0 . 18 0 . 25 0 . 56 RoBERTa 0 . 72 0 . 83 0 . 77 0 . 47 0 . 25 0 . 31 0 . 22 0 . 21 0 . 21 0 . 13 0 . 16 0 . 14 0 . 17 0 . 08 0 . 10 0 . 55 QR-Bert 0 . 72 0 . 84 0 . 77 0 . 47 0 . 28 0 . 34 0 . 19 0 . 13 0 . 14 0 . 13 0 . 18 0 . 15 0 . 22 0 . 07 0 . 09 0 . 54 Table 3: Relation type wise Precision (P), Recall (R) and F1 score on the CMV Modes dataset for various models. The highest scores in every column are in bold . The suffix \"mp\" and \"prompt\" indicate that the model was trained using Mean Pooling and Prompting strategies, respectively.",
"having an accuracy of 0.62.",
"Such a difference is expected since pretrained language models like BERT provide a head-start in case of small datasets like CMV Modes.",
"Though the token-level accuracy increases, the micro-averaged F1 for exact component match does not increase much till we start using RoBERTa.",
"Since pretrained Longformer was trained originally from the RoBERTa checkpoint (Beltagy et al., 2020), we can conclude that RoBERTa provides significant performance gain compared to BERT, owing to its larger training data and protocol.",
"Longformer trained with our proposed sMLM finetuning clearly outperforms the rest of the models in terms of overall F1 score for component identification.",
"However, the effects of selective MLM is more prominant in case of thread-level context (i.e, Longformer) compared to comment-level context (i.e, RoBERTa).",
"We can observe that context plays different roles for different component types: while sMLM finetuned Longformer and RoBERTa perform comparably for claim detection, in case of premises, the access to the complete context helps the Longformer to perform better.",
"We can observe a similar trend in ACI -task on Dr. Inventor dataset (see Table 2).",
"While Base Longformer performs comparable to its sMLM counterpart to detect Background and Own Claims, sMLM provides a 4 point improvement in F1 score for the Data class which plays a similar role of premises towards the claims.",
"Intuitively, textual segments expressing claims contain independent signals of opinion that is less dependent on the context; pretrained language models might be able to decipher their roles without additional information either from the thread-level context (in case of CMV Modes, specifically) or enhanced relation-awareness induced by the sMLM finetuning.",
"However, identifying segments that serve the role of premises to a claim intrinsically depends on the claims as well as the discourse expressed in a larger context.",
"In Table 3, we present the results for relation type identification on the CMV Modes dataset.",
"We again compare models based on their micro-averaged F1 over all relation types.",
"Firstly, we consider the traditional mean pooling approach.",
"Within this approach, we observe a 3 point improvement for the sMLM pre-trained Longformer on the 80-20 split, while maintaining same performance on the 50-50 split.",
"Furthermore, the prompt based methods consistently outperform the mean pooling one, irrespective of whether we use base Longformer or sMLM pretrained one.",
"Within the prompting approach, we also observe increased and consistent improvement in performance due to sMLM pretraining on both 80-20 and 50-50 splits.",
"The gap in micro-F1 scores between sMLM and base Longformer for 80-20 split increases from 3 points in mean pooling to 5 points in prompting (0 to 7 points improvements for 50-50 split).",
"As we can observe in Figure 4, sMLM finetuned Longformer admits a very narrow margin of variation on random splits, compared to the base Longformer.",
"Furthermore, sMLM finetuning consistently outperforms domain-adapted finetuning ( DA-LF ), indicating the unique knowledge transfer achieved by the former.",
"We hypothesise that this approach works better as this regime models our final RTP task, as a task that is more natural (in a sense similar to the ( , B ) natural tasks of Saunshi et al. (2021)) for a Longformer model pre-trained with sMLM .",
"Intuitively, the model learns to predict discourse markers at masked positions during sMLM pre-training and during fine-tuning on downstream tasks too, the model will naturally try to predict discourse markers at the masked positions.",
"The discourse markers occurring at the masked positions are directly related to the relation between the two components.",
"For instance, when there is a but between two components, we know that the two components present opposing views more or less.",
"Here again, we observe that sMLM does not hurt the base performance under domain shift (Table 4).",
"We observe that the RoBERTa model performs worse than Base-LF-prompt, which incorporates the entire context of the thread.",
"Also the effect worsens with reduced training set size, and RoBERTa model performs worse by 7 points in terms of micro-F1 for the 50-50 split.",
"Furthermore, we observe that the mean pooling strategy, even though it uses context, performs worse (by 4 points Model Claim Premise F1 P R F1 P R F1 base-LF-near 0 .",
"on 80-20 split) than the context-less RoBERTa.",
"Though, our sMLM pretrained model, manages to perform at par with the context-less RoBERTa with the mean pooling strategy.",
"This means, that the using the right fine-tuning method is essential.",
"Extra context can be utilised fully in longformer, only when pre-training and fine-tuning tasks are nicely aligned.",
"Following the analyses presented by Opitz and Frank (2019), we investigate whether the pres-ence/absence of the markers used in the sMLM step within the vicinity of the components play any role in the ACI or RTP performances.",
"Since the relation type among component-pairs that reside distant from each other are less likely to be inferred by the presence of markers in the context, we analyse the percentage of wrong predictions as we vary the distance between two related components, in Figure",
"5. While error rate does vary proportionally to the distance, we observe that sMLM-LF consistently yields lower percentage of wrong predictions as we vary the distance between the related components compared to base Longformer.",
"This clearly indicates the superior capability induced by the sMLM finetuning to decipher the relationship among components not linked by direct context (i.e., not within a sentence or a single comment).",
"For the ACI task, however, we observe that the absence of markers in the vicinity of the components actually enables better identification, both in case of sMLM finetuned and pretrained Longformer (see Table 5).",
"We presented the results for two important tasks in the argument mining pipeline, viz., ACI and RTP .",
"The experiments clearly elucidated the importance of alignment between the downstream and pre-trainig tasks, and the effect of various ways of modelling the tasks.",
"The importance of entire thread's context in discussion forums, as well 7781 as how to incorporate that into transformer-based models fruitfully has also been made clear.",
"The authors would like to thank Chris Hidey and Smaranda Muresan, for clarifications providing regarding their work.",
"T. Chakraborty would like to acknowledge the support of Ramanujan Fellowship, CAI, IIIT-Delhi and ihub-Anubhuti-iiitd Foundation set up under the NM-ICPS scheme of the Department of Science and Technology, India."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"Discourse representation structures (DRSs) are scoped semantic representations for texts of arbitrary length.",
"Evaluation of the accuracy of predicted DRSs plays a key role in developing semantic parsers and improving their performance.",
"DRSs are typically visualized as nested boxes, in a way that is not straightforward to process automatically.",
"COUNTER , an evaluation algorithm for DRSs, transforms them to clauses and measures clause overlap by searching for variable mappings between two DRSs.",
"Unfortunately, COUNTER is computationally costly (with respect to memory and CPU time) and does not scale with longer texts.",
"We introduce DSCORER , an efficient new metric which converts box-style DRSs to graphs and then measures the overlap of n -grams in the graphs.",
"Experiments show that DSCORER computes accuracy scores that correlate with scores from COUNTER at a fraction of the time.",
"Discourse Representation Theory (DRT) is a popular theory of meaning representation (Kamp, 1981; Kamp and Reyle, 2013; Asher, 1993; Asher et al., 2003) designed to account for a variety of linguistic phenomena within and across sentences.",
"The basic meaning-carrying units in DRT are Discourse Representation Structures (DRSs).",
"They consist of discourse referents (e.g., x 1 , x 2 ) representing entities in the discourse and conditions (e.g., male.n.",
"02( x 1 ) , Agent ( e 1 , x 1 ) ) representing information about discourse referents.",
"Every variable and condition are bounded by a box label (e.g., b 1 ) which implies that the variable or condition are interpreted in that box.",
"DRSs are constructed recursively.",
"An example of a DRS in box-style notation is shown in Figure",
"1(a).",
"representations that go beyond individual sentences.",
"Despite the large amount of recently developed DRS parsing models (van Noord et al., 2018b; van Noord, 2019; Evang, 2019; Liu et al., 2019b; Fan-cellu et al., 2019; Le et al., 2019), the automatic evaluation of DRSs is not straightforward due to the non-standard DRS format shown in Figure",
"1(a).",
"It is neither a tree (although a DRS-to-tree conversion exists; see Liu et al. 2018, 2019a for details) nor a graph.",
"Evaluation so far relied on COUNTER (van Noord et al., 2018a) which converts DRSs to clauses shown in Figure",
"1(b).",
"Given two DRSs with n and m ( n m ) variables each, COUNTER has to consider n !",
"( n m )! possible variable mappings in order to find an optimal one for evaluation.",
"The problem of finding this alignment is NP-complete, similar to other metrics such as SMATCH (Cai and Knight, 2013a) for Abstract Meaning Representation.",
"COUNTER uses a greedy hill-climbing algorithm to obtain one-to-one variable mappings, and then computes precision, recall, and F1 scores according to the overlap of clauses between two DRSs.",
"To get around the problem of search errors, the hill-climbing search implementation applies several random restarts.",
"This incurs unacceptable runtime, especially when evaluating document-level DRSs with a large number of variables.",
"Another problem with the current evaluation is that COUNTER only considers local clauses without taking larger window sizes into account.",
"For example, it considers b 4 sing e 2 and b 3 NOT b 4 as separate semantic units.",
"However, it would also make sense to assess b 3 NOT b 4 sing e 2 as a whole without breaking it down into smaller parts.",
"By considering higher-order chains, it is possible to observe more global differences in DRSs which are important when assessing entire documents.",
"In order to address the above issues, we propose DSCORER , a highly efficient metric for the evaluation of DRS parsing on texts of arbitrary length.",
"DSCORER converts DRSs (predicted and gold) to graphs from which it extracts n -grams, and then computes precision, recall and F1 scores between them.",
"The algorithm operates over n -grams in a fashion similar to BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), which are metrics widely used for evaluating the output of machine translation and summarization systems.",
"While BLEU only calculates precision with a brevity penalty (it is not straightforward to define recall given the wide range of possible translations for a given in-put), ROUGE is a recall-oriented metric since the summary length is typically constrained by a pre-specified budget.",
"1 However, in DRS parsing, there is a single correct semantic representation (gold-standard reference) and no limit on the maximum size of DRSs.",
"Our proposed metric, DSCORER , converts box-style DRSs to a graph format used for evaluation and computes F1 with high efficiency (7,000 times faster compared to COUNTER ).",
"We release our code, implementing the metric, at https: //github.com/LeonCrashCode/DRSScorer .",
"The proposed metric converts two box-style DRSs into graphs, extracts n -grams from these graphs, and then computes precision, recall, and F1 score based on the n -gram overlap.",
"Following the work of van Noord et al. (2018a), box-style DRSs can be converted to clauses as shown in Figure",
"1(b).",
"For example, box b 1 is in a contrast relationship to box b 4 within box b 0 which corresponds to the clause b 0 CONTRAST b 1 b 4 ; variable b 2 : x 1 is converted to clause b 2 REF x 1 , and the condition b 1 : t 1 < now is converted to b 1 TPR t 1 now.",
"2 We now explain how we convert DRSs to graphs.",
"There are two types of clauses depending on the number of arguments: 2-argument clauses (e.g., b 2 male.n.02 x 1 ) and 3-argument ones (e.g., b 1 Agent e 1 x 1 ).",
"The two types of clauses can be formatted as node edge node and node edge node edge node , respectively.",
"For example, clause b 2 male.n.02 x 1 is rendered as 1 See https://github.com/tensorflow/ tensor2tensor for computing ROUGE F1.",
"2 REF and TPR are operators abbreviating referent and temporally precedes, respectively; see https://pmb.",
"let.rug.nl/drs.php for more detail.",
"b 2 male.n.",
"02 x 1 , and clause b 1 Agent e 1 x 1 as b 1 Agent A 1 e 1 Agent A 2 x 1 .",
"Same nodes are further merged to a single node.",
"For example, x 1 nodes in b 2 male.n.",
"02 x 1 and e 1 Agent A 2 x 1 are merged to a single node x 1 .",
"The induced graph is directed and yields the chain b 1 Agent A 1 e 1 Agent A 2 x 1 .",
"In order to capture interactions between chains, (e.g., chain b 2 male.n. 02 x 1 , assigns x 1 as a predicate male.n.02 but x 1 is also an agent), we make edges bidirectional (red in Figure",
"1(c)) if they do not connect the two b nodes.",
"Next, we rewrite the nodes, keeping their type 3 (e.g., B , X , E , S , P , and T ) but not their indices and the resulting graph is shown in Figure",
"1(c).",
"In addition to being typed, variables can be distinguished by their neighboring nodes and connecting edges.",
"For example, the two E nodes are different.",
"One is on the path B play.v.",
"03 E Theme A 2 X piano.n.",
"01 B showing that the Theme of the predicate play is piano , and the other is on the path B sing.v.",
"01 E Agent A 2 X female.n.",
"02 B showing that the Agent of the predicate sing is female .",
"To compare two graphs, we compute the overlap between extracted paths instead of searching for best node mappings, which saves computational resources (i.e., CPU memory and time).",
"An n -gram in our case is an Euler path 4 on a graph with n edges.",
"For example, B Theme A 1 E is a 1-gram as it contains a single edge, B Theme A 1 E Theme A 2 X piano.n.",
"01 B is a 3-gram since it has three edges, and a single node is a 0-gram.",
"We extract the n -grams for each node in a graph.",
"Due to the high sparsity of graphs typical for DRSs, the number of n -grams does not explode as the size of graphs increases, | G | = | N | + | E | , where | N | and | E | are the number of nodes and edges in graph G , respectively.",
"Given the n -grams of predicted and gold DRS graphs, we compute precision p k and recall r k as: p k = | k -grams pred k -grams gold | | k -grams pred | (1) r k = | k -grams pred k -grams gold | | k -grams gold | (2) where k -grams pred and k -grams gold are k -grams on predicted and gold DRS graphs, respectively, and f k = 2 p k r k p k + r k , where p 0 = r 0 = f 0 = min ( | N pred | , | N gold | ) max ( | N pred | , | N gold | ) .",
"DSCORER calculates precision, recall, and F1 as: DSCORER n F = exp (cid:32) n (cid:88) k =1 w k log F k (cid:33) (3) 3 B refers to box labels, X to entities, E to events, S refers to states, P to propositions, and T to time.",
"4 An Euler path is a path that visits every edge of a graph exactly once (allowing for revisiting nodes).",
"where w k is a fixed weight for k -gram ( 0 k n ) counts, and F { p, r, f } .",
"In our experiments, we investigate the correlation between DSCORER and COUNTER , and the efficiency of the two metrics.",
"We present results on two datasets, namely the Groningen Meaning Bank (GMB; Bos et al. 2017) and the Parallel Meaning Bank (PMB; Abzianidze et al. 2017).",
"We compare two published systems on the GMB: DRTS-sent which is a sentence-level parser (Liu et al., 2018) and DRTS-doc which is a document-level parser (Liu et al., 2019a).",
"On the PMB, we compare seven systems: Boxer, a CCG-based parser (Bos, 2015), AMR2DRS, a rule-based parser that converts AMRs to DRSs, SIM-SPAR giving the DRS in the training set most similar to the current DRS, SPAR giving a fixed DRS for each sentence, seq2seq-char, a character-based sequence-to-sequence clause parser (van Noord et al., 2018b), seq2seq-word, a word-based sequence-to-sequence clause parser, and a transformer-based clause parser (Liu et al., 2019b).",
"COUNTER takes 100 hill-climbing restarts to search for the best variable mappings on PMB and 10 restarts on GMB.",
"Both DSCORER and COUNTER are computed on one CPU (2.10GHz).",
"The weight w 0 is set to 0.1 and the weights w k ( 1 k n ) in DSCORER are set to 0 .",
"9 /n , where n = 4 .",
"We analyze the number of n -grams extracted by DSCORER ; we also report the values obtained by",
"average number of nodes in a graph.",
"Number of n -grams Figure",
"2(a) shows the number of n -grams across graphs in GMB where the largest size of 4-grams extracted on one graph is 1 .",
"47 10 6 .",
"Figure",
"2(b) shows the number of n -grams across graphs in PMB where the largest size of 4-grams extracted on one graph is 2 .",
"27 10 3 .",
"The number of n -grams will increase exponentially with n or as the size of the graph increases.",
"Nevertheless, the number of 4-grams remains manageable.",
"We set k = 4 for computing our metric (see Equations (1) and (2)) as 4-grams are detailed enough to capture differences between meaning representations whilst avoiding overly strict matching (which would render the similarity between predicted and gold DRSs unncessarily low and not very useful).",
"Metric Values Table 1 shows the various scores assigned by DSCORER and COUNTER to the different systems.",
"We observe similar trends for both metrics; DSCORER penalizes more harshly SPAR and SIM-SPAR, which output random DRSs without any parsing algorithm.",
"Generally speaking, the two metrics are highly correlated; across systems and datatasets, Pearson's correlation coeffi-cient r is 0.93 on 1-grams, 0.94 on 2-grams, 0.91 on 3-grams, and 0.88 on 4-grams, with 2-grams being most correlated.",
"This is not surprising, 2-grams 0 0 .",
"in DSCORER are most similar to COUNTER which only considers predicates with at most two arguments.",
"Figure 3 shows the 4-gram correlation between COUNTER and DSCORER .",
"We found most points are around the curve of y = x 3 , which means that considering high-order grams renders the two metrics less similar, but nevertheless allows to more faithfully capture similarities or discrepancies between DRSs.",
"Efficiency Table 2 shows the average run-time for COUNTER and DSCORER on a pair of DRSs.",
"Both metrics have similar run-times on PMB which mostly consists of small graphs.",
"However, in GMB, which consists of larger graphs with many nodes, the run-time of COUNTER explodes (more than 4 hours per graph), while DSCORER evaluates DRSs within an acceptable time frame (2.35 seconds per graph).",
"In GMB-doc, DSCORER runs seven thousand times faster than COUNTER , showing it is very efficient at comparing large graphs.",
"We further conducted a case study in order to analyze what the two metrics measure.",
"Figure 4 shows two different sentences in their clause-style DRS format used by COUNTER and graph-style DRS format used by DSCORER .",
"Note that the two sentences have totally different meanings (dis-tinguished using various meaning constructs in the corresponding DRSs).",
"Using COUNTER to compare the two sentences yields an F1 of 47.06, which drops to 16.11 when employing DSCORER on 4-grams.",
"Note that DSCORER on 1-grams obtains an F1 of 46.42 which is close to COUNTER .",
"(marked as red in Figure 4), which might inflate the similarity between two sentences without actually measuring their core meaning.",
"For example, the common relation b 3 Time e 1 t 1 is matched to b 2 Time e 1 t 1 without considering what e 1 and t 1 are.",
"Instead, DSCORER aims to find matches for paths B Time A 1 e 1 Time A 2 t 1 and B smile.v.",
"01 e 1 Time A 2 t 1 as well.",
"And the mismatch of the second path reduces the final score.",
"The metric SEMBLEU (Song and Gildea, 2019) is most closely related to ours.",
"It evaluates AMR graphs by calculating precision based on n -gram overlap.",
"SEMBLEU yields scores more consistent with human evaluation than SMATCH (Cai and Knight, 2013b), an AMR metric which is the basis of COUNTER .",
"SEMBLEU cannot be directly used on DRS graphs due to the large amount of indexed variables and the fact that the graphs are not explicitly given; moreover, our metric outputs F1 scores instead of precision only.",
"Opitz et al. (2020) propose a set of principles for AMR-related metrics, showing the advantages and drawbacks of alignmentand BLEU-based AMR metrics.",
"However, efficiency of the metric is crucial for the development of document-level models of semantic parsing.",
"Basile and Bos (2013) propose to represent DRSs via Discourse Representation Graphs (DRGs) which are acyclic and directed.",
"However, DRGs are similar to flattened trees, and not able to capture clause-level information (e.g., b 1 Agent e 1 x 1 ) required for evaluation (van Noord et al., 2018a).",
"In this work we proposed DSCORER , as a DRS evaluation metric alternative to COUNTER .",
"Our metric is significantly more efficient than COUNTER and considers high-order DRSs.",
"DSCORER allows to speed up model selection and development removing the bottleneck of evaluation time.",
"We thank the anonymous reviewers for their feedback.",
"We gratefully acknowledge the support of the European Research Council (Lapata, Liu; award number 681760), the EU H2020 project SUMMA (Cohen, Liu; grant agreement 688139) and Bloomberg (Cohen, Liu)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"other",
"other"
] |
[
"The discrepancy between maximum likelihood estimation (MLE) and task measures such as BLEU score has been studied before for autoregressive neural machine translation (NMT) and resulted in alternative training algorithms (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Wu et al., 2018).",
"However, MLE training remains the de facto approach for autoregressive NMT because of its computational efficiency and stability.",
"Despite this mismatch between the training objective and task measure, we notice that the samples drawn from an MLE-based trained NMT support the desired distribution there are samples with much higher BLEU score comparing to the beam decoding output.",
"To benefit from this observation, we train an energy-based model to mimic the behavior of the task measure (i.e., the energy-based model assigns lower energy to samples with higher BLEU score), which is resulted in a re-ranking algorithm based on the samples drawn from NMT: energy-based re-ranking (EBR).",
"We use both marginal energy models (over target sentence) and joint energy models (over both source and target sentences).",
"Our EBR with the joint energy model consistently improves the performance of the Transformer-based NMT: +3.7 BLEU points on IWSLT'14 German-English, +3.37 BELU points on Sinhala-English, +1.4 BLEU points on WMT'16 English-German tasks.",
"Autoregressive models are widely used for neural machine translation (NMT) (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017).",
"The autoregressive factorization provides a tractable likelihood computation as well as efficient sampling.",
"The former results in the effective maximum likelihood estimation (MLE) for training the Amirmohammad Rooshenas is the corresponding author.",
"parameters of NMT models.",
"However, optimizing likelihood does not guarantee an improvement in task-based measures such as the BLEU score, which has motivated directly optimizing task measures with reinforcement learning (Ranzato et al., 2016; Norouzi et al., 2016; Shen et al., 2016; Bah-danau et al., 2017; Wu et al., 2018).",
"However, for NMT, these training algorithms are often used in conjunction with MLE training (Wu et al., 2018) or as fine-tuning (Choshen et al., 2020).",
"Interestingly, we observe that samples drawn from an NMT model trained using MLE may have higher quality (measured with BLEU) than the outputs of beam search.",
"In particular, we draw 100 target samples for each source sentence from an NMT model trained using MLE on the IWSLT'14 German-English task, and observe that an oracle ranker i.e. argmax y PNMT ( y | x ) BLEU ( ., y ) , where ( x , y ) is the pair of source and gold target sentence achieves the high score of 67.54, while the beam decoding achieves 33.87.",
"We also look at the distribution of the Spearman rank correlation coefficient of the drawn samples with respect to the log probability score of the baseline NMT (BaseNMT).",
"Figure 1 shows that there is no strong correlation between the BLEU score ranking of samples and the log probability score ranking for the majority of source sentences; thus, maximum a priori (MAP) decoding is incapable of finding the desired output.",
"In parallel to our study, Eikema and Aziz (2020) also report that the mismatch regarding MLE training of autoregressive models is attributable to the distribution of the probability mass rather than the parameter estimation, resulting in a poor MAP decoding.",
"Instead of looking for an alternate algorithm for parameter estimation, these results motivate us to explore training a parametric approximation of the metric, here BLEU score: ( y , x ) BLEU ( y , y ) .",
"Therefore the decoding becomes: Figure 1: Distribution of the Spearman rank-order correlation coefficients for the training data (left) and test data (right) of the IWSLT'14 German-English task.",
"We use energy-based models (EBMs) to parameterize ( y , x ) .",
"EBMs (LeCun et al., 2006) are general parametric models that assign a scalar energy value to each configuration of input variables, thus defining an unnormalized probability distribution.",
"Although computing the partition function is intractable for general EBMs, we only require the relative energy of the sampled sentences from the BaseNMT model, thus canceling out the normalization constant.",
"In this paper we use two different energy-based models: marginal energy model (Marginal-EBM) defined only over target sentences and joint energy model (Joint-EBM) defined over both source and target sentences.",
"Figure 1 also shows the correlation coefficient of the energy ranking and BLEU score using both Marginal-EBM and Joint-EBM.",
"The shift in the coefficient distribution suggests that decoding based on energy scores results in better BLEU scores compared to decoding based on the log probability scores of the BaseNMT model.",
"Also we observe that Joint-EBM works better than using Marginal-EBM as Joint-EBM better captures the correlation of source and target sentences, while Marginal-EBM is not directly conditioned on the source sentence.",
"In this paper, we describe how to train EBMs 1 to achieve the desired ranking.",
"Our energy ranker consistently improves the performance of Transformer-based NMT on German-English, Romanian-English and Italian-English tasks from IWSLT'14, the French-English task from IWSLT'17, German-English task from WMT'14, and English-German task from WMT'16, as well as the low-resource Sinhala-English and Nepali-English tasks described in the FLoRes dataset (Guzman et al., 2019).",
"Using EBM E to reweight the samples from an NMT defines a new probability distribution over the output sentences (see Grover et al. (2019)): P ( y | x ) PNMT ( y | x ) exp( E ( y , x ) T ) , where T is temperature.",
"The ideal re-ranker requires an EBM with the energy function E ( y , x ) such that P ( y | x ) and BLEU ( y , y i ) have similar modes for all ( x i , y i ) D , where D is an empirical data distribution.",
"To train we use rank-based training (Rohanimanesh et al., 2011; Rooshenas et al., 2018, 2019).",
"Rank-based training enforces that the samples from P ( . ) have similar ranking with respect to both the energy score and task measure (see Figure 2).",
"To sample from P ( y | x ) , we sample k sentences from PNMT ( y | x ) using multinomial sampling from locally normalized distributions over the output and reweight the samples based on the energy network exp( E ( y , x ) T ) .",
"Then we resample two sentences, y 1 and y 2 , from the renormalized set, which defines a conditional distribution: P i ( y | x ) = exp( E ( y , x ) /T ) (cid:80) k exp( E ( y k , x ) /T ) (a similar sampling approach has been used in Deng et al. (2020)).",
"Now we train the energy model such that the ranking of y 1 and y 2 with respect to the energy model is consistent with their ranking with respect to the task metric, BLEU score.",
"In general, we assume y h is the sentence with the higher BLEU score and y l is the sentence with with the lower BLEU score.",
"Therefore, the training objective of E ( y , x ) becomes: M = ( BLEU ( y h , y i ) BLEU ( y l , y i )) ( y i , x i ) = M + E ( y h , x i ) E ( y l , x i ) min (cid:88) ( y i , x i ) D max( ( y i , x i ) , 0) .",
"(1) Where ( y i , x i ) is the margin violation and is the margin weight.",
"Algorithm 1 outlines the whole training procedure.",
"If we define the energy only over sentences of the target language, E ( y ) , we can share the energy-model among multiple language pairs with the same target language.",
"In this case we have to, first, sample the language l from our language set and then sample a sentence pair from the selected language training set D l .",
"The probability of selecting a language is proportional to the number of sentences in its training set.",
"In this paper, we use BERT (Devlin et al., 2019) to parameterize both E ( y , x ) and E ( y ) .",
"Section 4.3 and 4.4 discuss the construction of E in detail.",
"Grover et al. (2019) show that importance weights can be used to make generative models better fit the desired data distribution: p ( y ) q ( y ) ( y ) , where q ( y ) is a generative model that we can effi-ciently take samples from and ( y ) is the importance weight function.",
"The importance weights can be determined using a discriminator that differentiates the generated samples from the target data.",
"Rosenfeld et",
"al.; Parshakova et al. (2001; 2019) define q ( y ) as autoregressive model and ( y ) using a log-linear model: ( y ) = exp( T ( y )) , where ( y ) is the vector of sufficient statistics (features) evaluated at y .",
"The log-linear model simplifies training the parameters : p ( y ) = (cid:80) y D ( y ) E y p ( . ) ( y ) .",
"The expectation term can be estimated using rejecting sampling or importance sampling given the proposal distribution q .",
"Deng et al. (2020) extend this approach for text generation by using unrestricted EBMs instead of log-linear models: ( y ) = exp( E ( y )) .",
"They train the EBM using noise contrastive estimation (Gutmann and Hyvarinen, 2010).",
"We find this less suitable for re-ranking in the translation tasks (see Section 4).",
"Discriminative re-ranking was first introduced by Shen et al. (2004) for improving the performance of machine translation (MT).",
"They have trained a linear separator using the perceptron learning algorithm to distinguish the top r translations from the rest of the translations in the n-best possible outputs.",
"The features for the discriminator are extracted from both source and target sentences.",
"Mizumoto and Matsumoto (2016) combine the score of MT and the linear model using more complex syntactical features to re-rank the target sentences.",
"Here, we rely on the features learned by BERT, and given the high capacity of the energy model, we train the energy model to respect the ranking of every pair of samples.",
"Gulcehre et al. (2017) describe using language model (LM) to improve the performance of NMT using shallow and deep fusion.",
"Shallow models combine the marginal probability of predicting each word in NMT and LM: log PNMT ( y i | y <i ) + log PLM ( y i | y <i ) , while deep fusion concatenates the hidden states of two models before predicting each word and uses parallel data to fine-tune the weights.",
"Similar to deep fusion, Domhan and Hieber (2017) feed the unnormalized output of LM to the decoder of NMT.",
"Domhan and Hieber (2017) jointly train the LM and NMT using monolingual target-side data and parallel data, respectively.",
"Sen-nrich et al. (2016a) augment the parallel training data with monolingual data with the target language and back-translation.",
"Re-ranking with LM has also been explored by Ng et al. (2019), where they decode the output based on log p ( y | x ) + 1 log p ( x | y ) + 2 log p ( y ) , where p ( y | x ) is the direct model provided by NMT, p ( x | y ) is computed via back-translation and p ( y ) is an LM.",
"Our approach differs from the previous methods that use LMs for re-ranking as we train our energy-based model to be consistent with the task measure instead of using pre-trained LMs.",
"In our experiments, we only explore the effect of using the direct model plus LM, nevertheless, back-translation can also be added into our model for further improvement.",
"Recently, Salazar et al. (2020) use masked language models (MLM) such as BERT to score hypotheses from NMT.",
"Salazar et al. (2020) describe the score of a MLM as pseudo-log-likelihood score (PLL).",
"To calculate PLL score of a sentence, each token w i in the sentence is sequentially masked, which allows the calculation of log p ( w i | w \\ i ) from the output of the MLM.",
"The normalized pseudo-log-probability of the sentence is the average of log-probability of the masked words given the rest of the words in the sentence: 1 N (cid:80) Ni =1 log p ( w i | w \\ i ) , where N is the length of the sentence.",
"We use this approach as one of our baselines.",
"In parallel to our work, Guo et al. (2020) proposes using two different BERT models as an encoder of the source language (X-BERT) and a decoder of the target language (Y-BERT).",
"Guo et al. (2020) add an extra trainable encoder-decoder adaption module followed by a feed-forward module to each layer of the decoder and a feed-forward module to each layer of the encoder.",
"(Please see Guo et al. (2020) for more detail on the",
"architec-ture.) For fine-tuning XY-BERT for translation tasks, Guo et al. (2020) keep all XY-BERT's parameters fixed except the parameters of the new modules, and use mask-predict decoding (Ghazvinine-jad et al., 2019) for running test-time inference.",
"Guo et al. (2020) report a significant improvement over prior non-autoregressive models and superior performance comparing to autoregressive methods on IWSLT'14 German-English task.",
"Their finding is consistent with our improvement using the pretained BERT model.",
"However, our Joint-EBM model is a different way of using BERT for translation, which does not require separate BERT models for source and target language.",
"Please see Section 4.9 for a detailed comparison.",
"Finally, other works also discuss using BERT to improve the performance of NMT.",
"Clinchant et al. (2019) describe initializing the embedding or the whole encoder with BERT's parameters.",
"Zhu et al. (2020) use an attention model to incorporate the output of BERT into encoder and decoder of NMT.",
"In our approach, we use BERT as an external energy-based ranker.",
"We use German-English (De En), Romanian-English (Ro En) and Italian-English (It En) from IWSLT'14 datasets and French-English (Fr En) from IWSLT'17 translation tasks.",
"We also use IWSLT'14 English-German (En De) to show that the proposed method can be expanded to translation tasks with a different target language.",
"All sentences were preprocessed using byte-pair-encoding (Sennrich et al., 2016b).",
"For all language pairs in IWSLT'14 and IWSLT'17, we merge the test datasets tst2010, tst2011, tst2012 and report BLEU on the merged dataset.",
"We also use German-English (De En) from the WMT'14 and English-German (En De) from WMT'16 translation tasks.",
"Finally, we use low-resource translation tasks Nepali-English (Ne En) and Sinhala-English (Si En) from FLoRes (Guzman et al., 2019) translation tasks.",
"We follow dataset distribution and preprocessing steps described in Guzman et al. (2019) using the FLoRes implementation.",
"FLoRes dataset contains development (dev), devtest and test dataset for both language pairs.",
"Similar to Guzman et al. (2019) we use the devtest dataset for all our evaluations.",
"We use the Transformer 2 (Vaswani et al., 2017) as our BaseNMT.",
"Our Transformer architecture includes six encoder and six decoder layers, and the number of attention heads, embedding dimension and inner-layer dimension are 8, 512 and 4096, respectively.",
"We use dropout, weight decay, label smoothing to regularize our models.",
"We use layer normalization and early stopping.",
"Models are optimized using Adam (Kingma and Ba, 2015) with parameters 1 = 0 .",
"9 , 2 = 0 .",
"98 , and (cid:15) = 1 e 8 and we use the same learning rate scheduler as Ott et al. (2019).",
"We trained our models on 1 Nvidia TITANX GPU.",
"2 We use the implementation in Opennmt (Klein et al., 2017) and Fairseq (Ott et al., 2019) toolkits.",
"BaseNMT + Si En + De En + Fr En all 7.10 8.62 9.30 9.76 10.29 4.3 Marginal-EBM To construct the energy network over the sentences of the target language, we use a pretrained BERT (Devlin et al., 2019) from Hugging-face (Wolf et al., 2019) as our pretrained language model and project the hidden state of BERT for each output token into a scalar value and define the energy value of the target sentence as the average of the scalar values.",
"We use the BERT-base uncased model with 12 encoder layers, 768 hidden state dimension, 12 attention heads and 110M parameters.",
"For the projection layer, we use a 2-layer MLP with 256 hidden variables.",
"In our experiments, we only train the parameters of the projection layer and the rest of BERT's parameters remain frozen.",
"We use margin weight of = 10 and temperature T = 1000 for our experiments.",
"We regularize the projection layer using L2 regularization.",
"Models are optimized using Adam (Kingma and Ba, 2015) with parameters 1 = 0 .",
"9 , 2 = 0 .",
"98 , and (cid:15) = 1 e 8 and a learning rate of 0 .",
"01 .",
"We run all experiments on 1 Nvidia TESLA M40 GPU.",
"Joint-EBM must assign a score to a pair of sentences from source and target languages, so to construct the Joint-EBM, similar to Marginal-EBM, we need a Joint-BERT.",
"We feed the sentence pairs from source and target languages jointly to BERT, thus the name Joint-BERT.",
"Since Joint-BERT has not been pre-trained to accept pairs of sentences from two different languages, we fine-tune it for 12 epochs using the input format of [CLS]Source[SEP]Target[SEP] with the pairs of source and target sentences for each translation task.",
"For fine-tuning, we only mask the tokens of the target sentence.",
"For all translation tasks we use the BERT-Base, Multilingual Cased model with 12 encoder layers, 768 hidden state dimension, 12 attention heads and 110M parameters.",
"After fine-tuning Joint-BERT, we follow the same architecture as Marginal-EBM for the Joint-EBM.",
"As the main baseline, we run beam decoding with a beam size of five over the trained BaseNMT (BaseNMT+Beam).",
"We also use the samples drawn from the BaseNMT and report the BLEU score of the sample with the highest log-probability score on BaseNMT (BaseNMT+Sample).",
"For all methods we use 100 target samples for each source sentence.",
"BaseNMT+LM draws samples from the BaseNMT and uses log PNMT ( y | x ) + log PLM ( y ) to rank the samples ( = 0 . 01 out of the set of { 0.001, 0.01, 0.1 } results in the best performance).",
"In our BaseNMT+LM baseline, we use pretrained language model to calculate log PLM ( y ) .",
"For the { De, Fr, It, Ro, Si, Ne } En tasks, we use a pretrained Transformer-XL (Dai et al., 2019) transfo-xl-wt103 and for the En De task we use a pretrained XLM (Lample and Con-neau, 2019) xlm-mlm-ende-1024 from Hugging-face (Wolf et al., 2019).",
"BaseNMT+MLM is similar to BaseNMT+LM but it uses log PNMT ( y | x ) + log PMLM ( y ) , where PMLM is the average pseudo-log-probability of sample y calculated using BERT.",
"We use the same architecture of BERT as Marginal-EBM, but we fine-tuned BERT for MLM over the target sentences in training sets for 10 epochs.",
"We tuned similar to BaseNMT+LM.",
"EBR is our method that uses rank-based training for EBMs.",
"We explore EBR with Marginal-EBM (Marginal-EBR) and Joint-EBM (Conditional-EBR).",
"We also use noise-contrastive estimation to train our Marginal-EBM, similar to Deng et al. (2020), which we refer to as NCE-EBR.",
"Next, we have Shared-EBR that trains single Marginal-EBM for the tasks with the same target language.",
"Shared-EBR is only trained on IWSLT and FLoRes tasks with English target.",
"For this method, we first sample a translation task and then sample a batch from that task and follow Algorithm 1 for the training of the Marginal-EBM.",
"Finally, as an upper bound for the best achievable result, we also extract the translations from the sample that are closest to the gold data (based on BLEU score).",
"Table 1 shows the performance of the described methods for IWSLT, FLoRes, and WMT translation tasks.",
"3 BaseNMT+Sample achieves a better score than beam decoding suggesting that our multinomial sampling supports the modes of the distribution defined by the BaseNMT.",
"Similarly, oracle values are high, indicating that the samples also support the desired distribution.",
"This satisfies the necessary condition for P ( y | x ) PNMT ( y | x ) exp( E ( y , x ) /T ) to be closer to the desired distribution.",
"Re-ranking with a language model using BaseNMT+LM improves over BaseNMT+Sample for De En, Fr En, It En, and En De, but fails on Ro En, Si En, and Ne En.",
"However, in all of these tasks, the difference between BaseNMT+Sample and BaseNMT+LM is not substantial.",
"BaseNMT+MLM is consistently better than BaseNMT+LM.",
"The performance of BaseNMT+MLM is attributable to PLL scoring, as the encoder has the global information over the sentence.",
"Marginal-EBR performs considerably better than BaseNMT+ { Beam, Sample, LM, MLM } and better than NCE-EBR on all tasks except on Ne En, where NCE-EBR outperforms Marginal-EBR.",
"The main advantage of Marginal-EBR over NCE-EBR is the use of only sampled data instead of gold data for training.",
"See Section 4.7 for detailed discussion.",
"Shared-EBR has a significant improvement over the Marginal-EBR, especially it improves the low-resource task of Si En by more than 2 BLEU points.",
"For this task, we also show that how using more language pairs in training improves performance (Table 2).",
"Conditional-EBR outperforms Shared-EBR on all tasks.",
"The performance of Conditional-EBR is 3 We use SacreBLEU (Post, 2018) as a consistent BLEU implementation for all of our experiments.",
"due to the use of Joint-EBM model, which enables the model to define different energy landscapes for different source sentences.",
"Therefore, samples from the target language are more separable given the source sentence, while Marginal-EBM may not distinguish target sentences for different source sentences.",
"The translation improvement of using EBR on IWSLT and FLoRes translation tasks are more considerable than the improvement of using EBR on WMT tasks.",
"We believe that pre-trained BERT helps low-resource tasks more than large-scale translation tasks.",
"Noise-contrastive estimation (NCE) trains the energy model using a discriminative training to distinguish gold data from the sampled data (Gutmann and Hyvarinen, 2010; Deng et al., 2020).",
"In contrast to the NCE-EBR, EBR does not directly use gold data in the training of the EBM, but only exploit it to determine the rank of two points as well as the margin.",
"To show that our approach is effective, we introduce parameter as the percentage of the time that we can use gold data as one of the points (for example, y h in Algorithm 1).",
"Table 3 shows the results for both De En and Fr En tasks using Marginal-EBR.",
"As we increase the value of , the performance of Marginal-EBR drops.",
"The main reason is that BaseNMT rarely produces the exact correct translation in the sample set, thus learning the ranking with respect to the gold data is not very informative.",
"When the is zero, the Marginal-EBM learns to re-rank the samples with respect to their distance to the gold data.",
"We hypothesize that the performance of EBR improves as we increase the support of the base distribution toward the mode of the true distribution.",
"To show that we add an entropy regularization term to the likelihood training of BaseNMT: max (cid:88) ( x , y ) D (cid:88) i log p ( y i | y <i , x ) (cid:88) i p ( y i ) log p ( y i ) .",
"(2) Entropy regularization improves the diversity of samples, and as a result, Oracle's score increases by 0.67 BLEU points.",
"While BaseNMT only benefits less than 0.1 BLEU points from the regularization, Conditional-EBR improves by 0.3 BLEU points (see Table 4).",
"For this study we explored from { 0.01, 0.1 } , and reported results use = 0 .",
"01 selected based on the validation set.",
"BaseNMT trained with = 0 .",
"1 has the Oracle score of 65.76 on the test set (comparing to the Oracle score of 68.21 for = 0 . 01 ), which indicates that stronger regularization reduces the sample quality.",
"To explore the effect of a different way of conditioning on the source language, we compare the EBM constructed using the Joint-BERT model with EBM constructed using recently introduced XY-BERT (Guo et al., 2020).",
"To construct EBM from XY-BERT, we remove the output layer and project each hidden-state of the final layer to a scalar energy value similar to how we build EBM from BERT.",
"We compare these two models on IWSLT'14 De En task.",
"For XY-BERT we use German BERT for the encoder and English BERT for the decoder, following Guo et al. (2020).",
"Our Joint-BERT uses Multilingual BERT because we feed both source and target sentences to BERT jointly.",
"Conditional-EBR with XY-BERT achieves 38.33 BLEU score, which is 0.75 BLEU points higher than Conditional-EBR with Joint-BERT and improves the performance of XY-BERT with mask-predict decoding (Ghazvininejad et al., 2019) by 1.84 BLEU points.",
"4 We believe that the improvement in Conditional-EBR using XY-BERT is mostly attributable to using specialized BERT models.",
"Moreover, XY-BERT has extra trainable modules, so we could fine-tune XY-BERT on the trans-4 Guo et al. (2020) report 36.49 BLEU score using XY-BERT with 10 iterations of mask-predict decoding.",
"lation task for 60 epochs, while keeping the rest of the parameters fixed without causing catastrophic forgetting.",
"Joint-BERT, on the other hand, does not have any extra parameters, so we fine-tuned all parameters for only 15 epochs.",
"Further training of Joint BERT resulted in poor performance.",
"We leave adding extra modules for better fine-tuning of Joint BERT for future studies.",
"As another comparison, we train our models by directly maximizing the expected BLEU score (com-pared to rank-based training):",
"We use log-trick to calculate the gradient of the above objective:",
"We use self-normalized importance sampling to draw samples from the energy-based model.",
"We use one sample to approximate the outer expectation and 10 samples to approximate the inner expectation.",
"We train both Marginal-EBM and Joint-EBM by maximizing the expected BLEU score on IWSLT'14 DE-EN.",
"The former obtains a score of 34.20 BLEU and the latter achieves 34.77 BLEU points.",
"Both models underperform rank-based training.",
"We compare the inference latency of EBR variations with BaseNMT (Table 5).",
"We use 100 samples for re-ranking using Marginal-EBR, Conditional-EBR with Joint-BERT and Conditional EBR with XY-BERT (Guo et al., 2020).",
"Inference on Marginal-EBR takes on average about 170 milliseconds per sentence more than inference in BaseNMT as we have to sample 100 sentences from BaseNMT and evaluate them on the energy model.",
"We evaluate the Marginal-EBR only on the target sentences, while we evaluate Conditional-EBR for sequences from both source and target language, so the input sequence of Conditional-EBR is longer, thus having higher latency comparing to Marginal-EBR.",
"We also measure the latency of Conditional-EBR when we use XY-BERT architecture to construct Joint-EBM.",
"In this case, we have Table 5: Average inference time per sentence (millisec-onds), baseline transformer uses beam width of 5 and EBR uses 100 samples per sentence.",
"two separate BERT models for source and target languages, increasing the number of parameters by 3.3 million and latency by about 90 milliseconds per sentence compared to Conditional-EBR that uses the Joint-BERT model.",
"In this section, we study the sentence preference of Marginal-EBR created by the energy ranking.",
"We qualitatively investigate how the output of Marginal-EBR differs from that of BaseNMT model.",
"On the IWSLT'14 test set, we examined 200 examples on which Marginal-EBR did better than NMT and 200 examples where BaseNMT is better.",
"We find that about 30% of the time, the Marginal-EBR model chooses a translation with changed pronoun.",
"Another frequent preference' Marginal-EBR makes compared to BaseNMT is to use the contraction form.",
"Since this IWSLT data set is from TED talk, we conjecture that the energy model favors the translations that are in more oral style.",
"Besides, it is also common for the Marginal-EBR model to prefer rephrases, for example, instead of using will' as used in BaseNMT, Marginal-EBR chooses the form am going to'.",
"Finally, we find, for some pairs, Marginal-EBR chooses a different tense compared to the BaseNMT model (from MAP decoding).",
"Table 6 presents quintessential examples we find after examining 400 examples on IWSLT'14 De En test set.",
"It is worth to mention that examples do not strictly land in only one category.",
"For example, the sentences we show in the Rephrase type will also be counted as the change of pronouns. With this in mind, we compute statistics over the 400 sentences and find each of the Pronoun', Con-traction' and Rephrase' appears approximately 30% of the time while 10% of the sentences change Tense'.",
"The other less frequent types are changing of determiners, prepositions and deletion (compar-ing the MAP decoding of BaseNMT and preferred Type Example Pronoun N: to us , he meant the freedom .",
"Besides the qualitative analysis, we are also curious to see whether the improvement is affected by length.",
"Table 7 shows the BLEU scores on the IWSLT'14 test set, which is divided into three bins according to the target length.",
"Shorter sentences have the largest increase in BLEU, and the gain is decreasing as length increases.",
"We reckon that it is easier for EBR to cover larger training space for sentences of shorter length and thus has the largest improvement in BLEU for these sentences.",
"In the absence of access to the source sentence, the energy model ranks the outputs purely according to the features of target sentences.",
"We hypothesize that the energy model is better at differentiating incoherent and coherent sentences and manage to show that through the following analysis.",
"We apply two kinds of shuffle on IWSLT'14 test set targets: (1) global shuffle: tokens in the sentence are randomly shuffled (2) local shuffle: we first randomly select a token and randomly shuffle the tokens within a local window of three.",
"Then we compute the energy scores of these shuffled sentences as well as the untouched ones.",
"The energy scores are listed in Table 8.",
"(The energy model assign a lower energy to its preference.)",
"We find 87% Table 8: Energy scores of randomly shuffled sentences as well as original targets on IWSLT'14 De En test set.",
"of the time, the energy model is able to distinguish the original sentence from a local shuffled one, and 90.5% from the global shuffled one.",
"This supports our hypothesis that the energy model is capable of capturing the fluency of generated candidates.",
"We introduce energy-based re-ranking (EBR) to improve the performance of autoregressive neural machine translation.",
"Despite its superior performance, EBR suffers from high latency because of its dependency on sampling from an autoregressive model.",
"Directly sampling from the underlying EBM can speed up the inference, which is our future direction in order to benefit from the power of energy-based models for machine translation."
] | [
"abstain",
"abstain",
"objective",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method"
] |
[
"Reading strategies have been shown to improve comprehension levels, especially for readers lacking adequate prior knowledge.",
"Just as the process of knowledge accumulation is time-consuming for human readers, it is resource-demanding to impart rich general domain knowledge into a deep language model via pre-training.",
"Inspired by reading strategies identified in cognitive science, and given limited computational resources just a pre-trained model and a fixed number of training instances we propose three general strategies aimed to improve non-extractive machine reading comprehension (MRC):",
"(i) BACK ANDFORTHREADING that considers both the original and reverse order of an input sequence,",
"(ii) HIGHLIGHTING , which adds a trainable embedding to the text embedding of tokens that are relevant to the question and candidate answers, and",
"(iii) SELFASSESSMENT that generates practice questions and candidate answers directly from the text in an unsupervised manner.",
"By fine-tuning a pre-trained language model (Radford et al., 2018) with our proposed strategies on the largest general domain multiple-choice MRC dataset RACE, we obtain a 5 .",
"8% absolute increase in accuracy over the previous best result achieved by the same pre-trained model fine-tuned on RACE without the use of strategies.",
"We further fine-tune the resulting model on a target MRC task, leading to an absolute improvement of 6 .",
"2% in average accuracy over previous state-of-the-art approaches on six representative non-extractive MRC datasets from different domains (i.e., ARC, OpenBookQA, MCTest, SemEval-2018 Task 11, ROCStories, and MultiRC).",
"These results demonstrate the effectiveness of our proposed strategies and the versatility and general applicability of This work was done when K. S. was an intern at the Tencent AI Lab, Bellevue, WA.",
"our fine-tuned models that incorporate these strategies.",
"Core code is available at https: //github.com/nlpdata/strategy/ .",
"Recent years have seen a growing interest in machine reading comprehension (MRC) (Rajpurkar et al., 2016; Choi et al., 2018; Kocisk`y et al., 2018; Reddy et al., 2018).",
"In this paper, we mainly focus on non-extractive MRC (Khashabi et al., 2018; Ostermann et al., 2018; Clark et al., 2018), in which a significant percentage of candidate answers are not restricted to text spans from the reference document or corpus.",
"In comparison to extractive MRC tasks (Section 2.1), non-extractive MRC (Section 2.2) requires diverse reading skills and, as a result, the performance of machine readers on these tasks more accurately indicates the comprehension ability of machine readers in realistic settings such as exams (Lai et al., 2017).",
"Recently, significant progress has been achieved on many natural language processing tasks including MRC by fine-tuning a pre-trained general-purpose language model (Radford et al., 2018; Devlin et al., 2018).",
"However, similar to the process of knowledge accumulation for human readers, it is time-consuming and resource-demanding to impart massive amounts of general domain knowledge from external corpora into a deep language model via pre-training.",
"For example, it takes a month to pre-train a 12 -layer transformer on eight P 100 GPUs over the BooksCorpus (Zhu et al., 2015; Radford et al., 2018); Devlin et al. (2018) pre-train a 24 -layer transformer using 64 TPUs for four days on the BooksCorpus plus English Wikipedia, a feat not easily reproducible considering the tremendous computational resources ( one year to train on eight P 100 GPUs).",
"From a practical viewpoint, given a limited number of training instances and a pre-trained model, can we improve machine reading comprehension during fine-tuning instead of imparting more prior knowledge into a model via expensive pre-training?",
"Inspired by reading strategies identified in cognitive science research that have been shown effective in improving comprehension levels of human readers, especially those who lack adequate prior knowledge of the topic of the text (Mokhtari and Sheorey, 2002; Mokhtari and Reichard, 2002; McNamara et al., 2004), we propose three corresponding domain-independent strategies to improve MRC based on an existing pre-trained transformer (Section 3.1): BACK ANDFORTHREADING ( I go back and forth in the text to find relationships among ideas in it. ): consider both the original and reverse order of an input sequence (Section 3.2) HIGHLIGHTING ( I highlight information in the text to help me remember it. ): add a trainable embedding to the text embedding of those tokens deemed relevant to the question and candidate answers (Section 3.3) SELF-ASSESSMENT ( I ask myself questions I would like to have answered in the text, and then I check to see if my guesses about the text are right or wrong. ): generate practice questions and their associated span-based candidate answers from the existing reference documents (Section 3.4) By fine-tuning a pre-trained transformer (Rad-ford et al., 2018) according to our proposed strategies on the largest general domain multiple-choice MRC dataset RACE (Lai et al., 2017) collected from language exams, we obtain a 5 .",
"8% absolute improvement in accuracy over the previous best result achieved by the same pre-trained transformer fine-tuned on RACE without the use of strategies (Section 4.2).",
"We further fine-tune the resulting model on a target MRC task.",
"Experiments show that our method achieves new state-of-the-art results on six representative non-extractive MRC datasets that require a range of reading skills such as commonsense and multi-sentence reasoning (i.e., ARC (Clark et al., 2016, 2018), OpenBookQA (Mihaylov et al., 2018), MCTest (Richardson et al., 2013), SemEval-2018 Task 11 (Yang et al., 2017), ROCStories (Mostafazadeh et al., 2016), and MultiRC (Khashabi et al., 2018)) (Section 4.4).",
"These results indicate the effectiveness of our proposed strategies and the versatility and generality of our fine-tuned models that incorporate the strategies.",
"We roughly categorize machine reading comprehension tasks into two groups: extractive (Sec-tion 2.1) and non-extractive (Section 2.2) based on the expected answer types.",
"Recently large-scale extractive MRC datasets have been constructed (Hermann et al., 2015; Hill et al., 2016; Onishi et al., 2016; Chen and Choi, 2016; Mostafazadeh et al., 2016; Bajgar et al., 2016; Nguyen et al., 2016; Joshi et al., 2017; Ma et al., 2018), such as SQuAD (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2017).",
"Given a reference document and a question, the expected answer is a short span from the document.",
"In contrast, answers in datasets such as SearchQA (Dunn et al., 2017) and NarrativeQA (Kocisk`y et al., 2018) are free-form human generated texts based on given documents (Nguyen et al., 2016; Reddy et al., 2018; Choi et al., 2018).",
"However, since annotators tend to directly copy spans as answers, the majority of answers are still extractive (Reddy et al., 2018; Kocisk`y et al., 2018).",
"In this section, we primarily discuss multiple-choice MRC datasets, in which answer options are not restricted to extractive text spans.",
"Given a question and a reference document/corpus, multiple answer options are provided, and at least one of them is correct.",
"It involves extensive human efforts to build such a dataset (e.g., MCTest (Richardson et al., 2013), SemEval-2018 Task 11 (Ostermann et al., 2018), MultiRC (Khashabi et al., 2018), and OpenBookQA (Mihaylov et al., 2018)) by crowdsourc-ing.",
"Besides crowdsourcing, datasets such as RACE (Lai et al., 2017) and ARC (Clark et al., 2018) are collected from language or science exams designed by educational experts (Penas et al., 2014; Shibuki et al., 2014; Tseng et al., 2016) to evaluate the comprehension level of human participants.",
"Compared to questions in extractive MRC tasks, besides surface matching, there are various types of complicated questions such as math word problems, summarization, logical reasoning, and sentiment analysis, requiring advanced read-RACE ARC OpenBookQA MCTest SemEval-2018 Task 11 ROCStories MultiRC construction method exams exams crowd.",
"ing skills and prior world knowledge.",
"Besides, in most cases, we can adopt an objective evaluation criteria such as accuracy to evaluate system performance (Clark et al., 2016; Lai et al., 2017).",
"As these kind of datasets are relatively difficult to construct or collect, most existing datasets are small in size, which hinders the development of state-of-the-art deep neural models.",
"In response, in this paper we investigate how to make use of limited resources to improve MRC, using seven representative multiple-choice MRC datasets as case studies.",
"As shown in Table 1, the majority of the correct answer options in most of the datasets (except for ARC and MCTest) are non-extractive.",
"Except for MultiRC, there is exactly one correct answer option for each question.",
"For ARC and OpenBookQA, a reference corpus is provided instead of a single reference document associated with each question.",
"Here we give a formal task definition .",
"Given a reference document d , a question q , and associated answer options { o 1 , o 2 , . . . , o m } , the goal is to select the correct answer option(s).",
"We can easily adapt our method to an MRC task that only provides a reference corpus (Section 4.4).",
"We first introduce a neural reader based on a pre-trained transformer (Section 3.1) and then elaborate on the strategies that are applied during the fine-tuning stage back and forth reading (Sec-tion 3.2), highlighting (Section 3.3), and self-assessment (Section 3.4).",
"Our neural reader follows the framework of discriminatively fine-tuning a generative pre-trained transformer (GPT) (Radford et al., 2018).",
"It adapts a pre-trained multi-layer transformer (Vaswani et al., 2017; Liu et al., 2018) language model to a labeled dataset C , where each instance consists of a sequence of input tokens x 1 , . . . , x n , along with a label y , by maximizing: (cid:88) x,y log P ( y | x 1 , . . . , x n ) + L ( C ) (1) where L is the likelihood from the language model, is the weight of language model, and P ( y | x 1 , . . . , x n ) is obtained by a linear classification layer over the final transformer block's activation of the language model.",
"For multiple-choice MRC tasks, x 1 , . . . , x n come from the concatenation of a start token, a reference document, a question, a delimiter token, an answer option, and an end token; y indicates the correctness of an answer option.",
"We refer readers to Radford et al. (2018) for more details.",
"Apart from placing a delimiter to separate the answer option from the document and question, the original framework pays little attention to task-specific structures in MRC tasks.",
"Inspired by reading strategies, with limited resources and a pre-trained transformer, we propose three strategies to improve machine reading comprehension.",
"We show the whole framework in Figure",
"1. 3.2 Back and Forth Reading (BF) For simplicity, we represent the original input sequence of GPT during fine-tuning (Radford et al., 2018) as [ dq $ o ], where [, $ , and ] represent the start token, delimiter token, and end token, respectively.",
"Inspired by back and forth reading, we consider both the original order and the reverse order [ o $ qd ].",
"The token order within d , q , and o is still preserved.",
"We fine-tune two GPTs that use [ dq $ o ] and [ o $ qd ] as the input sequence respectively, and then we ensemble the two models.",
"We also consider other similar pairs of input sequences such as [ qd $ o ] and [ o $ dq ] in the experiments (Sec-tion 4.3).",
"In the original implementation (Radford et al., 2018), during the fine-tuning stage of GPT, the text embedding of a document is independent of its associated questions and answer options.",
"Inspired by highlights used in human reading, we aim to make the document encoding aware of the associated question-answer option pair ( q , o i ).",
"We focus on the content words in questions and answer options since they appear to provide more useful information (Mirza and Bernardi, 2013), and we identify them via their part of speech (POS) tags, one of: noun, verb, adjective, adverb, numeral, or foreign word.",
"Formally, we let T be the set of POS tags of the content words.",
"We let d denote the sequence of the text embedding of document d .",
"We use d j to represent the j th token in d and d j to denote the text embedding of d j .",
"Given d and a ( q , o i ) pair, we define a highlight embedding h ji for the j th token in d as: h ji = (cid:96) + if the POS tag of d j belongs to T , and d j appears in either q or o i (cid:96) otherwise (2) where (cid:96) + and (cid:96) are two trainable vectors of the same dimension as d j .",
"Following the above definition, the sequence of the highlight embedding h i = h 1 i , h 2 i , . . . , h ni is of the same length as d .",
"We replace d with d i = d + h i when we encode a document.",
"More specifically, we use the concatenation of b , d i , q , l , o i , and e as the new input of GPT during fine-tuning (Section 3.1), where b , l , and e denote the embedding of the start token, delimiter token, and end token, respectively, and q and o i represent the sequence of the text embedding of q and o i , respectively.",
"While in previous work (Radford et al., 2018), the original GPT is directly fine-tuned on an MRC end task, we instead develop a fine-tuning approach inspired by the self-assessment reading strategy.",
"In particular, we propose a simple method to generate questions and their associated multiple span-based answer options, which cover the content of multiple sentences from a reference document.",
"By first fine-tuning a pre-trained model on these practice instances, we aim to render the resulting fine-tuned model more aware of the input structure and to integrate information across multiple sentences as may be required to answer a given question.",
"on each document from the end task (i.e., RACE in this paper).",
"We describe the steps as follows.",
"Input: a reference document from the end task.",
"Output: a question and four answer options associated with the reference document.",
"1. Randomly pick no more than n s sentences from the document and concatenate these sentences together.",
"2. Randomly pick no more than n c nonoverlapping spans from the concatenated sentences.",
"Each span randomly contains no more than n t tokens within a single sentence.",
"We concatenate the selected spans to form the correct answer option.",
"We remove the selected spans from the concatenated sentences and use the remaining text as the question.",
"3. Generate three distractors (i.e., wrong answer options) by randomly replacing spans in the correct answer option with randomly picked spans from the document.",
"where n q , n s , n c , and n t are used to control the number and difficulty level of the questions.",
"For most of the hyperparameters, we follow the work of Radford et al. (2018).",
"We use the same preprocessing procedure and the released pre-trained transformer.",
"We generate 119 k instances based on the reference documents from the training and development set of RACE (Lai et al., 2017), with n q = 10 , n s = 3 , n c = 4 , and n t = 4 (Section 3.4).",
"We first fine-tune the original pre-trained model on these automatically generated instances with 1 training epoch (data flow 1 boxed in Figure 1).",
"We then fine-tune the model on a large-scale general domain MRC dataset RACE with 5 training epochs (data flow 2 boxed in Figure 1).",
"Finally, we fine-tune the resulting model on one of the aforementioned six out-of-domain MRC datasets (at max 10 epochs).",
"See data flow 3 boxed in Figure",
"1. When we fine-tune the model on different datasets, we set the batch size to 8 , language model weight to 2 .",
"We ensemble models by averaging logits after the linear layer.",
"For strategy highlighting (Section 3.3), the content-word POS tagset T = { NN, NNP, NNPS, NNS, VB, VBD, VBG, VBN, VBP, VBZ, JJ, JJR, JJS, RB, RBR, RBS, CD, FW } , and we randomly initialize (cid:96) + and (cid:96) .",
"In Table 2, we first report the accuracy of the state-of-the-art models (MMN and original fine-tuned GPT) and Amazon Turkers (Human perfor-mance).",
"We then report the performance of our implemented fine-tuned GPT baselines and our models (GPT+Strategies).",
"Results are shown on the RACE dataset (Lai et al., 2017) and its two subtasks: RACE-M collected from middle school exams and RACE-H collected from high school exams.",
"Our single and ensemble models outperform previous state-of-the-art (i.e., GPT and GPT ( 9 )) by a large margin ( 63 . 8% vs. 59 . 0% ; 66 . 7% vs. 60 . 6% ).",
"The two single-model strategies self-assessment and highlighting improve over the single-model fine-tuned GPT baseline ( 58 . 7% ) by 1 .",
"7% and 4 .",
"5% , respectively.",
"Using the back and forth reading strategy, which involves two models, gives a 3 .",
"0% improvement in accuracy compared to the ensemble of two original fine-tuned GPTs ( 59 . 6% ).",
"Strategy combination further boosts the performance.",
"By combining self-assessment and highlighting, our single model achieves a 5 .",
"1% improvement in accuracy over the fine-tuned GPT baseline ( 63 . 8% vs. 58 . 7% ).",
"We apply all the strategies by ensembling two such single models that read an input sequence in either the original or the reverse order, leading to a 5 .",
"8% improvement in accuracy over the ensemble of two original fine-tuned GPTs ( 65 . 4% vs. 59 . 6% ).",
"To further analyze performance, we roughly divide the question types into five categories: detail ( facts and details ), inference ( reasoning ability ), main ( main idea or purpose of a document ), attitude ( author's attitude toward a topic or tone/source of a document ), and vocabulary ( vocabulary questions ) (Qian and Schedl, 2004; Lai et al., 2017) and annotate all the instances of the RACE development set.",
"As shown in Figure 2, compared to the fine-tuned GPT baseline, our single-model strategies (SA and HL) consistently improve the results across all categories.",
"Compared to other strategies, highlighting is likely to lead to bigger gains for most question types.",
"Compared to human performance, there is still a considerable room for improvements, especially on RACE-M.",
"We take a close look at the instances from the RACE-M development set that all our implementations fail to answer correctly.",
"We notice that 82 .",
"0% of them require one or multiple types of world knowledge (e.g., negation resolution, commonsense, paraphrase, and mathemat-ical/logic knowledge (Sugawara et al., 2017b,a, 2018)), especially when correct answer options are not explicitly mentioned in the reference document.",
"For example, we need the knowledge the type of thing that is written by a writer can probably be a book to answer the question follow your heart is a from the context Follow your heart by Andrew Matthews, an Australian writer, tells us that making our dreams real is life's biggest challenge .",
"Besides, 19 .",
"7% of these failed instances require coreference resolution.",
"It might be promising to leverage coreference resolvers to connect nonadjacent relevant sentences.",
"Besides the strategies introduced in Section 3, we also explore other reading strategies such as SUMMARIZATION ( I take an overall view of the text to see what it is about before carefully reading it. ) by appending an extractive summary (Boudin et al., 2015) before each reference document, which is shown less effective for machine reading comprehension in our experiments compared to the strategies we focus on.",
"In this section, we further discuss the three strategies.",
"Back and Forth Reading We notice that the input order difference between two ensemble models is likely to yield performance gains.",
"Besides ensembling two models that use input sequence [ dq $ o ] and [ o $ qd ] respectively, we also investigate other reverse or almost reverse pairs.",
"For example, we can achieve better results by ensembling [ qd $ o ] and [ o $ dq ] ( 61 . 0% ) or [ qd $ o ] and [ o $ qd ] ( 61 . 7% ), compared to the ensemble of two original fine-tuned GPTs (both of them use [ d $ qo ]) on the RACE dataset ( 59 . 6% in Table 2).",
"Highlighting We try two variants to define highlight embeddings (Equation 2 in Section 3.3) by considering the content of questions only or answer options only.",
"Experiments show that using partial information yields a decrease in accuracy ( 60 . 6% and 61 . 0% , respectively) compared to 63 .",
"2% (Table 2), achieved by considering the content words in a question and its answer options.",
"We attempt to also highlight the coreferential mentions of the content words, which does not lead to further gains, though.",
"Self-Assessment We explore alternative approaches to generate questions.",
"For example, we use the Wikipedia articles from SQuAD (Ra-jpurkar et al., 2016) instead of the general domain documents from the end task RACE.",
"We generate the same number of questions as the number of questions we generate using RACE following the same steps mentioned in Section 3.4.",
"Experiments show that this method also improves the accuracy of the fine-tuned GPT baseline ( 59 . 7% vs. 58 . 7% ).",
"As self-assessment can be somehow regarded as a data augmentation method, we investigate other unsupervised question generation methods such as sentence shuffling and paraphrasing via back-translation (Ding and Zhou, 2018; Yu et al., 2018).",
"Our experiments demonstrate that neither of them results in performance improvements on the RACE dataset.",
"We follow the philosophy of transferring the knowledge from a high-performing model pre-trained on a large-scale supervised data of a source task to a target task, in which only a small amount of training data is available (Chung et al., 2018).",
"RACE has been used to pre-train a model for other MRC tasks as it contains the largest number of general domain non-extractive questions (Table 1) (Ostermann et al., 2018; Wang et al., 2018a).",
"In our experiment, we also treat RACE as the source task and regard six representative non-extractive multiple-choice MRC datasets from multiple domains as the target tasks.",
"We require some task-specific modifications considering the different structures of these datasets.",
"In ARC and OpenBookQA, there is no reference document associated with each question.",
"Instead, a reference corpus is provided, which consists of unordered science-related sentences relevant to questions.",
"We therefore first use Lucene (McCandless et al., 2010) to retrieve the top 50 sentences by using the non-stop words in a question and one of its answer options as a query.",
"The retrieved sentences are used to form the reference document for each answer option.",
"In MultiRC, a question could have more than one correct answer option.",
"Therefore, we use a sigmoid function instead of softmax at the final layer (Figure 1) and regard the task as a binary (i.e., correct or incorrect) classification problem over each (docu-ment, question, answer option) instance.",
"When we adapt our method to the non-conventional MRC dataset ROCStories, which aims at choosing the correct ending to a four-sentence incomplete story from two answer options (Mostafazadeh et al., 2016), we leave the question context empty as no explicit questions are provided.",
"Since the test set of MultiRC is not publicly available, we report the performance of the model that achieves the highest micro-average F1 (F1 a ) on the development set.",
"For other tasks, we select the model that achieves the highest accuracy on the development set and report the accuracy on the test set.",
"We first fine-tune GPT using our proposed three strategies on RACE and further fine-tune the resulting model on one of the six target tasks (see Table 3).",
"During the latter fine-tuning stage, besides the highlighting embeddings inherited from the previous fine-tuning stage, we also apply the strategy back and forth reading , and we do not consider self-assessment since the model has already benefited from the high-quality RACE instances during the first fine-tuning stage.",
"We compare with the baselines that are first fine-tuned on RACE and then fine-tuned on a target task without the use of strategies, which already outperform previous state-of-the-art (SOTA) on four out of six datasets (OpenBookQA, SemEval-2018 Task 11, ROCStories, and MultiRC).",
"By using the strategies, we obtain a 7 .",
"8% absolute improvement in average accuracy over the ensemble baseline ( 58 . 5% ) and a 6 .",
"2% absolute improvement over previous SOTA ( 60 . 1% ).",
"To further investigate the contribution of the strategies, we directly fine-tune GPT on a target task without using the labeled data in RACE (i.e., we only keep data flow 3 in Figure 1).",
"Compared to the baseline that is fine-tuned without using strategies ( 54 . 6% ), we obtain a 10 .",
"4% relative improvement in average accuracy ( 60 . 3% ) and especially large improvements on datasets ARC, OpenBookQA, and MCTest (Table 4).",
"We primarily discuss methods applied to large-scale datasets such as RACE (Lai et al., 2017).",
"Researchers develop a variety of methods with attention mechanisms (Chen et al., 2016; Dhingra et al., 2017; Xu et al., 2018; Tay et al., 2018; Tang et al., 2019) for improvement such as adding an elimination module (Parikh et al., 2018) or applying hierarchical attention strategies (Zhu et al., 2018; Wang et al., 2018b).",
"These methods seldom take the rich external knowledge (other than pre-trained word embeddings) into considerations.",
"Instead, we investigate different strategies based on an existing pre-trained transformer (Radford et al., 2018) (Section 3.1), which leverages rich linguistic knowledge from external corpora and achieves state-of-the-art performance on a wide range of natural language processing tasks including machine reading comprehension.",
"Transfer learning techniques have been successfully applied to machine reading comprehension (Golub et al., 2017; Chung et al., 2018) and question answering (Min et al., 2017; Wiese et al., 2017).",
"Compared to previous work, we simply fine-tune our model on the source data and then further fine-tune the entire model on the target data.",
"The investigation of methods such as adding additional parameters or an L2 loss and fine-tuning only part of the parameters is beyond the scope of this work.",
"Previous methods augment the training data for extractive machine reading comprehension and question answering by randomly reordering words or shuffling sentences (Ding and Zhou, 2018; Li and Zhou, 2018) or generating questions through paraphrasing (Yang et al., 2017; Yuan et al., 2017), which require a large amount of training data or limited by the number of training instances (Yu et al., 2018).",
"In comparison, our problem (i.e., question and answer options) generation method does not rely on any existing questions in the training set, and the generated questions can involve the content of multiple sentences in a reference document.",
"Inspired by previous research on reading strategies for improved comprehension levels of human readers, we propose three strategies (i.e., back and forth reading, highlighting, and self-assessment), aiming at improving machine reading comprehension using limited resources: a pre-trained language model and a limited number of training instances.",
"By applying the proposed three strategies, we obtain a 5 .",
"8% absolute improvement in accuracy over the state-of-the-art performance on the RACE dataset.",
"By fine-tuning the resulting model on a new target task, we achieve new state-of-the-art results on six representative non-extractive MRC datasets from multiple domains that require a diverse range of reading skills.",
"These results consistently indicate the effectiveness of our proposed strategies and the general applicability of our fine-tuned model that incorporates these strategies.",
"We would like to thank the anonymous reviewers for their constructive suggestions.",
"We thank Hai Wang, Chengzhu Yu, and Chao Weng for their useful discussions.",
"We especially thank Chao for helping us speed up the release of the preprint 1 with technical supports.",
"We thank Rishi Bom-masani for proofreading the paper and Saku Sug-awara for sharing annotations with us."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"other",
"abstain",
"result",
"other",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"objective",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Parsing spoken dialogue poses unique diffi-culties, including disfluencies and unmarked boundaries between sentence-like units.",
"Previous work has shown that prosody can help with parsing disfluent speech (Tran et al., 2018), but has assumed that the input to the parser is already segmented into sentence-like units (SUs), which isn't true in existing speech applications.",
"We investigate how prosody affects a parser that receives an entire dialogue turn as input (a turn-based model ), instead of gold standard pre-segmented SUs (an SU-based model ).",
"In experiments on the English Switchboard corpus, we find that when using transcripts alone, the turn-based model has trouble segmenting SUs, leading to worse parse performance than the SU-based model.",
"However, prosody can effectively replace gold standard SU boundaries: with prosody, the turn-based model performs as well as the SU-based model (90.79 vs. 90.65 F1 score, respec-tively), despite performing two tasks (SU segmentation and parsing) rather than one (pars-ing alone).",
"Following Tran et al. (2019) and others, we use a human-generated gold transcript instead of an automatic speech recognition RETRACTED This paper was retracted.",
"Analysis shows that pitch and intensity features are the most important for this corpus, since they allow the model to correctly distinguish an SU boundary from a speech disfluency a distinction that the model otherwise struggles to make.",
"1 Introduction Parsing spoken dialogue poses unique difficulties: spontaneous speech is full of disfluencies, including false starts, repetitions, and filled pauses.",
"In addition, speech transcripts lack punctuation, which would otherwise help signal the boundaries of sentence-like units (SUs).",
"1 Because of these diffi-culties, current parsers struggle to accurately parse 1 We follow Kahn et al. (2004) in using the term sentence-like units' rather than sentences' throughout, since conversational speech doesn't always consist of syntactically complete sentences.",
"English speech transcripts, even when they handle other English text well.",
"However, research has shown that prosody can help with at least one of these problems, improving parsing performance for speech that contains disfluencies (Tran et al., 2018, 2019).",
"In this work, we hypothesize that incorporating prosodic features from the speech signal can actually help with both of these problems: not only parsing disfluent speech, but also parsing speech that isn't segmented into SUs.",
"Other researchers have augmented parsers with prosodic features, but always with the assumption that the parser has access to gold SU boundaries, which cannot be assumed in a deployed speech application.",
"For example, Gregory et al. (2004); Kahn et al. (2005) and Hale et al. (2006) incorporated prosody into statistical parsers or parse rerankers, with mixed results.",
"More recently, Tran et al. (2018) and Tran et al. (2019) found that prosody improved an end-to-end neural parser, with the most significant gains in disfluent sentences.",
"Parsing without access to gold SU boundaries is much more difficult: Kahn and Ostendorf (2012) showed that parsing quality depends on the quality of the sentence segmentation.",
"Furthermore, finding SU boundaries is not as simple as finding long pauses in speech, as we demonstrate below.",
"We hypothesize that access to prosodic features will help an English parser that has to both parse and correctly identify SU boundaries (which we call SU segmentation ).",
"We test this hypothesis by inputting entire dialog turns to a neural parser without gold SU boundaries.",
"We call this the turn-based model, and compare it to an SU-based model, which assumes gold SU boundaries and parses one SU at a time.",
"We use turns as our input unit because they resemble the input a dialog agent would receive from a user.",
"980 (ASR) transcript; we plan to use ASR output in future work.",
"We build on the work of Tran et al. (2018) and Tran et al. (2019), considering two different experimental conditions for each model: inputting text features only and inputting both text and prosodic features.",
"Using the Switchboard corpus of English conversational dialogue, we find that when only transcripts are used, the turn-based parser performs considerably worse than the SU-based parser, which is not surprising given that it needs to perform two tasks instead of one.",
"However, when prosodic features are included, there is no difference in performance between the turn-based and SU-based models, and both models outperform the text-only counterparts.",
"Our primary contributions are: We show that a parser that has access to prosody can perform both SU segmentation and parsing as well as a model that only has to parse.",
"We show that one difficultly for the prosody-free turn-based model is that it confuses speech disfluencies with SU boundaries, as illustrated in Figure 1. Further analysis indicates that adding pitch and intensity features can help the model to disambiguate the two, while pause and duration features do not.",
"RETRACTED This paper was retracted.",
"2 Background: prosody and syntax Prosodic signals divide speech into units (Pier-rehumbert, 1980).",
"The location and type of these prosodic units are determined by information structure (Steedman, 2000), disfluencies (Shriberg, 2001), and to some extent, syntax (Cutler et al., 1997).",
"Some psycholinguistic research shows that in experimental conditions, speakers can use prosody to predict syntax for example, that English speakers can use prosody to determine where to attach a modifier or prepositional phrase, or how to correctly group coordinands (e.g., Kjelgaard and Speer (1999); Speer et al. (1996); Warren et al. (1995)).",
"However, Cutler et al. (1997) argues that English speakers often fail to exploit this prosodic information even when it is present, so it isn't actually a signal for syntax in practice.",
"Many computational linguists have experimented with this possible link between syntax and prosody by incorporating prosody into syntactic parsers (e.g., Noeth et al. (2000); Gregory et al. (2004); Kahn et al. (2005); Tran et al. (2018)).",
"These models have had mixed success: For example, Gregory et al. (2004) found that prosody was at best a neutral addition to their model, while Kahn et al. (2005) found that prosody helped rerank PCFG output.",
"One possible reason that prosody is only somewhat effective in previous research is that prosodic units below the level of the SU do not always coincide with traditional syntactic constituents (Selkirk, 1995, 1984).",
"2 In fact, the only prosodic boundaries that consistently coincide with syntactic boundaries are the prosodic boundaries at the ends of SUs (Wagner and Watson, 2010).",
"The prosodic boundaries at the end of SUs are more distinctive (i.e., tending to correspond to longer pauses and more distinctive pitch and intensity variations) and less likely appear in any other location.",
"These features make prosody a reliable signal for SU boundaries, even though it is an unreliable signal for syntactic structure below the SU level.",
"Some researcheres have used this correlation between prosody and SU boundaries to help in SU boundary detection.",
"Examples of SU segmentation models that found prosodic cues were important include Gotoh and Renals (2000); Kolar et al. (2006); Kahn et al. (2004); Kahn and Ostendorf (2012), who all used traditional statistical models (e.g., HMMs, finite state machines, and decision trees), and Xu et al. (2014), who used a neural model.",
"Kahn et al. (2004) and Kahn and Ostendorf (2012) also looked at downstream parsing accuracy on the same corpus we use.",
"Like us, Kahn and Ostendorf (2012) don't use gold SU boundaries, but direct comparison is impossible because they use ASR output instead of human transcriptions and a different metric for parse performance (SPar-seval; Roark et al. (2006)).",
"However, they show that having access to gold SU boundaries increases the SParseval score from 78.5 to 82.3, which shows that parsing without gold SU boundaries is difficult.",
"However, in some research areas, prosody is less frequently used for SU detection.",
"Some ASR corpora and applications segment at relatively arbitrary boundaries such as long silences or even regular intervals (e.g., Jain et al. (2020)).",
"Other applications, such as speech translation, do require syntactically coherent input, but even there, systems targeting SUs have often used only textual features (Sridhar et al., 2013; Wan et al., 2020).",
"2 We refer here to traditional constituency parsing; CCG (Steedman and Baldridge, 2011) proposes different syntactic constituents that coincide with prosodic units.",
"TURN S { th} that's of course being facetious SBARQ { how do you } how do you feel",
"(a) Text+prosody model output TURN SQ do you feel { th} that's of course being facetious WHADVP how EDITED { how do you }",
"disfluencies (shown in curly braces) and an SU boundary.",
"(a), which matches the gold SU boundaries.",
"(b)).",
"Systems for restoring punctuation from ASR output must identify SU boundaries to correctly insert sentence-final punctuation, but these systems are typically evaluated on rehearsed monologues (such as TED talks) or read speech, which largely lack disfluencies (e.g., Federico et al. (2012)).",
"Here, we show that prosody is primarily helpful for distinguishing SU boundaries from disfluencies, so although some of these systems have used prosody (e.g., Tilk and Alumae (2016)), text-only systems are very competitive (e.g., Che et al. (2016); Alam et al. (2020)).",
"Even when SU boundaries are already known, other research in parsing conversational speech has shown that prosody helps identify and correctly handle disfluencies.",
"Tran et al. (2018) found that prosody only modestly affects parsing of fluent SUs, but has a marked effect on disfluent SUs.",
"This accords with other previous work that has found that prosody is helpful in disfluency detection (Zayats and Ostendorf, 2019) We discuss the relationship between prosody and disfluencies in greater detail in Section 6, including how prosody helps the model not to confuse disfluencies and SU boundaries, as shown in Figure 1 above.",
"This example shows how the two sentences in (1) would be fused RETRACTED This paper was retracted.",
"We use the American English corpus Switchboard NXT (henceforth SWBD-NXT) (Calhoun et al., 2010).",
"We choose this corpus mainly so we can compare performance with Tran et al. (2018) and Tran et al. (2019), as well as other earlier probabilistic models such as Kahn et al. (2005).",
"SWBD-NXT comprises 642 dialogues between strangers conducted by telephone.",
"These dialogues are transcribed and hand-annotated with Penn Treebank-style constituency parses.",
"We preprocess the transcripts to remove punctuation and lower-case all letters, making the input more like an ASR transcript that would be used in a deployed application.",
"The transcript divides the corpus into SUs and turns .",
"Since these SUs may be sentences or other syntactically independent units such as sentence fragments, we use the generic term sentence-like unit' (SU).",
"A turn is a contiguous span of speech by a single speaker.",
"Turns are hand-annotated in SWBD-NXT, but for a deployed dialog agent, a turn is simply whatever contiguous input the user gives.",
"Not all turns in the SWBD-NXT contain more than one SU: of a total 60.1k turns, 35.8k consist of a single SU.",
"The remaining 24.3k contain more than one SU; the majority (52.4 percent) of these contain just two SUs.",
"The average number of SUs per turn is 1.82.",
"We follow the general approach of Tran et al. (2018), but where they parse a single SU at a time, we give our parser a single dialog turn at a time for our turn-based model.",
"The model returns constituency parses for the turn in the form of Penn Treebank (PTB)-style trees.",
"In order to keep the output in the form of valid PTB trees, we add a top-level constituent, labelled TURN , to all turns, however many SUs they consist of.",
"(1) Separate SUs:",
"a. ( S ( NP Kim) ( VP sings))",
"b. ( S ( NP Sidney) ( VP dances)) (2) Merged into a single turn:",
"a. ( TURN ( S ( NP Kim) ( VP sings)) ( S ( NP Sidney) ( VP dances))) Of course, using turns instead of SUs leads to longer inputs.",
"We experiment with a pipeline approach (first segmenting turns into SUs, then parsing) as well as an end-to-end approach.",
"In the end-to-end approach, we can't handle extremely long inputs since these longer sequences lead to high memory usage for transformers.",
"We still want to capture the model's behavior on generally longer inputs, so we filter out two problematically long turns from the training set (out of 49,294 turns).",
"We do not have to remove any turns from the development or test sets.",
"This leaves the maximum turn length at 270 tokens.",
"We also remove any turns for which some or all speech features are missing from the corpus.",
"From the speech signal, we extract features for pauses between words, word duration, pitch, and intensity.",
"We largely follow the feature extraction procedure outlined in Tran et al. (2018) and Tran et al. (2019), which we summarize here, noting any deviations from or additions to their procedure.",
"3 The model is a neural end-to-end constituency 3 Original: https://github.com/trangham283/prosody nlp; our extended code: https://github.com/ekayen/prosody nlp RETRACTED This paper was retracted.",
"Pause features are extracted from the time-aligned transcript.",
"Each word's pause feature corresponds to the pause follows it.",
"Each pause is categorized into one of six bins by length in seconds: p > 1 , 0 .",
"2 < p 1 , 0 .",
"05 < p 0 .",
"2 , 0 < p 0 .",
"05 , p 0 (see below), and pauses where we are missing time-aligned data.",
"Following Tran et al. (2018), the model learns 32-dimensional embeddings for each pause category.",
"Since we use turns instead of SUs, we have to determine how to handle pauses at the beginnings and endings of turns.",
"We decide to calculate pauses based on all words in the transcript, not just the words for a single speaker at a time.",
"This means that at a turn boundary, we calculate the pause as the time between the end of one speaker's turn and the beginning of the other speaker's turn.",
"If one speaker interrupts another, the pause duration has a negative value.",
"We place these negative-valued pauses in the same bin as pauses with length 0. Duration features are also extracted from the time-aligned transcript.",
"We are interested in the relative lengthening or shortening of word tokens, so we normalize the raw duration of each token.",
"Following the code base for Tran et al. (2019), we perform two different types of normalization.",
"In the first case, we normalize the token's raw duration by the mean duration of every instance of that word type.",
"In the second, we normalize the token's raw duration by the maximum duration of any word in the input unit (SU or turn).",
"These two normalization methods result in two duration features for each word token, which are concatenated and input to the model.",
"Pitch features (or more accurately, F0 features) are extracted from the speech signal using Kaldi (Povey et al., 2011).",
"These are extracted from 25ms frames every 10ms.",
"Three pitch features are extracted: warped Normalized Cross Correlation Function (NCCF); log-pitch with mean subtraction over a 1.5-second window, weighted by Probability of Voicing (POV); and the estimated derivative of the raw log pitch.",
"For further details on these features, see Ghahremani et al. (2014).",
"Intensity features are also extracted from the speech signal using the same software and frame size as we use for pitch features.",
"Starting with 40-dimensional mel-frequency filterbank features, we calculate three features: (1) the log of the total energy, normalized by the maximum total energy for the speaker over the course of the dialog; (2) the log of the total energy in the lower half of the 40 mel-frequency bands, normalized by the total energy; and (3) the log of the total energy in the upper half of the 40 mel-frequency bands, normalized by the total energy.",
"For training, development, and testing, we use the split described in Charniak and Johnson (2001), which is a standard split for experiments on SWBD-NXT (e.g., Kahn et al. (2005); Tran et al. (2018)).",
"The training set makes up 90 percent of the data, and the development and testing sets make up 5 percent each.",
"We use the parser described in Tran et al. (2019), directly extending the code base described in their paper.",
"parser based on Kitaev and Klein (2018)'s text-only parser, with a transformer-based encoder and a chart-style decoder based on Stern et al. (2017) and Gaddy et al. (2018).",
"This encoder-decoder is augmented with a CNN on the input side that handles prosodic features (Tran et al., 2019).",
"For further description of the model and hyperparameters, see Appendices A.1 and A.2.",
"The text is encoded using 300-dimensional GloVe embeddings (Pennington et al., 2014).",
"4 Of the four types of prosodic features described in Section 3, pause and duration features are already token-level.",
"However, pitch and intensity features are extracted from the speech signal at the frame level.",
"In order to map from these frame-level features to a token-level representation, the pitch and intensity features pass through a CNN, and are then concatenated with the token-level pause and duration features.",
"We follow Tran et al. (2019) in training each model 10 times with different random seeds.",
"For the development set, we report the mean of these 10 models' performance.",
"We then select the median model by development set performance, and use it to calculate test set results.",
"For any further experiments, such as those discussed in Section 6, we use the random seed for this median model.",
"Each model is trained for 50 epochs and use the epoch with highest development set performance.",
"The turn-based parser's task is also more com-R ETRACTED This paper was retracted.",
"In addition to this end-to-end approach, we also report results for a pipeline approach.",
"For the pipeline, we first segment the speech into SUs using a modified version of the parser architecture: We keep the encoder the same, but we change the decoder so that it only does sequence labelling, and we frame the SU segmentation task as a sequence labelling task.",
"We then use the SU-based parser to parse the resulting SUs.",
"We report the model's performance with and without prosodic features during the segmentation and parsing steps.",
"5 Results We compare the turn-based F1 performance of our parser to a replication of the SU-based performance described in Tran et al. (2018) and Tran et al. (2019).",
"Table 1 shows the development and test set results.",
"5 We find that the turn-based model benefits significantly from prosody.",
"The turn-based 4 See Appendix A.3 for results using BERT embeddings.",
"5 We use PyEvalb to evaluate our parser's performance, though we modify it so that it behaves identically to Evalb: https://github.com/ekayen/PYEVALB SU-based Turn-based Test set: Text only 90.29 86.56 Text+prosody 90.65 90.79 Dev.",
"model performs equivalently well to the SU-based model, despite doing two tasks instead of one.",
"The SU-based model also improves by 0.36 in F1 score on the test set with the addition of prosody.",
"Note that while prosody has a considerably larger effect on the turn-based model than on the SU based model, the exact size of this change will depend on the corpus.",
"For example, in a corpus with very few multi-SU turns, the performance change in the turn-based model might not be as large.",
"However, our results suggest that prosody helps when a model needs to both detect SU boundaries and parse SUs.",
"The biggest difference between the SUand turn-based models' performance on this corpus is in the text-only scenario, where the turn-based parser is substantially worse.",
"This is expected for a few reasons.",
"First, the text-only turn-based parser encounters longer inputs.",
"Longer inputs tend to lead to more parse errors simply because there are more ways to parse a longer string.",
"Table 2 shows this correspondence between length and performance.",
"The median length of turns in the development set is 9 tokens, while the median length of SUs is 6 tokens.",
"Longer strings are also more likely to contain the things that make parsing difficult, namely disfluencies and SU boundaries.",
"This gives the turn-based parser novel ways to make errors by splitting a turn into the wrong number of SUs.",
"However, prosody brings the turn-based parser up to the level of the SU-based parser, even though the turn-based model's task is more complex.",
"Table 5 shows how the text-only parser significantly overestimates the number of SU boundaries.",
"Without prosody, the model achieves an F1 score of 63.74 on SU prediction on the development set, compared to 99.41 with prosody (see Table 3).",
"The most comparable work on SWBD is Kahn and Ostendorf (2012), who achieved 78 F1 using a hidden-event model, where we use a much more powerful transformer model; however, their model used ASR transcripts as input, so these scores aren't directly comparable.",
"RETRACTED This paper was retracted.",
"We also test the pipeline model described in Section 4, which first segments turns into SUs and then parses them, both with and without prosody.",
"We train just one segmentation model with the same random seed as the median development set model.",
"We report the development set performance on segmentation (measured by segmentation F1 (Makhoul et al., 2000)) and parse F1 in Table 3. The text+prosody pipeline model achieves an F1 score of 99.71, which is statistically indistinguishable from the end-to-end text+prosody model.",
"In both cases, we see that the addition of prosody boosts SU segmentation accuracy to near-perfect levels, which explains why the parser performance is similar (and much better than without prosody).",
"Comparing the two text-only models reveals a more interesting pattern: while the pipeline model achieves much better segmentation F1, its parsing performance is worse.",
"This is unexpected, as parsing and segmentation performance are usually correlated.",
"This effect seems to arise because the two models err in different directions on segmentation: The pipeline model under-segments turns (corre-sponding to higher segmentation precision), while the end-to-end over-segments (higher recall, substantially lower precision).",
"When it over-segments, the end-to-end text-only model often splits a word or short constituent off of an otherwise well-formed SU subtree; by contrast, the pipeline model tends to leave two or more SUs combined and and then to generate many SU-internal parsing errors.",
"These SU-internal parsing errors include more coordination errors as well as VP, NP, and clause attachment errors than the end-to-end model.",
"6 However, the pipeline model does as well as the end-to-end model at PP attachment and modifier attachment.",
"Overall, these results show that a pipeline model can be as effective at parsing as an end-to-end one, but that including prosody is even more important for a pipeline model.",
"Since we care about parsing performance and the end-to-end text-only model does much better at parsing, we use the end-to-end model for all remaining analyses.",
"We use the Berkeley Parser Analyser (Kummer-feld et al., 2012) to determine what types of errors each of the SU-based and end-to-end turn-based models makes.",
"Figure 2 summarizes the output of the Analyser.",
"Overall, the SU-based parser shows only small effects from prosody, but the turn-based model does significantly worse on certain error types without prosody.",
"Even for the turn-based model, prosody only affects error types that have to do with the shape of the tree.",
"The different label category shows errors where two identically shaped trees have different constituent labels, and prosody has no effect on these.",
"For the turn-based model, poor SU segmentation by the text-only model explains some of the differences between the text+prosody and text-only models.",
"Since 68.8 percent of SUs are clauses (i.e., 6 We use the Berkeley Parser Analyser to analyze types of parse error (Kummerfeld et al., 2012).",
"Our turn-based model performs worse overall on disfluent turns than on fluent turns, which was also true of Tran et al. (2018)'s SU-based model.",
"Prosody also leads to a greater gain in F1 for disfluent turns than for fluent turns.",
"These differences in performance are shown in Table 4. The lower performance on disfluent sentences may be at least partially attributable to length differences: the median length of turns with disfluencies is 28 tokens, compared to 3 tokens for fluent turns, where we define a disfluent turn as any turn containing the constituent tag EDITED .",
"As discussed in Section 5, longer input generally leads to more parser errors, meaning that disfluent sentences are more likely to cause parser errors.",
"However, there are other reasons disfluencies are difficult for the turn-based model, as discussed in the following section.",
"One effect of disfluencies is that the text-only model tends to confuse certain kinds of disfluencies for SU boundaries, as illustrated in Figure 1. Table 5 shows that the text+prosody model largely avoids this confusion, and indeed can do so almost as well using only pitch or intensity features.",
"However, models using only pause or duration features are not good at distinguishing disfluencies from SU boundaries and predict boundaries too often.",
"These results largely concur with previous work describing the similarities and differences between prosodic features of disfluencies and SU boundaries (Shriberg, 2001; Wagner and Watson, 2010).",
"(and do not) accord with expectations.",
"The disfluencies that are relevant to this discussion include repetitions and restarts.",
"Examples of these from SWBD-NXT are shown here, with bracketing added for clarity: (3) Spurious repetition: it [may] may be at this point Restart: [but it's] but I think it's relatively unimportant In these examples, the text in square brackets is called the reparandum , which is immediately followed by the interruption point .",
"Disfluencies in SWBD-NXT are marked in the constituency parse annotation, where the reparandum is marked as a constituent with the label EDITED .",
"The interruption point is the right edge of this constituent.",
"Our model may be able to learn such temporal patterns, but even just looking at static pitch features re-R ETRACTED This paper was retracted.",
"Our analysis draws on the work of Shriberg (2001), who described the prosodic features of the interruption point and the reparandum based on an analysis of three English conversational and task-based dialogue corpora the Switchboard Corpus (which we use a subset of), ATIS (Hirschman, 1992), and AMEX (Kowtko and Price, 1989).",
"Pauses.",
"Although pauses may be the most intuitive potential cue to SU boundaries, previous work suggests that long pauses also characterize interruption points (Wagner and Watson, 2010; Shriberg, 2001).",
"Indeed, our analysis shows that longer pauses ( > 0 . 05 s ) are over-represented in both locations.",
"If pause types were distributed uniformly, 16 percent of both SU boundaries and interruption points would have a longer pause.",
"Instead, we find that 33 percent of SUs boundaries and 37 percent of interruption points have such pauses.",
"This explains why the pause-only model tends to confuse SU boundaries and interruption points.",
"Duration.",
"Shriberg (2001) found that both interruptions and SU boundaries are associated with lengthening of the immediately preceding syllable.",
"Lengthening before the interruption point may occur even if there are no other prosodic cues to the disfluency, and can be far greater than at SU boundaries (Shriberg, 2001, 161).",
"This type of lengthening is captured by our first duration feature, which measures the token duration normalized by the mean duration for its word type.",
"Like Shriberg (2001), we find that words preceding SU boundaries are lengthened on average (normalized duration: 1.18), and those preceding interruption points even more so (normalized duration: 1.41).",
"In prin-ciple, this extra lengthening could help the duration-only model distinguish SU boundaries from interruptions, but in practice the model is nearly as bad at distinguishing them as the text-only model.",
"The second duration feature is the token length normalized by the maximum length of any token in the input, to normalize for speaking rate.",
"Initially, this feature looks helpful: SU-final words have mean value of 0.86, while words directly before the interruption point have a mean of 0.50.",
"However, the feature mainly captures the number of phones in a word, since words with fewer phones including English function words tend to have shorter normalized duration.",
"It turns out that function words occur more often before interruption points than before SU boundaries: using NLTK's stopwords as a heuristic for function words, only 21.9 percent of development set SUs end in a function word, while the word before an interrutption point is a function word 51.6 percent of the time (Bird and Klein, 2009).",
"Since the second duration feature captures a lexical distinction that is already signalled in the text, it cannot help the duration-only model outperform the text-only model.",
"Pitch.",
"Based on previous work, our finding that pitch features are useful is not a surprise: the pitch contour before an interruption point is generally flat or slowly falling (Shriberg, 2001, 161), while SU boundaries are characterized by a boundary tone , generally corresponding to a fall or rise.",
"veals differences between boundaries and interruptions for two of the three features.",
"In particular, the mean warped NCCF value for pre-interruption point words is significantly higher than the value for SU-final words ( p < 0 . 001 ), though somewhat lower than the overall average value across the development set.",
"Meanwhile, the log-pitch with POV-weighted mean subtraction is significantly lower at interruption points than at SU boundaries ( p < 0 . 01 ).",
"These differences allow the pitch-only model to distinguish SU boundaries and interruption points much better than the pauseor duration-only models can (see Table 5).",
"Of these two pitch features, log-pitch is a more direct indicator of fundamental frequency (F0), which suggests that average perceived pitch is likely lower before disfluencies than before SU boundaries.",
"There could be several reasons for this difference.",
"For example, it could be that the flat or slowly falling tone of disfluencies that Shriberg (2001) describes has a lower average value than SU boundaries which can have either a fall or a rise (e.g., for certain kinds of questions).",
"However, examining pitch features across the whole corpus obscures more subtle distinctions such as different types of pitch contours.",
"Intensity.",
"We find that intensity features alone are enough to distinguish SU boundaries from interruption points, which is interesting because intensity has not been previously identified as an important cue: Shriberg (2001) doesn't note any particularly distinctive intensity features of the reparandum or interruption point, and work by Kim et al. (2006) on the Switchboard Corpus suggests that SU boundaries are correlated to lower intensity in some speakers, but that this isn't consistent across speakers.",
"The three intensity features correspond to overall energy, energy in the lower half of frequencies, and energy in the higher frequencies.",
"SU-final words have a significantly higher mean value for lower-frequency intensity than all other words ( p < 0 . 001 ), while words before the interruption point do not.",
"This systematic difference in one intensity feature seems to be part of how intensity features allow the model to consistently tell SU boundaries apart from disfluencies.",
"RETRACTED This paper was retracted.",
"Overall performance.",
"Given our claim that the main issue facing the text-only turn-based parser is distinguishing disfluencies from SU boundaries, it is not surprising that the two features that do best at this, pitch and intensity, also yield the highest overall performance.",
"Results are shown in Table 6.",
"7 Conclusion Our experiments show that parsing English speech transcriptions without gold SU boundaries is difficult for our parser: Its F1 score drops by about 4 percentage points compared to a model with gold SU boundaries.",
"Incorrect SU segmentation causes a large part of this damage, though other errors in tree construction also play a role.",
"We show that we can undo this damage by giving our parser prosodic information.",
"Importantly, prosody helps by allowing the parser to distinguish disfluencies from SU boundaries.",
"These results argue for giving prosodic information to parsers in deployed applications, where no SU boundary annotations are available, including dialog agents.",
"Furthermore, our experiments show that even limited prosodic features help a great deal: for our English data, pitch information alone is not significantly worse than pitch, intensity, pause, and word duration information combined.",
"This means that incorporating the right kind of prosodic information can potentially lead to significant gains.",
"We are very grateful to Trang Tran for help answering questions related to her code and to Mari Ostendorf for conversations that helped inspire this paper.",
"We would like to thank Korin Richmond, the ACL reviewers, and members of the AGORA research group at the University of Edinburgh for their feedback.",
"This work was supported by funding from Huawei and the project SEMANTAX, which received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 742137).",
"988 References Tanvirul Alam, Akib Khan, and Firoj Alam.",
"Edward Loper Bird, Steven and Ewan Klein.",
"2009.",
"Natural Language Processing with Python .",
"O'Reilly Media Inc.",
"Eugene Charniak and Mark Johnson.",
"2001.",
"Edit detection and parsing for transcribed speech.",
"In Second Meeting of the North American Chapter of the Association for Computational Linguistics .",
"Anne Cutler, Delphine Dahan, and Wilma van Donse-laar.",
"1997.",
"Prosody in the comprehension of spoken language: A literature review.",
"Language and Speech , 40(2):141201.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
"2019.",
"BERT: Pre-training of deep bidirectional transformers for language understanding.",
"In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , pages 41714186, Minneapolis, Minnesota.",
"Association for Computational Linguistics.",
"Marcello Federico, Sebastian Stuker, Luisa Bentivogli, Michael Paul, Mauro Cettolo, Teresa Herrmann, Jan Niehues, and Giovanni Moretti.",
"2012.",
"The iwslt 2011 evaluation campaign on automatic talk translation.",
"In International Conference on Language Resources and Evaluation (LREC) , pages 35433550.",
"RETRACTED This paper was retracted."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"method",
"method",
"method",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another.",
"This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks.",
"PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding.",
"Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models.",
"Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding.",
"Furthermore, analysis reveals that PLBART learns program syntax, style ( e.g., identifier naming convention), logical flow ( e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels even with limited annotations.",
"Engineers and developers write software programs in a programming language (PL) like Java, Python, etc., and often use natural language (NL) to communicate with each other.",
"Use of NL in software engineering ranges from writing documentation, commit messages, bug reports to seeking help in different forums ( e.g., Stack Overflow), etc.",
"Automating different software engineering applications, such as source code summarization, generation, and translation, heavily rely on the understanding of PL and NLwe collectively refer them as PLUG (stands for, Program and Language Understanding and Generation) applications or tasks.",
"Equal contribution.",
"Note that the use of NL in software development is quite different than colloquially written and spoken language.",
"For example, NL in software development often contains domain-specific jargon, e.g., when software engineers use Code Smell 1 , it means a potential problem in code (something other than Smell in regular English language).",
"In this work, our goal is to develop a general-purpose model that can be used in various PLUG applications.",
"Recent advancements in deep learning and the availability of large-scale PL and devel-opers' NL data ushered in the automation of PLUG applications.",
"One important aspect of PLUG applications is that they demand a profound understanding of program syntax and semantics and mutual dependencies between PL and NL.",
"For example, Figure 1 shows two implementations of the same algorithm (sorting) in two PL and corresponding NL summary.",
"An automatic translation tool must understand that function sorted in Python acts similar to Arrays.sort in Java and the lambda 1 https://en.wikipedia.org/wiki/Code_smell operation in Python is equivalent to instantiating a Comparator object in Java.",
"Similarly, a tool that summarizes either of these code must understand that x[0] in Python or Tuple.get(0) in Java refers to the first element in the tuple list.",
"Most of the available data in PL and NL are unlabeled and cannot be trivially used to acquire PLUG task-specific supervision.",
"However, PLUG tasks have a common prerequisite understanding PL and NL syntax and semantics.",
"Leveraging unlabelled data to pretrain a model to learn PL and NL representation can be transferred across PLUG tasks.",
"This approach reduces the requirement of having large-scale annotations for task-specific fine-tuning.",
"In recent years we have seen a colossal effort to pretrain models on a massive amount of unlabeled data ( e.g., text, images, videos) (Devlin et al., 2019; Liu et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020; Li et al., 2019; Sun et al., 2019) to transfer representation encoders across a wide variety of applications.",
"There are a few research effort in learning general purpose PL-NL representation encoders, such as CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2021) that are pretrained on a small-scale bimodal data (code-text pairs).",
"Such models have been found effective for PLUG tasks, including code search, code completion, etc.",
"Language generation tasks such as code summarization is modeled as sequence-to-sequence learning, where an encoder learns to encode the input code and a decoder generates the target summary.",
"Despite the effectiveness of existing methods, they do not have a pretrained decoder for language generation.",
"Therefore, they still require a large amount of parallel data to train the decoder.",
"To overcome this limitation, Lewis et al. (2020) proposed denoising sequence-to-sequence pre-training where a Transformer (Vaswani et al., 2017) learns to reconstruct an original text that is corrupted using an arbitrary noise function.",
"Very recently, Lachaux et al. (2020) studied denoising pre-training using a large-scale source code collection aiming at unsupervised program translation and found the approach useful.",
"This raises a natural question, can we unify pretraining for programming and natural language?",
"Presumably, to facilitate such pre-training, we need unlabeled NL text that is relevant to software development.",
"Note that unlike other bimodal scenarios ( e.g., vision and language), PL and associated NL text share the same alphabet or uses anchor tokens Java Python NL All Size 352 GB 224 GB 79 GB All Nb of tokens 36.4 B 28 B 6.7 B All Nb of documents 470 M 210 M 47 M Table 1: Statistics of the data used to pre-train PLBART.",
"( e.g., sort, list, tuple as shown in Figure 1) that can help to learn alignment between semantic spaces across languages.",
"We introduce PLBART (Program and Language BART), a bidirectional and autoregressive transformer pre-trained on unlabeled data across PL and NL to learn multilingual representations applicable to a broad spectrum of PLUG applications.",
"We evaluate PLBART on code summarization, generation, translation, program repair, clone detection, and vulnerability detection tasks.",
"Experiment results show that PLBART outperforms or rivals state-of-the-art methods, e.g., CodeBERT and GraphCodeBERT, demonstrating its promise on program understanding and generation.",
"We perform a thor-ough analysis to demonstrate that PLBART learns program syntax, logical data flow that is indispensable to program semantics, and excels even when limited annotations are available.",
"We release our code 2 to foster future research.",
"PLBARTPLBART uses denoising sequence-to-sequence pretraining to utilize unlabeled data in PL and NL.",
"Such pre-training lets PLBART reason about language syntax and semantics.",
"At the same time, PLBART learns to generate language coherently.",
"Data & pre-processing We pre-train PLBART on a large-collection of Java and Python functions and natural language descriptions from Github and StackOverflow, respectively.",
"We download all the GitHub repositories associated with Java and Python languages available on Google BigQuery.",
"3 We extract the Java and Python functions following the pre-processing pipeline from Lachaux et al. (2020).",
"We collect the StackOverflow posts (in-clude both questions and answers, exclude code 2 https://github.com/wasiahmad/PLBART 3 https://console.cloud.google.com/ marketplace/details/github/github-repos PLBART Encoder Input PLBART Decoder Output Is 0 the [MASK] Fibonacci [MASK] ?",
"snippets) by downloading the data dump (date: 7th September 2020) from stackexchange.",
"4 Statistics of the pre-training dataset are presented in Table",
"1. We tokenize all the data with a sentencepiece model (Kudo and Richardson, 2018) learned on 1/5'th of the pre-training data.",
"We train sentencepiece to learn 50,000 subword tokens.",
"One key challenge to aggregate data from different modalities is that some modalities may have more data, such as we have 14 times more data in PL than NL.",
"Therefore, we mix and up/down sample the data following Conneau and Lample (2019) to alleviate the bias towards PL.",
"We sample instances for pre-training according to a multinomial distribution with probabilities ( q 1 , q 2 , . . . , q N ) : q i = 1 p i p i Nj = 1 p j , p i = n i Nj = 1 n j , where N is the total number of languages and n i is the total number of instances in language i .",
"Architecture PLBART uses the same architecture as BART base (Lewis et al., 2020), it uses the sequence-to-sequence Transformer architecture (Vaswani et al., 2017), with 6 layers of encoder and 6 layers of decoder with model dimension of 768 and 12 heads ( 140M parameters).",
"The only exception is, we include an additional layer-normalization layer on top of both the encoder and decoder following Liu et al. (2020), which is found to stabilize training with FP16 precision.",
"Noise function, f In denoising autoencoding, a model learns to reconstruct an input text that is corrupted by a noise function.",
"Reconstruction of the original input requires the model to learn language syntax and semantics.",
"In this work, we use three noising strategies: token masking, token deletion, 4 https://archive.org/download/stackexchange and token infilling (Lewis et al., 2020).",
"According to the first two strategies, random tokens are sampled and replaced with a mask token or deleted from the input sequence.",
"In token infilling, a number of text spans are sampled and replaced with a single mask token.",
"The span lengths are drawn from a Poisson distribution ( = 3 . 5 ).",
"We mask 35% of the tokens in each instance.",
"Input/Output Format The input to the encoder is a noisy text sequence, while the input to the decoder is the original text with one position offset.",
"A language id symbol (e.g., <java>, <python>) is appended and prepended to the encoder and decoder inputs, respectively.",
"We provide a few examples in Table",
"2. The input instances are truncated if they exceed a maximum sequence length of 512.",
"Learning PLBART is pre-trained on N languages (in our case, N =3), where each language N i has a collection of unlabeled instances D i = { x 1 , . . . , x n i } .",
"Each instance is corrupted using the noise function f and we train PLBART to predict the original instance x from f ( x ) .",
"Formally, PLBART is trained to maximize L : L = N i = 1 m i j = 1 log P ( x j f ( x j ) ; ) where m i is the number of sampled instances in language i and the likelihood P is estimated following the standard sequence-to-sequence decoding.",
"Optimization We train PLBART on 8 Nvidia GeForce RTX 2080 Ti GPUs for 100K steps.",
"The effective batch size is maintained at 2048 instances.",
"We use Adam ( (cid:15) = 1e-6, 2 = 0.98) with a linear learning rate decay schedule for optimization.",
"We started the training with dropout 0.1 and reduced it to 0.05 at 50K steps and 0 at 80K steps.",
"This is done to help the model better fit the data (Liu et al., 2020).",
"The total training time was approximately PLBART Encoder Input PLBART Decoder Input S def maximum (a , b , c) : NEW_LINE INDENT return max ( [ a , b , c ] ) <python> <En> Find the maximum of three numbers G Find the maximum of three numbers <En> <java> public int maximum ( int a , int b , int c ) { return Math .",
"code summarization (S), generation (G), and translation (T).",
"276 hours (11.5 days).",
"All experiments are done using the Fairseq library (Ott et al., 2019).",
"We fine-tune PLBART for two broad categories downstream applications.",
"Sequence Generation PLBART has an encoder-decoder architecture where the decoder is capable of generating target sequences autoregressively.",
"Therefore, we can directly fine-tune PLBART on sequence generation tasks, such as code summarization, generation, and translation.",
"Unlike denoising pre-training, the source sequence is given as input to the encoder during fine-tuning, and the decoder generates the target sequence.",
"The source and target sequence can be a piece of code or text sequence.",
"Table 3 shows a few examples of input and output to and for PLBART for different generation tasks.",
"Note that PLBART prepends a language id to the decoded sequence; it enables fine-tuning PLBART in a multilingual setting ( e.g., code generation in multiple languages).",
"5 Sequence Classification We fine-tune PLBART on sequence classification tasks following Lewis et al. (2020).",
"The input sequence is fed into both the encoder and decoder.",
"For a pair of inputs, we concatenate them but insert a special token (</s>) between them.",
"A special token is added at the end of the input sequence.",
"This last token's representation from the final decoder layer is fed into a linear classifier for prediction.",
"Optimization We fine-tune PLBART for a maximum of 100K steps on all the downstream tasks with 2500 warm-up steps.",
"We set the maximum learning rate, effective batch size, and dropout rate to 3e-5, 32 and 0.1, respectively.",
"The final models are selected based on the validation BLEU (in generation task) or accuracy (in classification tasks).",
"5 We do not perform multilingual fine-tuning in this work.",
"Fine-tuning PLBART is carried out in one Nvidia GeForce RTX 2080 Ti GPU.",
"To understand PLBART's performance in a broader context, we evaluate PLBART on several tasks.",
"Our evaluation focuses on assessing PLBART's ability to capture rich semantics in source code and associated natural language text.",
"We divide the evaluation tasks into four categories.",
"The evaluation task datasets are summarized in Table 4.",
"We use CodeXGLUE (Lu et al., 2021) provided public dataset and corresponding train-validation-test splits for all the tasks.",
"Code Summarization refers to the task of generating a natural language (English) summary from a piece of code.",
"We fine-tune PLBART on summarizing source code written in six different programming languages, namely, Ruby, Javascript, Go, Python, Java, and PHP.",
"Code Generation is exactly the opposite of code summarization.",
"It refers to the task of generating a code (in a target PL) from its NL description.",
"We fine-tune PLBART on the Concode dataset (Iyer et al., 2018), where the input is a text describing class member functions in Java and class environment, the output is the target function.",
"Code Translation requires a model to generate an equivalent code in the target PL from the input code written in the source PL.",
"Note that the source and target PL can be the same.",
"Hence, we consider two types of tasks in this category.",
"The first task is a typical PL translation task, translating a code i.e., from Java code to C#, and vice versa.",
"In this task, the semantic meaning of the translated code should exactly match the input Task Dataset Language Train Valid Test Summarizaion Husain et al. (2019) Ruby 24,927 1,400 1,261 Javascript 58,025 3,885 3,291 Go 167,288 7,325 8,122 Python 251,820 13,914 14,918 Java 164,923 5,183 10,955 PHP 241,241 12,982 14,014 Generation Iyer et al. (2018) NL to Java 100,000 2,000 2,000 Translation Code-Code (Lu et al., 2021) Java to C# 10,300 500 1,000 C# to Java 10,300 500 1,000 Program Repair Java small 46,680 5,835 5,835 (Tufano et al., 2019) Java medium 52,364 6,545 6,545 Classification Vulnerability Detection C/C++ 21,854 2,732 2,732 (Zhou et al., 2019) Clone Detection Java 100,000 10,000 415,416 (Wang et al., 2020) Table 4: Statistics of the downstream benchmark datasets.",
"code.",
"Thus, this task evaluates PLBART's understanding of program semantics and syntax across PL.",
"The second task we consider is program repair.",
"In this task, the input is a buggy code, and the output is a modified version of the same code which fixes the bug.",
"This task helps us understand PLBART's ability to understand code semantics and apply semantic changes in the code.",
"Code Classification aims at predicting the target label given a single or a pair of source code.",
"We evaluate PLBART on two classification tasks.",
"The first task is clone detection, where given a pair of code, the goal is to determine whether they are clone of each other (similar to paraphrasing in NLP).",
"The second task is detecting whether a piece of code is vulnerable.",
"This task help us gauging PLBART's effectiveness in program understanding in an unseen PL since the code examples in this task are written in C/C++.",
"BLEU computes the n-gram overlap between a generated sequence and a collection of references.",
"We use corpus level BLEU (Papineni et al., 2002) score for all the generation tasks, except code summarization where we use smoothed BLEU-4 score (Lin and Och, 2004) following Feng et al. (2020).",
"CodeBLEU is a metric for measuring the quality of the synthesized code (Ren et al., 2020).",
"Unlike BLEU, CodeBLEU also considers grammatical and logical correctness based on the abstract syntax tree and the data-flow structure.",
"Exact Match (EM) evaluates if a generated sequence exactly matches the reference.",
"We compare PLBART with several state-of-the-art models and broadly divide them into two categories.",
"First, the models that are trained on the evaluation tasks from scratch, and second, the models that are pre-trained on unlabeled corpora and then finetuned on the evaluation tasks.",
"Seq2Seq (Luong et al., 2015) is an LSTM based Seq2Seq model with attention mechanism.",
"Vocabulary is constructed using byte-pair encoding.",
"Transformer (Vaswani et al., 2017) is the base architecture of PLBART and other pre-trained models.",
"Transformer baseline has the same number of parameters as PLBART.",
"Hence, a comparison with this baseline demonstrates the direct usefulness of pre-training PLBART.",
"As described in section 2, PLBART consists of an encoder and autoregressive decoder.",
"We compare PLBART on two categories of pre-trained models.",
"First, the encoder-only models ( e.g., RoBERTa, CodeBERT, and GraphCodeBERT) that are combined with a randomly initialized decoder for task-specific fine-tuning.",
"The second category of baselines include decoder-only models (CodeGPT) that can perform generation autoregressively.",
"RoBERTa, RoBERTa (code) are RoBERTa (Liu et al., 2019) model variants.",
"While RoBERTa is pre-trained on natural language, RoBERTa (code) is pre-trained on source code from CodeSearch-Net (Husain et al., 2019).",
"CodeBERT (Feng et al., 2020) combines masked language modeling (MLM) (Devlin et al., 2019) with replaced token detection objective (Clark et al., 2020) to pretrain a Transformer encoder.",
"GraphCodeBERT (Guo et al., 2021) is a concurrent work with this research which improved CodeBERT by modeling the data flow edges between code tokens.",
"We report GraphCodeBERT's performance directly from the paper since their implementation is not publicly available yet.",
"GPT-2, CodeGPT-2, and CodeGPT-adapted are GPT-style models.",
"While GPT-2 (Radford et al., 2019) is pretrained on NL corpora, CodeGPT-2 and CodeGPT-adapted are pretrained on CodeSearch-Net (Lu et al., 2021).",
"Note that, CodeGPT-adapted starts from the GPT-2 checkpoint for pre-training.",
"We aim to address the following questions.",
"1. Does PLBART learn strong program and language representations from unlabeled data?",
"2. Does PLBART learn program characteristics, e.g., syntax, style, and logical data flow?",
"3. How does PLBART perform in an unseen language with limited annotations?",
"Table 5 shows the result of code summarization.",
"PLBART outperforms the baseline methods in five out of the six programming languages with an overall average improvement of 0.49 BLEU-4 over CodeBERT.",
"The highest improvement ( 16%) is in the Ruby language, which has the smallest amount of training examples.",
"Unlike CodeBERT, PLBART is not pretrained on the Ruby language; however, Methods EM BLEU CodeBLEU Seq2Seq 3.05 21.31 26.39 Guo et al. (2019) 10.05 24.40 29.46 Iyer et al. (2019) 12.20 26.60 -GPT-2 17.35 25.37 29.69 CodeGPT-2 18.25 28.69 32.71 CodeGPT-adapted 20.10 32.79 35.98 PLBART 18.75 36.69 38.52 PLBART 10 K 17.25 31.40 33.32 PLBART 20 K 18.45 34.00 35.75 PLBART 50 K 17.70 35.02 37.11 Table 6: Results on text-to-code generation task using the CONCODE dataset (Iyer et al., 2018).",
"the significant performance improvement indicates that PLBART learns better generic program semantics.",
"In contrast, PLBART performs poorly in the PHP language.",
"The potential reason is syntax mismatch between the pre-trained languages and PHP.",
"Surprisingly, RoBERTa performs better than PLBART on the PHP language.",
"We suspect that since RoBERTa is pre-trained on natural language only, it does not suffer from the syntax mismatch issue.",
"Overall in comparison to the Transformer baseline, PLBART improves with an average of 2.76 BLEU-4, and we credit this improvement to the pre-training step.",
"Table 6 shows the evaluation result on code generation from NL description.",
"PLBART outperforms all the baselines in terms of BLEU and CodeBLEU.",
"While CodeGPT-adapted (Lu et al., 2021) achieves the best Exact Match (EM) score, PLBART outperforms CodeGPT-adapted by a large margin in terms of CodeBLEU.",
"This result implies that PLBART generates significantly more syntactically and logically correct code than all the baselines.",
"Figure 2 shows an example of code generated by PLBART.",
"The difference between the reference code and the generated code is in line 6 onward.",
"In the reference code, loc0 is returned, however Methods Java to C# C# to Java BLEU EM CodeBLEU BLEU EM CodeBLEU Naive Copy 18.54 0 34.20 18.69 0 43.04 PBSMT 43.53 12.50 42.71 40.06 16.10 43.48 Transformer 55.84 33.00 63.74 50.47 37.90 61.59 RoBERTa (code) 77.46 56.10 83.07 71.99 57.90 80.18 CodeBERT 79.92 59.00 85.10 72.14 58.80 79.41 GraphCodeBERT 80.58 59.40 -72.64 58.80 PLBART 83.02 64.60 87.92 78.35 65.00 85.27 Table 7: Results on source code translation using Java and C# language dataset introduced in (Lu et al., 2021).",
"Input text: returns the count to which the specified key is mapped in this frequency counter , or 0 if the map contains no mapping for this key .",
"that is syntactically and semantically valid, but does not match the reference.",
"same loc0 is returned in an else block in the generated code.",
"If we look closely, in the reference code, line 6 will be executed only if the condition in line 3 ( i.e., loc0 == null ) is false .",
"In the generated code, loc0 will be returned only if the condition in line 3 is false , making the generated code semantically equivalent to the reference code.",
"To study whether PLBART learns code syntax and logical flow during pre-training or fine-tuning, we perform an ablation study where we use subset of the training examples (10K, 20K, and 50K) to fintune PLBART in this task.",
"As table 6 shows, with only 10K examples, PLBART outperforms all baselines in terms of CodeBLUE.",
"This ablation shows that PLBART learns program syntax and data flow during pre-training, resulting in effective performance on downstream tasks even when finetuned on small number of examples.",
"As shown in prior works (Yin and Neubig, 2017; Chakraborty et al., 2020), generating syntactically and logically correct code has been a big challenge in program generation.",
"We conjecture that PLBART's large-scale denoising sequence-to-sequence pre-training helps understand program syntax and logical flow; therefore enables PLBART to generate syntactically and logically valid code.",
"Table 7 presents the evaluation results on code translation.",
"PLBART outperforms all the baselines w.r.t. EM, BLEU, and CodeBLEU.",
"PLBART improves over CodeBERT by 9.5% and 10.5% when translating from Java to C# and C# to Java, respectively.",
"Although PLBART is not pretrained on C# language, there is a significant syntactic and semantic similarity between Java and C#.",
"Thus PLBART understands C# language syntax and semantics.",
"However, such similarities are non-trivial, making the Naive copy and PBSMT perform very poorly in both the translation tasks.",
"Figure 3 shows an example where PLBART's generated C# code does not exactly match the reference; however, they are semantically equivalent.",
"In the reference, the else block (line 4-9) is equivalent to the else if block (line 4-7) in the generated code.",
"In addition, start is generated as function parameter and used in the function body, equivalent to start_1 in the reference code.",
"This further corroborates the syntactic understanding of PLBART and its ability to reason about the data flow in source code.",
"We present more qualitative examples in Appendix.",
"In the program repair task, both the input and the output are in the same language.",
"While the input is a buggy code, the output should be the target bug-free code.",
"Thus in this task, the exact match is the critical metric.",
"Nevertheless, as shown in table 8, PLBART can generate 17.13%, and 74.03% more correct bug fixes than CodeBERT in Java small and Java medium datasets, respectively.",
"On the other hand, PLBART performs comparably to GraphCodeBERT that uses structure-aware pre-training to learn program syntax and semantics.",
"In both clone detection and the vulnerability detection tasks, PLBART outperforms CodeBERT.",
"We present the results in Table 9.",
"In the vulnerability detection task, code semantics is the most critical feature (Zhou et al., 2019; Chakraborty et al., 2020).",
"Since PLBART is not pretrained on C/C++ language, its improved performance compared to the Transformer baseline is the testament that PLBART can identify semantics beyond the language syn-tax's specifics.",
"Moreover, PLBART's improved performances over CodeBERT and GraphCodeBERT confirms its effectiveness in program understanding in addition to its generation ability.",
"We acknowledge that neither PLBART nor CodeBERT is state-of-the-art in vulnerability detection, as graph-based models perform best in this task.",
"(ac-curacy) and clone detection (F1 score) tasks.",
"In this evaluation, our goal is to study how well PLBART understands program semantics in an unseen language for a different type of task (other than the generation, i.e., classification).",
"Pre-training for Language Understanding and Generation Transformer (Vaswani et al., 2017), a sequence-to-sequence architecture that includes an encoder and decoder, has shown tremendous promise in natural language processing (NLP), computer vision, software engineering, and more.",
"Devlin et al. (2019) first proposed to pre-train a large Transformer architecture, called BERT, to learn representations of natural language using large-scale unlabeled data in a self-supervised fashion.",
"Later, BERT's task-independent pre-training approach is rigorously studied (Devlin et al., 2019; Liu et al., 2019; Solaiman et al., 2019; Feng et al., 2020; Sun et al., 2019; Li et al., 2020).",
"While BERT-like models have shown effectiveness in learning contextualized representation, it is not very useful in generation tasks.",
"GPT (Radford et al., 2018) style models improve upon BERT for generative tasks with autoregressive pre-training; however, unlike BERT, they are not bidirectional.",
"Lewis et al. (2020) introduced BART, a denoising autoencoder that uses a bidirectional encoder and an auto-regressing decoder.",
"Similar to BART, PLBART uses denoising pre-training to cope with generative tasks and learns multilingual representations of programming and natural language jointly.",
"Deep Learning in Software Engineering There is a growing interest in automating software engineering (SE) using deep learning in the last few years.",
"Vast sources of code in open source repositories and forums make deep learning feasible for SE tasks.",
"Code Summarization (Movshovitz-Attias and Cohen, 2013; Allamanis et al., 2016; Iyer et al., 2016; Alon et al., 2019a; Hu et al., 2018; Harer et al., 2019; Ahmad et al., 2020), Bug Detection (Ray et al., 2016; Li et al., 2018b; Russell et al., 2018; Zhou et al., 2019; Chakraborty et al., 2020), Program Repair (Chen et al., 2019; Chakraborty et al., 2020; Lutellier et al., 2020), Code Translation (Chen et al., 2018; Drissi et al., 2018; Xu et al., 2020), Clone Detection (Zhang et al., 2019; Yu et al., 2019; Wang et al., 2020), Code completion (Li et al., 2018a; Hellendoorn and Devanbu, 2017; Parvez et al., 2018) are some of the tasks that are addressed with deep neural solution.",
"While most of the prior approaches use task-specific representation learning, a few works (Alon et al., 2019b; Feng et al., 2020; Guo et al., 2021; Lachaux et al., 2020; Clement et al., 2020) attempted to learn transferable representations in an unsupervised fashion.",
"More closely to our work, CodeBERT (Feng et al., 2020) is pre-trained on bimodal data to capture the semantic interaction between the input modalities ( i.e., program and natural languages).",
"More recently, GraphCodeBERT (Guo et al., 2021) improves upon CodeBERT by leveraging data flow in source code.",
"In contrast, PLBART is pre-trained on large-scale data using denoising autoencoding to learn the program and natural language representations that make it effective for a broad spectrum of software engineering tasks.",
"This paper presents PLBART, a sizeable pre-trained sequence-to-sequence model that can perform program and language understanding and generation tasks.",
"PLBART achieves state-of-the-art performance on various downstream software engineering tasks, including code summarization, code generation, and code translation.",
"Furthermore, experiments on discriminative tasks establish PLBART's effectiveness on program understanding.",
"We also show that PLBART learns crucial program characteristics due to pre-training, such as syntax, identifier naming conventions, data flow.",
"In the future, we want to explore ways to fine-tune PLBART on all the downstream tasks jointly.",
"Automation in software engineering is paramount in increasing programmers' productivity.",
"A reduced workload of tedious works at the part of de-velopers' daily routine would give them more time to solve significant problems for society's wellbeing.",
"There are numerous program-and-language applications in the software development lifecycle, such as code documentation/summarization, code synthesis, translating code across languages, etc that can be automated to facilitate software engineering.",
"The availability of large-scale data (thanks to open source repositories, forums, and millions of contributors worldwide) opens up the opportunity to solve many of those problems in a data-driven fashion.",
"PLBART aims at program-and-language applications that demand a complete syntactic and semantic understanding of source code and associated textual data.",
"For the tasks we have shown evaluation, PLBART will serve as a solid and replicable baseline to guide future research.",
"We also believe our work could be an excellent starting point for future works aim at solving a variety of software engineering problems.",
"We thank anonymous reviewers for their helpful feedback.",
"We also thank UCLA-NLP group for helpful discussions and comments.",
"This work was supported in part by National Science Foundation Grant OAC 1920462, CCF 1845893, CCF 1822965, CNS 1842456.",
"Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors, and do not necessarily reflect those of the US Government or NSF."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"NER model has achieved promising performance on standard NER benchmarks.",
"However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition.",
"In this work, we propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective.",
"The proposed approach contains two mutual information-based training objectives:",
"i) generalizing information maximization, which enhances representation via deep understanding of context and entity surface forms;",
"ii) superfluous information minimization, which discourages representation from rote memorizing entity names or exploiting biased cues in data.",
"Experiments on various settings and datasets demonstrate that it achieves better performance in predicting OOV entities.",
"Named Entity Recognition (NER) aims to identify and classify entity mentions from unstructured text, e.g., extracting location mention \"Berlin\" from the sentence \"Berlin is wonderful in the winter\".",
"NER is a key component in information retrieval (Tan et al., 2021), question answering (Min et al., 2021), dialog systems (Wang et al., 2020), etc.",
"Traditional NER models are feature-engineering and machine learning based (Zhou and Su, 2002; Takeuchi and Collier, 2002; Agerri and Rigau, 2016).",
"Benefiting from the development of deep learning, neural-network-based NER models have achieved state-of-the-art results on several public benchmarks (Lample et al., 2016; Peters et al., 2018; Devlin et al., 2018; Yamada et al., 2020; Yan et al., 2021).",
"of NER models, but the main factor driving high performance is learning the named tokens themselves.",
"Consequently, NER models underperform when predicting entities that have not been seen during training (Fu et al., 2020; Lin et al., 2020), which is referred to as an Out-of-Vocabulary (OOV) problem.",
"There are three classical strategies to alleviate the OOV problem: external knowledge, OOV word embedding, and contextualized embedding.",
"The first one is to introduce additional features, e.g., entity lexicons (Zhang and Yang, 2018), part-of-speech tags (Li et al., 2018), which alleviates the model's dependence on word embeddings.",
"However, the external knowledge is not always easy to obtain.",
"The second strategy is to get a better OOV word embedding (Peng et al., 2019; Fukuda et al., 2020).",
"The strategy is learning a static OOV embedding representation, but not directly utilizing the context.",
"Last one is fine-tune pre-trained models, e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2018), which provide contextualized word representations.",
"Unfortunately, Agarwal et al. (2021) shows that the higher performance of pretrained models could be the results of learning the subword structure better.",
"information to tackle the OOV problem?",
"Motivated by the information bottleneck principle (Tishby et al., 2000), we propose a novel learning framework Mutual Information based Named Entity Recognition (MINER).",
"The proposed method provides an information-theoretic perspective to the OOV problem by training an encoder to minimize task-irrelevant nuisances while keeping predictive information.",
"Specifically, MINER contains two mutual information based learning objectives:",
"i) generalizing information maximization, which aims to maximize the mutual information between representations and well-generalizing features, i.e., context and entity surface forms;",
"ii) superfluous information minimization, which prevents the model from rote memorizing the entity names or exploiting biased cues via eliminating entity name information.",
"Our codes 1 are publicly available.",
"1. We propose a novel learning framework, i.e., MINER, from an information theory perspective, aiming to improve the robustness of entity changes by eliminating entity-specific and maximizing well-generalizing information.",
"2. We show its effectiveness on several settings and benchmarks, and suggest that MINER is a reliable approach to better OOV entity recognition.",
"In this section, we highlight the information bottleneck principle.",
"Subsequently, the analysis of possible issues was provided when applying it to OOV entity recognition.",
"Furthermore, we review related techniques in deriving our framework.",
"Information Bottleneck (IB) principle originated in information theory, and provides a theoretical framework for analyzing deep neural networks.",
"It formulates the goal of representation learning as an information trade-off between predictive power and representation compression.",
"Given the input dataset (X,Y), it seeks to learn the internal representation Z of some intermediate layers by: LIB = I ( Z ; Y ) + I ( Z ; X ) , where I represents the mutual information(MI), a measure of the mutual dependence between the two variables.",
"The trade-off between the two MI terms 1 https://github.com/BeyonderXX/MINER is controlled by the Lagrange multiplier .",
"A low loss indicates that representation Z does not keep too much information from X while still retaining enough information to predict Y. Section 5 suggests that directly applying IB to NER can not bring obvious improvement.",
"We argue that IB cannot guarantee well-generalizing representation.",
"On the one hand, it has been shown that it is challenging to find a trade-off between high compression and high predictive power (Tishby et al., 2000; Wang et al., 2019; Piran et al., 2020).",
"When compressing task-irrelevant nuisances, however, useful information will inevitably be left out.",
"On the other hand, it is unclear for the IB principle which parts of features are well-generalizing and which are not, as we usually train a classifier to solely maximize accuracy.",
"Consequently, neural networks tend to use any accessible signal to do so (Ilyas et al., 2019), which is referred to as a shortcut learning problem (Geirhos et al., 2020).",
"For training sets with limited size, it may be easier for neural networks to memorize entity names rather than to classify them by context and common entity features (Agarwal et al., 2021).",
"In Section 4, we demonstrate how we extend IB to the NER task and address these issues.",
"In recent years, NER systems have undergone a paradigm shift from sequence labeling, which formulates NER as a token-level tagging task (Chiu and Nichols, 2016; Akbik et al., 2018; Yan et al., 2019), to span prediction (SpanNER), which regards NER as a span-level classification task (Mengge et al., 2020; Yamada et al., 2020; Fu et al., 2021).",
"We choose SpanNER as base architecture for two reasons: 1) SpanNER can yield the whole span representation, which can be directly used for optimize information.",
"2) Compared with sequence labeling, SpanNER does better in sentences with more OOV words (Fu et al., 2021).",
"Overall, SpanNER consists of three major modules: token representation layer, span representation layer, and span classification layer.",
"Besides, our method inserts a bottleneck layer to the architecture for information optimization.",
"Let X = { x 1 , x 2 , , x n } represents the input sentence, thus, the token representation h i is as follows:",
"u 1 , , u n = Embedding ( x 1 , , x n ) (1) h 1 , , h n = Encoder ( u 1 , , u n ) (2)",
"where Embedding () is the non-contextualized word embeddings, e.g., Glove (Pennington et al., 2014) or contextualized word embeddings, e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2018).",
"Encoder () can be any network structures with context encoding function, e.g., LSTM (Hochreiter and Schmidhuber, 1997), CNN (LeCun et al., 1995), transformer (Vaswani et al., 2017), and so on.",
"For all possible spans S = { s 1 , s 2 , , s m } of sentence X , we re-assign a label y Y for each span.",
"Take \"Berlin is wonderful\" as an example, its possible spans and labels are { (1 , 1) , (1 , 2) , (1 , 3) , (2 , 2) , (2 , 3) , (3 , 3) } and { LOC, O, O, O, O, O } , respectively.",
"Given the start index b i and end index e i , the representation of span s i can be calculated by two parts: boundary embedding and span length embedding.",
"bi i i",
"Span length embedding : In order to introduce the length feature, we additionally provide the length embedding t li , which can be obtained by a learnable look-up table.",
"In order to optimize the information in the span representation, our method additionally adds an information bottleneck layer of the form:",
"where f e is an MLP which outputs both the K-dimensional mean of z as well as the K K covariance matrix .",
"Then we can use the reparam-eterization trick ((Kingma and Welling, 2013)) to get the compressed representation z i .",
"Once the information bottleneck layer is finished, z i is fed into the classifier to obtain the probability of its label y i .",
"Based on the probability, the basic loss function can be calculated as follows: L base = score ( z i , y i ) (cid:80) y Y score ( z i , y ) , (4) where score () is a function that measures the compatibility between a specified label and a span representation: score ( z i , y k ) = exp ( z Ti y k ) , (5) where y k is a learnable representation of class k.",
"Heuristic Decoding A heuristic decoding solution for the flat NER is provided to avoid the prediction of over-lapped spans.",
"For those overlapped spans, we keep the span with the highest prediction probability and drop the others.",
"It's worth noting that our method is flexible and can be used with any other NER model based on span classification.",
"In next section, we will introduce two additional objectives to tackle the OOV problem of NER.",
"Motivated by IB (Tishby et al., 2000; Federici et al., 2020), we can subdivide I ( X ; Z ) into two components by using the chain rule of mutual information(MI):",
"The first term determines how much information about Y is accessible from Z. While the second term, conditional mutual information term I ( X ; Z | Y ) , denotes the information in Z that is not predictive of Y .",
"For NER, which parts of the information retrieved from input are useful and which are redundant?",
"From human intuition, text context should be the main predictive information for NER.",
"For example, \"The CEO of X resigned\", the type of X in each of these contexts should always be \"ORG\".",
"Besides, entity mentions also provide much information for entity recognition.",
"For example, nearly all person names capitalize the first letter and follow the \"firstName lastName\" or \"lastName 5592 ! ! \" Encoder Encoder Shared IB Shared !",
"firstName\" patterns.",
"However, entity name is not a well-generalizing features.",
"By simply memorizing the fact which span is an entity, it may be possible for it to fit the training set, but it is impossible to predict entities that have never been seen before.",
"We convert the targets of Eq.",
"(6) into a form that is easier to solve via a contrastive strategy.",
"Specifically, consider x 1 and x 2 are two contrastive samples of similar context, and contains different entity mentions of the same entity category, i.e., s 1 and s 2 , respectively.",
"Assuming both x 1 and x 2 are both sufficient for inferring label y .",
"The mutual information between x 1 and z 1 can be factorized to two parts.",
"where z 1 and z 2 are span representations of s 1 and s 2 , respectively, I ( z 1 ; x 2 ) denotes the information that isn't entity-specific.",
"And I ( x 1 ; z 1 | x 2 ) represents the information in z 1 which is unique to x 1 but is not predictable by sentence x 2 , i.e., entity-specific information.",
"Thus any representation z containing all information shared from both sentences would also contain the necessary label information, and sentence-specific information is superfluous.",
"So Eq.",
"(6) can be approximated by Eq.",
"(7) by: maximize I ( z 1 ; y ) I ( z 1 ; x 2 ) , (8) minimize I ( x 1 ; z 1 | y ) I ( x 1 ; z 1 | x 2 ) , (9) The target of Eq.",
"generalizing information maximization.",
"We proved that I ( z 1 ; z 2 ) is a lower bound of I ( z 1 ; x 2 ) (proof could be found in appendix 7).",
"InfoNCE (Oord et al., 2018) was used as a lower bound on MI and can be used to approximate I ( z 1 ; z 2 ) .",
"Subsequently, it can be optimized by: L gi = E p (cid:34) g w ( z 1 , z 2 ) E p log (cid:88) z exp g w ( z 1 , z ) (cid:35) , (10) where g w ( , ) is a compatible score function approximated by a neural network, z 2 are the positive entity representations from the joint distribution p of original sample and corresponding generated sample, z are the negative entity representations drawn from the joint distribution of the original sample and other samples.",
"The target of Eq.",
"(9) is defined as superfluous information minimization.",
"To restrict this term, we can minimize an upper bound of I ( x 1 ; z 1 | x 2 ) (proofs could be found in appendix 7) as follows: L si = E x 1 ,x 2 E z 1 ,z 2 [ DJS [ p z 1 || p z 2 ]] , (11) where DJS means Jensen-Shannon divergence, p z 1 and p z 2 represent p ( z 1 | x 1 ) and p ( z 2 | x 2 ) , respectively.",
"In practice, Eq.",
"(11) encourage z to be invariant to entity changes.",
"The resulting Mutual Information based Named Entity Recognition model is visualized in Figure",
"1. 4.1 Contrastive sample generation It is difficult to obtain samples with similar contexts but different entity words.",
"contrastive samples by the mention replacement mechanism(Dai and Adel, 2020).",
"For each mention in the sentence, we replace it by another mention from the original training set, which has the same entity type.",
"The corresponding span label can be changed accordingly.",
"For example, \"LOC\" mention \"Berlin\" in sentence \"Berlin is wonderful in the winter\" is replaced by \"Iceland\".",
"where and are the weights of the generalizing information loss and superfluous information loss, respectively.",
"In this section, we verify the performance of the proposed method on five OOV datasets, and compared it with other methods.",
"In addition, We tested the universality of the proposed method in various pre-trained models.",
"1. WNUT2017 (Derczynski et al., 2017), a dataset focus on unusual, previous-unseen entities in training data, and is collected from social media.",
"2. TwitterNER (Zhang et al., 2018), an English NER dataset created from Tweets.",
"3. BioNER (Kim et al., 2004), the JNLPBA 2004 Bio-NER dataset focus on technical terms in the biology domain.",
"4. Conll03-Typos (Wang et al., 2021), which is generated from Conll2003 (Sang and De Meulder, 2003).",
"The entities in the test set are replaced by typos version(character modify, insert, and delete operation).",
"5. Conll03-OOV (Wang et al., 2021), which is generated from Conll2003 (Sang and De Meulder, 2003).",
"The entities in the test set are replaced by another out-of-vocabulary entity in test set.",
"Table 2 reports the statistic results of the OOV problem on the test sets of each dataset.",
"As shown in the table, the test set of these datasets comprises a substantial amount of OOV entities.",
"Metrics We measured the entity-level micro average F1 score on the test set to compare the results of different models.",
"Li et al. (2020) share the same intuition as us, enriching word representations with context.",
"However, the work is neither open source nor reported on the same dataset, so this method cannot be compared with MINER.",
"We compare our method with baselines as follows: Fu et al. (2021) (SpanNER), which is trained by original SpanNER framework, without any constraint and extra data processing.",
"Vanilla information bottleneck(VaniIB), a method employs the original information bottleneck constraint to the SpanNER, which is optimized based on Alemi et al. (2016).",
"Compared with our method, it directly compresses all the information from the input.",
"Dai and Adel (2020) (DataAug) , which trains model with data augmentation strategy, while keeps the same model architecture as SpanNER.",
"This model is trained by 1:1 original training set and entity replacement training set, which keeps the same input as the proposed method.",
"Shahzad et al. (2021) (InferNER), a method focus on word-, character-, and sentence-level information for NER in short-text, without recurring to external sources.",
"In addition, it is able to incorporate visual information and introduce an attention component which computes attention weight probabilities over textual and text-relevant visual contexts separately.",
"Li et al. (2021) (MIN), which utilizes both segment-level information and word-level dependencies, and incorporates an interaction mechanism to support information sharing between boundary detection and type prediction, enhancing the performance for the NER task.",
"Fukuda et al. (2020) (CoFEE), which refer to pre-trained word embeddings for known words with similar surfaces to target OOV words.",
"Nie et al. (2020) (SA-NER), which utilize semantic enhancement methods to reduce the negative impact of data sparsity problems.",
"Specifically, the method obtains the augmented semantic information from a large-scale corpus, and proposes an attentive semantic augmentation module and a gate module to encode and aggregate such information, respectively.",
"To verify the universality of our method, we measured its performance on various pre-trained models, i.e., Bert (Devlin et al., 2018), Roberta (Liu et al., 2019), Albert (Lan et al., 2019).",
"Bert-large released by Devlin et al. (2018) is selected as our base encoder.",
"The learning rate is set to 5e-5, and the dropout is set to 0.2.",
"The output dim of the information bottleneck layer is 50.",
"In order to make a trade-off for the performance and efficiency, on the one hand, we truncate the part of the sentence whose tokens exceeds 128.",
"On the other hand, we count the length distribution of entity length in different datasets, and finally choose 4 as the maximum enumerated entity length.",
"The values of and differ for different datasets.",
"Empirically, 1e-5 for and 0.01 for can get promised results.",
"The model is trained in an NVIDIA GeForce RTX 2080Ti GPU.",
"Checkpoints with top-3 performance are finally evaluated on the test set to report averaged results.",
"We demonstrate the effectiveness of MINER against other state-of-the-art models.",
"As shown in table 3, we conducted the following comparison and analysis: 1) Our baseline model, i.e., SpanNER, does an excellent job of predicting OOV entities.",
"Compared with sequence labeling, the span classification could model the relation of entity tokens directly;2) The performance of SpanNER is further boosted with our proposed approach, which proved the effectiveness of our method.",
"As shown in table 3, MINER almost outperforms all other SOTA methods without any external resource;3) Compared with Typos data transformation, it is more difficult for models to predict OOV words.",
"To pre-trained model, typos word may not appear in training set, but they share most subwords with the original token.",
"Moreover, the subword of OOV entity may be rare; 4) It seems that the traditional information bottleneck will not significantly improve the OOV prediction ability of the model.",
"We argue that the traditional information bottlenecks will indiscriminately compress the information in the representation, leading to underfitting; 5) Our model has significantly improved the performance of the model on the entity perturbed methods of typos and OOV, demonstrating that MI improve the robustness substantially in the face of noise; 6) It is clear that our proposed method is universal and can further improve OOV prediction performance for different embedding models, as we get improvements on Bert, Roberta, and Albert stably.",
"We also perform ablation studies to validate the effectiveness of each part in MINER.",
"Table 4 Dataset OOV MI F1 WNUT 2017 -51.83 -52.57 53.91 54.52 BioNER -73.78 -75.23 74.22 77.03 Twitter-NER -71.57 -73.78 73.32 75.26 Table 4: Ablation study results on three datasets.",
"demonstrates the results of different settings for the proposed training strategy equipped with BERT.",
"After only adding the L gi loss to enhance context and entity surface form information, we find that the results are better than the original PLMs.",
"A similar phenomenon occurs in L si , too.",
"It reflects that both L gi and L si are beneficial to improve the generalizing ability on OOV entities recognition.",
"Moreover, the results on the three datasets are significantly improved by adding both L gi and L si learning objectives.",
"It means L gi and L si can boost each over, which proves that our method enhances representation via deep understanding of context and entity surface forms and discourages representation from rote memorizing entity names or exploiting biased cues in data.",
"To show the different influence of our proposed training objectives L gi and L si , we conduct sensitivity analysis of the coefficient and .",
"Figure 2 shows the performance change under different settings of the two coefficients.",
"The yellow line denotes ablation results without the corresponding loss functions (with =0 or =0).",
"From Figure 2 we can observe that the performance is significantly enhanced with a small rate of or , where the best performance is achieved when =1e-3 and =1e-4, respectively.",
"It probes the effectiveness of our proposed training objectives that enhances representation via deep understanding of context and entity surface forms and discourages representation from rote memorizing entity names or exploiting biased cues in data.",
"As the coefficient rate increases continuously, the performance shows a declining trend, which means the over-constraint of L gi or L si will hurt the generalizing ability of predicting 5596 Chicago piling Street but Chicago but SpanNER fans are p i ling fans are i n to i n to State State Street in the r a in in the r a in MINERLOC Figure 4: Visualization of attention weights over entities and context.",
"The above experiments show the promising performance of MINER on predicting the unseen entities.",
"To further investigate which part of the sentence MINER focuses on, we visualize the attention weights over entities and contexts.",
"We demonstrate an example in Figure 4 , where is selected from TwitterNER.",
"The attention score is calculated by averaging the attention weight of the 0th layer of BERT.",
"Take the attention weights of the entity \"State Street\" as an example, it is obvious that baseline model, i.e., SpanNER, focus on entity words themselves.",
"While the scores of our model are more average, it means that our method concerns more context information.",
"This group of methods makes it easier to predict OOV entities using external knowledge.",
"Zhang and Yang (2018) utilize a dictionary to list numerous entity mentions.",
"It is possible to get stronger \"look-up\" models by integrating dictionary information, but there is no guarantee that entities outside the training set and vocabulary will be correctly identified.",
"To diminish the model's dependency on OOV embedding, Li et al. (2018) introduce part-of-speech tags.",
"External resources are not always available, which is a limitation of this strategy.",
"The OOV problem can be alleviated by improving the OOV word embedding.",
"The character ngram of each word is used by Bojanowski et al. (2017) to represent the OOV word embedding.",
"Pinter et al. (2017) captures morphological features using character-level RNN.",
"Another technique is to first match the OOV words with the words that have been seen in training, then replace the OOV words' embedding with the seen words' embedding.",
"Peng et al. (2019) trains a student network to predict the closest word representation to the OOV term.",
"Fukuda et al. (2020) referring to pre-trained word embeddings for known words with similar surfaces to target OOV words.",
"This kind of method is learning a static OOV embedding representation, and does not directly utilize the context.",
"Contextual information is used to enhance the representation of OOV words in this strategy.",
"(Hu et al., 2019) formulate the OOV problem as a K-shot regression problem and learns to predict the OOV embedding by aggregating only K contexts and morphological features.",
"Pre-trained models contextualized word embeddings via pretraining on large background corpora.",
"Furthermore, contextualized word embeddings can be provided by the pre-trained models, which are pre-trained on large background corpora (Peters et al., 2018; Devlin et al., 2018; Liu et al., 2019).",
"Yan et al. (2021) shows that BERT is not always better at capturing context as compared to Gloe-based BiLSTM-CRFs.",
"Their higher performance could be the result of learning the subword structure better.",
"Based on the recent studies of NER, we analyze how to improve the OOV entity recognition.",
"In this work, we propose a novel and flexible learning framework MINER, to tackle OOV entities recognition issue from an information-theoretic perspective.",
"On the one hand, this method can enhance the context information of the output of the encoder.",
"On the other hand, it can safely eliminate task-irrelevant nuisances and prevents the model from rote memorizing the entities.",
"Specifically, the proposed approach contains two mutual information based training objectives: generalizing information maximization, and superfluous information minimization.",
"Experiments on various datasets demonstrate that MINER achieves much better performance in predicting out-of-vocabulary entities.",
"The authors would like to thank the anonymous reviewers for their helpful comments, Ting Wu and Yiding Tan for their early contribution.",
"This work was partially funded by China National Key RD Program (No. 2018YFB1005104), National Natural Science Foundation of China (No. 62076069, 61976056).",
"This research was sponsored by Hikvision Cooperation Fund, Beijing Academy of Artificial Intelligence(BAAI), and CAAI-Huawei MindSpore Open Fund."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Style transfer is the task of rewriting a sentence into a target style while approximately preserving content.",
"While most prior literature assumes access to a large style-labelled corpus, recent work (Riley et al., 2021) has attempted few-shot style transfer using just 3-10 sentences at inference for style extraction.",
"In this work, we study a relevant low-resource setting: style transfer for languages where no style-labelled corpora are available.",
"We notice that existing few-shot methods perform this task poorly, often copying inputs verbatim .",
"We push the state-of-the-art for few-shot style transfer with a new method modeling the stylistic difference between paraphrases.",
"When compared to prior work, our model achieves 2-3x better performance in formality transfer and code-mixing addition across seven languages.",
"Moreover, our method is better at controlling the style transfer magnitude using an input scalar knob.",
"We report promising qualitative results for several attribute transfer tasks (sentiment transfer, simplification, gender neutralization, text anonymiza-tion) all without retraining the model .",
"Finally, we find model evaluation to be difficult due to the lack of datasets and metrics for many languages.",
"To facilitate future research we crowdsource formality annotations for 4000 sentence pairs in four Indic languages, and use this data to design our automatic evaluations.",
"1 1 Introduction Style transfer is a natural language generation task in which input sentences need to be re-written into a target style, while preserving semantics.",
"It has many applications such as writing assistance (Hei-dorn, 2000), controlling generation for attributes 1 Please visit the project page for the paper resources: https://martiansideofthemoon.github.io/2022/03/03/acl22.html .",
"like simplicity, formality or persuasion (Xu et al., 2015; Smith et al., 2020; Niu and Carpuat, 2020), data augmentation (Xie et al., 2020; Lee et al., 2021), and author obfuscation (Shetty et al., 2018).",
"Most prior work either assumes access to supervised data with parallel sentences between the two styles (Jhamtani et al., 2017), or access to a large corpus of unpaired sentences with style labels (Prabhumoye et al., 2018; Subramanian et al., 2019).",
"Models built are style-specific and cannot generalize to new styles during inference, which is needed for applications like real-time adaptation to a user's style in a dialog or writing application.",
"Moreover, access to a large unpaired corpus with style labels is a strong assumption.",
"Most standard unpaired style transfer datasets have been carefully curated (Shen et al., 2017) or were originally parallel (Xu et al., 2012; Rao and Tetreault, 2018).",
"This is especially relevant in settings outside En-7439 glish, where NLP tools and labelled datasets are largely underdeveloped (Joshi et al., 2020).",
"In this work, we take the first steps studying style transfer in seven languages 2 with nearly 1.5 billion speakers in total.",
"Since no training data exists for these languages, we analyzed the current state-of-the-art in few-shot multilingual style transfer, the Universal Rewriter ( UR ) from Garcia et al. (2021).",
"Unfortunately, we find it often copies the inputs verbatim (Section 3.1), without changing their style .",
"We propose a simple inference-time trick of style-controlled translation through English, which improves the UR output diversity (Section 4.1).",
"To further boost performance we propose DIFFUR , 3 a novel algorithm using the recent finding that paraphrasing leads to stylistic changes (Krishna et al., 2020).",
"DIFFUR extracts edit vectors from paraphrase pairs, which are used to condition and train the model (Figure 2).",
"On formality transfer and code-mixing addition, our best performing DIFFUR variant significantly outperforms UR across all languages (by 2-3x) using automatic & human evaluation.",
"Besides better rewriting, our system is better able to control the style transfer magnitude (Figure 1).",
"A scalar knob ( ) can be adjusted to make the output text reflect the target style (pro-vided by exemplars) more or less.",
"We also observe promising qualitative results in several attribute transfer directions (Section 6.2) including sentiment transfer, simplification, gender neutralization and text anonymization, all without retraining the model and using just 3-10 examples at inference.",
"Finally, we found it hard to precisely evaluate models due to the lack of evaluation datasets and style classifiers (often used as metrics) for many languages.",
"To facilitate further research in Indic formality transfer, we crowdsource formality annotations for 4000 sentence pairs in four Indic languages (Section 5.1), and use this dataset to design the automatic evaluation suite (Section 5).",
"In summary, our contributions provide an end-to-end recipe for developing and evaluating style transfer models and evaluation in a low-resource setting.",
"Few-shot methods are a recent development in English style transfer, with prior work using variational autoencoders (Xu et al., 2020), or prompting large pretrained language models at inference (Reif",
"et al., 2021).",
"Most related is the state-of-the-art TextSETTR model from Riley et al. (2021), who use a neural style encoder to map exemplar sentences to a vector used to guide generation.",
"To train this encoder, they use the idea that adjacent sentences in a document have a similar style.",
"Recently, the Universal Rewriter (Garcia et al., 2021) extended TextSETTR to 101 languages, developing a joint model for translation, few-shot style transfer and stylized translation.",
"This model is the only prior few-shot system we found outside English, and our main baseline.",
"We discuss its shortcomings in Section 3.1, and propose fixes in Section",
"4. Multilingual style transfer is mostly unexplored in prior work: a 35 paper survey by Briakou et al. (2021b) found only one work in Chinese, Russian, Latvian, Estonian, French.",
"They further introduced XFORMAL, the first formality transfer evaluation dataset in French, Brazilian Portugese and Italian.",
"4 To the best of our knowledge, we are the first to study style transfer for the languages we consider.",
"More related work from Hindi linguistics and on style transfer control is provided in Appendix B. 3 The Universal Rewriter ( UR ) model We will start by discussing the Universal Rewriter ( UR ) model from Garcia et al. (2021), upon which our proposed DIFFUR model is built.",
"At a high level, the UR model extracts a style vector s from an exemplar sentence e , which reflects the desired target style.",
"This style vector is used to style transfer an input sentence x .",
"Concretely, consider f enc , f dec to be encoder & decoder Transformers initialized with mT5 (Xue et al., 2021b), which are composed to form the model f ur .",
"The UR model extracts the style vector using the encoder representation of a special [CLS] token prepended to e , and adds it to the input x representations for style transfer, f style ( e ) = s = f enc ( [CLS] e )[0] f ur ( x, s ) = f dec ( f enc ( x ) + s ) where is string concatenation, + vector addition.",
"Learning Style Transfer by Exemplar-driven Denoising : To learn a style extractor, the Universal Rewriter uses the idea that two non-overlapping spans of text in the same document are likely to have the same style.",
"Concretely, let x 1 and x 2 be 4 We do not use this data since it does not cover Indian languages, and due to Yahoo! L6 corpus restrictions for industry researchers (confirmed via author correspondence).",
"two non-overlapping spans.",
"Style extracted from one span ( x 1 ) is used to denoise the other ( x 2 ), x 2 = f ur ( noise ( x 2 ) , f style ( x 1 )) L denoise = LCE ( x 2 , x 2 ) where LCE is the standard next-word prediction cross entropy loss function and noise( ) refers to 20-60% random token dropping and token replacement.",
"This objective is used on the mC4 dataset (Xue et al., 2021b) with 101 languages.",
"To build a general-purpose rewriter which can do translation as well as style transfer, the model is additionally trained on two objectives : (1) supervised machine translation using the OPUS-100 parallel dataset (Zhang et al., 2020), and (2) a self-supervised objective to learn effective style-controlled translation; more details in Appendix C. During inference (Figure 1), consider an input sentence x and a transformation from style A to B (say informal to formal ).",
"Let SA , SB to be exemplar sentences in each of the styles (typically 3-10 sentences).",
"The output y is computed as, s A = 1 | SA | (cid:88) y SA f style ( y ) s B = 1 | SB | (cid:88) y SB f style ( y ) y = f ur ( x, ( s B s A )) where acts as a control knob to determine the magnitude of style transfer, and the vector subtraction helps remove confounding style information.",
"5 3.1 Shortcomings of the Universal Rewriter We experimented with the UR model on Hindi formality transfer, and noticed poor performance.",
"We noticed that UR has a strong tendency to copy sentences verbatim 45.5% outputs were copied exactly from the input (and hence not style transferred) for the best performing value of .",
"The copying increase for smaller , making magnitude control harder.",
"We identify the following issues:",
"1. Random token noise leads to unnatural inputs & transformations: The Universal Rewriter uses 20-60% uniformly random token dropping or replacement to noise inputs, which leads to ungrammatical inputs during training.",
"We hypothesize models tend to learn grammatical error correction, which encourages verbatim copying during 5 Garcia et al. (2021) also recommend adding the style vectors from the input sentence x , but we found this increased the amount of verbatim copying and led to poor performance.",
"inference where fluent inputs are used and no error correction is needed.",
"Moreover, token-level noise does not differentiate between content or function words, and cannot do syntactic changes like content reordering (Goyal and Durrett, 2020).",
"Too much noise could distort semantics and encourage hallucination, whereas too little will encourage copying.",
"2. Style vectors may not capture the precise style transformation: The Universal Rewriter extracts the style vector from a single sentence during training, which is a mismatch from the inference where a difference between vectors is taken.",
"Without taking vector differences at inference, we observe semantic preservation and overall performance of the UR model is much lower.",
"6 3. mC4 is noisy : On reading training data samples, we noticed noisy samples with severe language identification errors in the Hindi subset of mC4.",
"This has also been observed recently in Kreutzer et al. (2022), who audit 100 sentences in each language, and report 50% sentences in Marathi and 20% sentences in Hindi have the wrong language.",
"4. No translation data for several languages: We notice worse performance for languages which did not get parallel translation data (for the translation objective in Section 3).",
"In Table 1 we see UR gets a score 7 of 30.4 for Hindi and Bengali, languages for which it got translation data.",
"However, the scores are lower for Kannada, Telugu & Gujarati (25.5, 22.8, 23.7), for which no translation data was used.",
"We hypothesize translation data encourages learning language-agnostic semantic representations needed for translation from the given language, which in-turn improves style transfer.",
"While the Universal Rewriter model has a strong tendency to exactly copy input sentences while rewriting sentences in the same language (Sec-tion 3.1), we found it is an effective style-controlled translation system.",
"This motivates a simple inference-time trick to improve model outputs and reduce copying translate sentences to English ( en ) in a style-agnostic manner with a zero style 6 This difference possibly helps remove confounding information (like semantic properties, other styles) and focus on the specific style transformation.",
"Since two spans in the same document will share aspects like article topic / subject along with style, we expect these semantic properties will confound the style vector space obtained after the UR training.",
"7 Using the rAGG style transfer metric from Section 5.5.",
"vector 0 , and translate back into the source language ( lx ) with stylistic control.",
"where x is the input sentence, s A , s B are the styles vectors we want to transfer between, en, lx are language codes prepended to indicate the output language (Appendix C).",
"Prior work has shown that backtranslation is effective for paraphrasing (Wieting and Gimpel, 2018; Iyyer et al., 2018) and style transfer (Prabhumoye et al., 2018).",
"While style-controlled backtranslation is an effective strategy, it needs two translation steps.",
"This is 2x slower than UR , and semantic errors increase with successive translations.",
"To learn effective style transfer systems needing only a single generation step we develop DIFFUR , a new few-shot style transfer training objective (overview in Figure 2).",
"DIFFUR tackles the issues discussed in Section 3.1 using paraphrases and style vector differences.",
"Paraphrases as a noise function : Instead of using random token-level noise (Issue #1 in Section 3.1), we paraphrase sentences to noise them during training.",
"Paraphrasing modifies the lexical & syntactic properties of sentences, while preserving fluency and input semantics.",
"Prior work (Kr-ishna et al., 2020) has shown that paraphrasing leads to stylistic changes, and denoising can be considered a style re-insertion process.",
"To create paraphrases, we backtranslate sentences from the UR model 8 with no style control (zero vectors used as style vectors).",
"To increase diversity, we use random sampling in both translation steps, pooling generations obtained using temperature values [0 . 4 , 0 . 6 , 0 . 8 , 1 . 0] .",
"Finally, we discard paraphrase pairs from the training data where the semantic similarity score 9 is outside the range [0 . 7 , 0 . 98] .",
"This removes backtransation errors (score < 0.7), and exact copies (score > 0.98).",
"In Appendix K we confirm that our backtranslated paraphrases are lexically diverse from the input.",
"Using style vector differences for control : To fix the training / inference mismatch for style extraction (Issue #2 in Section 3.1), we propose using style vector differences between the output and input as the stylistic control.",
"Concretely, let x be an input sentence and x para its paraphrase.",
"s diff = f style ( x ) f style ( x para ) x = f ur ( x para , stop-grad ( s diff )) L = LCE ( x, x ) where stop-grad( ) stops gradient flow through s diff , preventing the model from learning to copy x exactly.",
"To ensure f style extracts meaningful style representations, we fine-tune a trained UR model.",
"Vector differences have many advantages,",
"8 Specifically, an Indic variant of the UR model is used, described in Section 4.3.",
"Note it is not necessary to use UR for backtranslation, any good translation model can be used.",
"9 Calculated using LaBSE, discussed in Section 5.3.",
"and its paraphrase removes confounding features (like semantics) present in the vectors.",
"2. The vector difference focuses on the precise transformation that is needed to reconstruct the input from its paraphrase.",
"3. The length of s diff acts as a proxy for the amount of style transfer, which is controlled using during inference (Section 3).",
"DIFFUR is related to neural editor models (Guu et al., 2018; He et al., 2020), where language models are decomposed into a probabilistic space of edit vectors over prototype sentences.",
"We justify the DIFFUR design with ablations in Appendix G.1.",
"To address the issue of no translation data (Issue #4 in Section 3.1), we train Indic variants of our models.",
"We replace the OPUS translation data used for training the Universal Rewriter (Section 3) with Samanantar (Ramesh et al., 2021), which is the largest publicly available parallel translation corpus for 11 Indic languages.",
"We call these variants UR-INDIC and DIFFUR-INDIC .",
"This process significantly up-samples the parallel data seen between English / Indic languages, and gives us better performance (Table 1) and lower copy rates, especially for languages with no OPUS translation data.",
"One issue with our DIFFUR-INDIC setup is usage of a stop-grad( ) to avoid verbatim copying from the input.",
"This prevents gradient flow into the style extractor f style , and as we see in Appendix H, a degradation of the style vector space.",
"To prevent this we simply multi-task between the exemplar-driven denoising UR objective (Section 3) and the DIFFUR objective.",
"We initialize the model with the UR-INDIC checkpoint, and fine-tune it on these two losses together, giving each loss equal weight.",
"Automatic evaluation of style transfer is challenging (Pang, 2019; Mir et al., 2019; Tikhonov et al., 2019), and the lack of resources (such as evaluation datasets, style classifiers) make evaluation trickier for Indic languages.",
"To tackle this issue, we first collect a small dataset of formality and semantic similarity annotations in four Indic languages (Section 5.1).",
"We use this dataset to guide the design of an evaluation suite (Section 5.2-5.6).",
"Since automatic metrics in generation are imperfect (Celikyilmaz et al., 2020), we complement our results with human evaluation (Section 5.7).",
"Since no public datasets exist for formality transfer in Indic languages, it is hard to measure the extent to which automatic metrics (such as style classifiers) are effective.",
"To tackle this issue, we build a dataset of 1000 sentence pairs in each of four Indic languages (Hindi, Bengali, Kannada, Telugu) with formality and semantic similarity annotations.",
"We first style transfer held-out Samanantar sentences using our UR-INDIC + BT model (Sec-tion 4.1, 4.3) to create sentence pairs with different formality.",
"We then asked three crowdworkers to 1) label the more formal sentence in each pair; 2) rate semantic similarity on a 3-point scale.",
"Our crowdsourcing is conducted on Task Mate, 10 where we hired native speakers from India with at least a high school education and 90% approval rating on the platform.",
"To ensure crowdworkers understood formality, we provided instructions following advice from professional Indian linguists, and asked two qualification questions in their native language.",
"More details (agreement, compensation, instructions) are provided in Appendix E.4.",
"Our first metric checks whether the output sentence reflects the target style.",
"This is measured by an external classifier's predictions on system outputs.",
"We use two variants of transfer accuracy: (1) Relative Accuracy (rACC ): does the target style classifier score the output sentence higher than the input sentence?",
"(2) Absolute Accuracy (aACC ): does the classifier score the output higher than 0.5?",
"Building multilingual classifiers : Unfortunately, no large style classification datasets exist for most languages, preventing us from building classifiers from scratch.",
"We resort to zero-shot cross lingual transfer techniques (Conneau and Lample, 2019), where large multilingual pretrained models are first fine-tuned on English classification data, and then applied to other languages at inference.",
"We experiment with three such techniques, and find MAD-X classifiers with language adapters (Pfeiffer et al., 2020b) have the highest accuracy of 81% on our Hindi data from Section 5.1.",
"However, MAD-X classifiers were only available for Hindi, so we use 10 https://taskmate.google.com 7443 the next best XLM RoBERTa-base (Conneau et al., 2020) for other languages, which has 75%-82% accuracy on annotated data; details in Appendix E.1.",
"Our second evaluation criteria is semantic similarity between the input and output.",
"Following recent recommendations (Marie et al., 2021; Krishna et al., 2020), we avoid n -gram overlap metrics like BLEU (Papineni et al., 2002).",
"Instead, we use LaBSE (Feng et al., 2020), a language-agnostic semantic similarity model based on multilingual BERT (Devlin et al., 2019).",
"LaBSE supports 109 languages, and is the only similarity model we found supporting all the Indic languages in this work.",
"We also observed LaBSE had greater correlation with our annotated data (Section 5.1) compared to alternatives; details in Appendix E.2.",
"Qualitatively, we found that sentence pairs with LaBSE scores lower than 0.6 were almost never paraphrases.",
"To avoid rewarding partial credit for low LaBSE scores, we use a hard threshold 11 ( L = 0 . 75 ) to determine whether pairs are paraphrases, SIM ( x, y (cid:48) ) = 1 if (cid:8) LaBSE ( x, y (cid:48) ) > L (cid:9) else 0 5.4 Other Metrics ( LANG , COPY , 1-g) Additionally, we measure whether the input and output sentences are in the same language ( LANG ), the fraction of outputs copied verbatim from the input ( COPY ), and the 1-gram overlap between input / output (1-g).",
"High LANG and low COPY / 1-g (more diversity) is better; details in Appendix E.6.",
"To get a sense of overall system performance, we combine individual metrics into one score.",
"Similar to Krishna et al. (2020) we aggregate metrics as, AGG ( x, y (cid:48) ) = ACC ( x, y (cid:48) ) SIM ( x, y (cid:48) ) LANG ( y (cid:48) ) AGG ( D ) = 1 |D| (cid:88) x,y (cid:48) D AGG ( x, y (cid:48) ) Where ( x, y (cid:48) ) are input-output pairs, and D is the test corpus.",
"Since each of our individual metrics can only take values 0 or 1 at an instance level, our aggregation acts like a Boolean AND operation.",
"In other words, we are measuring the fraction of outputs which simultaneously transfer style, have 11 Roughly 73% pairs annotated as paraphrases (from dataset in Section 5.1) had L > 0 .",
"a semantic similarity of at least L (our threshold in Section 5.3), and have the same language as the input.",
"Depending on the variant of ACC (relative / absolute), we can derive rAGG / aAGG .",
"An ideal system should not only be able to style transfer sentences, but also control the magnitude of style transfer using the scalar input .",
"To evaluate this, for every system we first determine a max value and let [0 , max ] be the range of control values.",
"While in our setup is an unbounded scalar, we noticed high values of significantly perturb semantics (also noted in Garcia et al., 2021), with systems outputting style-specific n -grams unfaithful to the output.",
"We choose max to be the largest from the list [0 . 5 , 1 . 0 , 1 . 5 , 2 . 0 , 2 . 5 , 3 . 0] whose outputs have an average semantic similarity score ( SIM , Section 5.3) of at least 0.75 12 with the validation set inputs.",
"For each system we take three evenly spaced values in its control range, denoted as = [ 13 max , 23 max , max ] .",
"We then compute the style calibration to ( CALIB ), or how often does increasing lead to a style score increase?",
"We measure this with a statistic similar to Kendall's (Kendall, 1938), counting concordant pairs in , CALIB ( x ) = 1 n (cid:88) b > a { style ( y b ) > style ( y a ) } where x is input, CALIB ( x ) is the average over all possible n ( = 3 ) pairs of values ( a , b ) in .",
"Automatic metrics are usually insufficient for style transfer evaluation according to Briakou et al. (2021a), 69 / 97 surveyed style transfer papers used human evaluation.",
"We adopt the crowd-sourcing setup from Section 5.1, which was used to build our formality evaluation datasets.",
"We presented 200 generations from each model and the corresponding inputs in a random order, and asked three crowdworkers two questions about each pair of sentences: (1) which sentence is more formal/code-mixed?",
"(2) how similar are the two sentences in meaning?",
"This lets us evaluate rACC , SIM , rAGG , CALIB with respect to human annotations instead of classifier predictions.",
"More experiment details (inter-annotator agreement, compensation, instructions) are provided in Appendix E.4.",
"Our models are evaluated on (1) formality transfer (Rao and Tetreault, 2018); (2) code-mixing addition, a task where systems attempt to use English words in non-English sentences, while preserving the original script.",
"13 Since we do not have access to any formality evaluation dataset, 14 we hold out 22K sentences from Samanantar in each Indic language 13 Hinglish is common in India, examples in Figure",
"5. 14 We do not use GYAFC (Rao and Tetreault, 2018) and XFORMAL (Briakou et al., 2021b) due to reasons in footnote",
"4. Our dataset from Section 5.1 has already been used for classifier selection, and has machine generated sentences.",
"for validation / testing.",
"For Swahili / Spanish, we use mC4 / WMT2018 sentences.",
"These sets have similar number of formal / informal sentences, as marked by our formality classifiers (Section 5.2), and are transferred to the opposite formality.",
"We re-use the hi/bn formality transfer splits for code-mixing addition, evaluating unidirectional transfer.",
"Seven languages with varying scripts and morphological richness are used for evaluation ( hi,es,sw,bn,kn,te,gu ).",
"The UR model only saw translation data for hi,es,bn , whereas UR-INDIC sees translation data for all Indic languages (Section 4.3).",
"To test the generalization capability of the DIFFUR , no Gujarati paraphrase training data for is used.",
"Note that no paired/unpaired data with style labels is used during training : models determine the target style at inference using 3-10 exemplars sentences.",
"For few-shot formality transfer, we use the English exemplars from Garcia et al. (2021).",
"We follow their setup and use English exemplars to guide non-English transfer zero-shot.",
"For code-mixing addition, we use Hindi/English code-mixed exemplars in Devanagari (shown in Appendix D).",
"automatic evaluation results for formality transfer across languages in Table 1, Table",
"3. Overall we find that each of our proposed methods ( DIFFUR , *INDIC , + BT ) helps improve performance over the baseline UR model (71.1, 58.3, 54.2 vs 30.4 rAGG on Hindi).",
"Combining these ideas with multitask learning ( DIFFUR-MLT ) gives us the best performance across all languages (78.1 on Hindi).",
"On Gujarati, the DIFFUR-INDIC fails to get good performance (36.0 rAGG ) since it did not see Gujarati paraphrase data, but this performance is recovered using DIFFUR-MLT (75.0).",
"In Table 4 we see human evaluations support our automatic evaluation for formality transfer.",
"In Table 5 we perform human evaluation on a subset of models for code-mixing addition and see similar trends, with DIFFUR-MLT significantly outperforming UR , UR Model Hindi Bengali ACC / SIM / AGG ACC / SIM / AGG UR (2021) 4.5 / 93.8 / 3.6 0.0 / 96.4 / 0.0 UR-INDIC , BT 18.5 / 79.2 / 15.3 18.0 / 68.3 / 12.7 DIFFUR-MLT , BT 62.5 / 69.9 / 41.5 79.0 / 57.1 / 43.5 Table 5: Human evaluation on code-mixing addition.",
"DIFFUR-MLT and DIFFUR-INDIC are best at controlling magnitude of style transfer : In Table 6, we compare the extent to which models can control the amount of style transfer using .",
"We find that all our proposed methods outperform the UR model, which gets only 29.2 CALIB .",
"+ BT models are not as effective at control (43.4 CALIB ), while DIFFUR-INDIC and DIFFUR-MLT perform best (69.6, 69.0 CALIB ).",
"This is graphically illustrated in Figure",
"3. DIFFUR-MLT performs consistently well across different values (left plot), and gives a high style change without much drop in content similarity to the input as is varied (right plot); more control experiments in Appendix F. In Table 2 we provide a breakdown by individual metrics .",
"In the baseline Hindi UR model, we notice high COPY rates (45.4%), resulting in lower 7446 Input Generations Analysis Informal .",
"ACC scores.",
"COPY reduces in our proposed models (4.4% for DIFFUR-MLT ), which boosts overall performance.",
"We find the lowest COPY (and lowest 1-g) for models with + BT (1%), which is due to two translation steps.",
"However, this lowers semantic similarity (also seen in Table 4) lowering the overall score (60.0 vs 78.1) compared to DIFFUR-MLT .",
"In Appendix G we show ablations studies justifying the DIFFUR design, decoding scheme, etc.",
"In Appendix I we show a breakdown by individual metrics for other languages and plot variations with .",
"We also analyze the style encoder f style in Appendix H, finding it is an effective style classifier.",
"We analyze several qualitative outputs from DIFFUR-MLT in Figure",
"4. Besides formality transfer and code-mixing addition, we transfer several other attributes: sentiment (Li et al., 2018), simplicity (Xu et al., 2015), anonymity (Anandan et al., 2012) and gender neutrality (Reddy and Knight, 2016).",
"More outputs are provided in Appendix J. 7 Conclusion We present a recipe for building & evaluating controllable few-shot style transfer systems needing only 3-10 style examples at inference, useful in low-resource settings.",
"Our methods outperform prior work in formality transfer & code-mixing for 7 languages, with promising qualitative results for several other attribute transfer tasks.",
"Future work includes further improving systems for some attributes, and studying style transfer for languages where little / no translation data is available.",
"We are very grateful to the Task Mate team (es-pecially Auric Bonifacio Quintana) for their support and helping us crowdsource data and evaluate models on their platform.",
"We thank John Wieting, Timothy Dozat, Manish Gupta, Rajesh Bhatt, Esha Banerjee, Yixiao Song, Marzena Karpinska, Ar-avindan Raghuveer, Noah Constant, Parker Riley, Andrea Schioppa, Artem Sokolov, Mohit Iyyer and Slav Petrov for several useful discussions during the course of this project.",
"We are also grateful to Rajiv Teja Nagipogu, Shachi Dave, Bhuthesh R, Parth Kothari, Bhanu Teja Gullapalli and Simran Khanuja for helping us annotate model outputs in several Indian languages during pilot experiments.",
"This work was mostly done during Kalpesh Krishna (KK)'s internship at Google Research India, hosted by Bidisha Samanta and Partha Talukdar.",
"KK was partly supported by a Google PhD Fellowship.",
"Recent work has highlighted issues of stylistic bias in text generation systems, specifically machine translation systems (Hovy et al., 2020).",
"We acknowledge these issues, and consider style transfer and style-controlled generation technology as an opportunity to work towards fixing them (for instance, gender neutralization as presented in Section 6.2).",
"Note that it is important to tread down this path carefully In Chapter 9, Blodgett (2021) argue that style is inseparable from social meaning (as originally noted by Eckert, 2008), and humans may perceive automatically generated text very differently compared to automatic style classifiers.",
"Our models were trained on 32 Google Cloud TPUs.",
"As discussed in Appendix A, the UR & UR-INDIC model take roughly 18 hours to train.",
"The DIFFUR -* and DIFFUR-MLT models are much cheaper to train (2 hours) since we finetune the pretrained UR -* models.",
"The Google 2020 environment report mentions, 15 TPUs are highly efficient chips which have been specifically designed for machine learning applications.",
"These accelerators run on Google Cloud, which is carbon neutral today, and is aiming to run on carbon-free energy, 24/7, at all of Google's data centers by 2030 ( https://cloud.google. com/sustainability ).",
"gumdrop/sustainability/google-2020-environmental-report.pdf"
] | [
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"method",
"result",
"result",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"result",
"method",
"abstain",
"result",
"result",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain"
] |
[
"Natural language understanding (NLU) and natural language generation (NLG) are two fundamental and related tasks in building task-oriented dialogue systems with opposite objectives: NLU tackles the transformation from natural language to formal representations, whereas NLG does the reverse.",
"A key to success in either task is parallel training data which is expensive to obtain at a large scale.",
"In this work, we propose a generative model which couples NLU and NLG through a shared latent variable.",
"This approach allows us to explore both spaces of natural language and formal representations, and facilitates information sharing through the latent space to eventually benefit NLU and NLG.",
"Our model achieves state-of-the-art performance on two dialogue datasets with both flat and tree-structured formal representations.",
"We also show that the model can be trained in a semi-supervised fashion by utilising unlabelled data to boost its performance.",
"Natural language understanding (NLU) and natural language generation (NLG) are two fundamental tasks in building task-oriented dialogue systems.",
"In a modern dialogue system, an NLU module first converts a user utterance, provided by an automatic speech recognition model, into a formal representation.",
"The representation is then consumed by a downstream dialogue state tracker to update a belief state which represents an aggregated user goal.",
"Based on the current belief state, a policy network decides the formal representation of the system response.",
"This is finally used by an NLG module to generate the system response(Young et al., 2010).",
"It can be observed that NLU and NLG have opposite goals: NLU aims to map natural language Work done while the author was an intern at Apple.",
"to formal representations, while NLG generates utterances from their semantics.",
"In research literature, NLU and NLG are well-studied as separate problems.",
"State-of-the-art NLU systems tackle the task as classification (Zhang and Wang, 2016) or as structured prediction or generation (Damonte et al., 2019), depending on the formal representations which can be flat slot-value pairs (Henderson et al., 2014), first-order logical form (Zettlemoyer and Collins, 2012), or structured queries (Yu et al., 2018; Pasupat et al., 2019).",
"On the other hand, approaches to NLG vary from pipelined approach subsuming content planning and surface realisation (Stent et al., 2004) to more recent end-to-end sequence generation (Wen et al., 2015; Duek et al., 2020).",
"natural language to formal language while NLG does the reverse.",
"Both tasks require a substantial amount of utterance and representation pairs to succeed, and such data is costly to collect due to the complexity of annotation involved.",
"Although unannotated data for either natural language or formal representations can be easily obtained, it is less clear how they can be leveraged as the two languages stand in different space.",
"In this paper, we propose a generative model for J oint natural language U nderstanding and G eneration ( JUG ), which couples NLU and NLG with a latent variable representing the shared intent between natural language and formal representations.",
"We aim to learn the association between two discrete spaces through a continuous latent variable which facilitates information sharing between two tasks.",
"Moreover, JUG can be trained in a semi-supervised fashion, which enables us to explore each space of natural language and formal representations when unlabelled data is accessible.",
"We examine our model on two dialogue datasets with different formal representations: the E2E dataset (Novikova et al., 2017) where the semantics are represented as a collection of slot-value pairs; and a more recent weather dataset (Balakrish-nan et al., 2019) where the formal representations are tree-structured.",
"Experimental results show that our model improves over standalone NLU/NLG models and existing methods on both tasks; and the performance can be further boosted by utilising unlabelled data.",
"Our key assumption is that there exists an abstract latent variable z underlying a pair of utterance x and formal representation y .",
"In our generative model, this abstract intent guides the standard conditional generation of either NLG or NLU (Figure 1a).",
"Meanwhile, z can be inferred from either utterance x , or formal representation y (Figure 1b).",
"That means performing NLU requires us to infer the z from x , after which the formal representation y is generated conditioning on both z and x (Fig-ure 1c), and vice-versa for NLG (Figure 1d).",
"In the following, we will explain the model details, starting with NLG.",
"both z and y .",
"We choose the posterior distribution q ( z | y ) to be Gaussian.",
"The task of inferring z can then be recast to computing mean and standard deviation of the Gaussian distribution using an NLG encoder.",
"To do this, we use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) to encode formal representation y .",
"which is linearised and represented as a sequence of symbols.",
"After encoding, we obtain a list of hidden vectors H , with each representing the concatenation of forward and backward LSTM states.",
"These hidden vectors are then average-pooled and passed through two feed-forward neural networks to compute mean y,z and standard deviation y,z vectors of the posterior q ( z | y ) .",
"H = Bi-LSTM ( y ) h = Pooling ( H ) y,z = W h + b y,z = W h + b (1) where W and b represent neural network weights and bias.",
"Then the latent vector z can be sampled from the approximated posterior using the re-parameterisation trick of Kingma and Welling (2013): (cid:15)(cid:15)(cid:15) N (0 , I ) z = y,z + y,z (cid:15)(cid:15)(cid:15) (2) The final step is to generate natural language x based on latent variable z and formal representation y .",
"We use an LSTM decoder relying on both z and y via attention mechanism (Bahdanau et al., 2014).",
"At each time step, the decoder computes: g xi = LSTM ( g xi 1 , x i 1 ) c i = attention ( g xi , H ) p ( x i ) = softmax ( W v [ c i g xi z ] + b v ) (3) where denotes concatenation.",
"x i 1 is the word vector of input token; g x i is the corresponding decoder hidden state and p ( x i ) is the output token distribution at time step i .",
"NLUNLU performs the reverse procedures of NLG.",
"First, an NLU encoder infers the latent variable z from utterance x .",
"The encoder uses a bi-directional LSTM to convert the utterance into a list of hidden states.",
"These hidden states are pooled and passed through feed-forward neural networks to compute the mean x,z and standard deviation x,z of the posterior q ( z | x ) .",
"This procedure follows Equation 1 in NLG.",
"However, note that a subtle difference between natural language and formal language is that the former is ambiguous while the later is precisely defined.",
"This makes NLU a many-to-one mapping problem but NLG is one-to-many.",
"To better reflect the fact that the NLU output requires less variance, when decoding we choose the latent vector z in NLU to be the mean vector x,z , instead of sampling it from q ( z | x ) like Equation",
"2. 1 After the latent vector is obtained, the formal representation y is predicted from both z and x using an NLU decoder.",
"Since the space of y depends on the formal language construct, we consider two common scenarios in dialogue systems.",
"In the first scenario, y is represented as a set of slot-value pairs, e.g., { food type = British , area = north } in restaurant search domain (Mrkic et al., 2017).",
"The decoder here consists of several classifiers, one for each slot, to predict the corresponding values.",
"2 Each classi-fier is modelled by a 1-layer feed-forward neural network that takes z as input: p ( y s ) = softmax ( W s z + b s ) (4) where p ( y s ) is the predicted value distribution of slot s .",
"In the second scenario, y is a tree-structured formal representation (Banarescu et al., 2013).",
"We then generate y as a linearised token sequence using an LSTM decoder relying on both z and x via the standard attention mechanism (Bahdanau et al., 2014).",
"The decoding procedure follows exactly Equation",
"3. 2.3 Model Summary One flexibility of the JUG model comes from the fact that it has two ways to infer the shared latent variable z through either x or y ; and the inferred z can aid the generation of both x and y .",
"In this next section, we show how this shared latent variable enables the JUG model to explore unlabelled x and y , while aligning the learned meanings inside the latent space.",
"We now describe how JUG can be optimised with a pair of x and y (3.1), and also unpaired x or",
"deviation x,z in NLU, since the term is needed for optimisation.",
"See more details in Section",
"3. 2 Each slot has a set of corresponding values plus a special one not_mention .",
"y (3.2).",
"We specifically discuss the prior choice of JUG objectives in 3.3.",
"A combined objective can be thus derived for semi-supervised learning: a practical scenario when we have a small set of labelled data but abundant unlabelled ones (3.4).",
"Given a pair of utterance x and formal representation y , our objective is to maximise the log-likelihood of the joint probability p ( x, y ) :",
"The optimisation task is not directly tractable since it requires us to marginalise out the latent variable z .",
"However, it can be solved by following the standard practice of neural variational inference (Kingma and Welling, 2013).",
"An objective based on the variational lower bound can be derived as L x,y = E q ( z | x ) log p ( y | z, x ) + E q ( z | x ) log p ( x | z, y ) KL [ q ( z | x ) || p ( z )] (6) where the first term on the right side is the NLU model; the second term is the reconstruction of x ; and the last term denotes the Kullback Leibler divergence between the approximate posterior q ( z | x ) with the prior p ( z ) .",
"We defer the discussion of prior to Section 3.3 and detailed derivations to Appendix.",
"The symmetry between utterance and semantics offers an alternative way of inferring the posterior through the approximation q ( z | y ) .",
"Analogously we can derive a variational optimisation objective: L y,x = E q ( z | y ) log p ( x | z, y ) + E q ( z | y ) log p ( y | z, x ) KL [ q ( z | y ) || p ( z )] (7) where the first term is the NLG model; the second term is the reconstruction of y ; and the last term denotes the KL divergence.",
"Additionally, when we have access to unlabelled utterance x (or formal representation y ), the optimisation objective of JUG is the marginal likelihood p ( x ) (or p ( y ) ): (cid:90) (cid:90)",
"Note that both z and y are unobserved in this case.",
"We can develop an objective based on the variational lower bound for the marginal: L x = E q ( y | z,x ) E q ( z | x ) log p ( x | z, y ) KL [ q ( z | x ) || p ( z )] (9) where the first term is the auto-encoder reconstruction of x with a cascaded NLU-NLG path.",
"The second term is the KL divergence which regularizes the approximated posterior distribution.",
"Detailed derivations can be found in Appendix.",
"When computing the reconstruction term of x , it requires us to first run through the NLU model to obtain the prediction on y , from which we run through NLG to reconstruct x .",
"The full information flow is ( x z y z x ).",
"3 Connections can be drawn with recent work which uses back-translation to augment training data for machine translation (Sennrich et al., 2016; He et al., 2016).",
"Unlike back-translation, the presence of latent variable in our model requires us to sample z along the NLU-NLG path.",
"The introduced stochasticity allows the model to explore a larger area of the data manifold.",
"The above describes the objectives when we have unlabelled x .",
"We can derive a similar objective for leveraging unlabelled y : L y = E q ( x | z,y ) E q ( z | y ) log p ( y | z, x ) KL [ q ( z | y ) || p ( z )] (10) where the first term is the auto-encoder reconstruction of y with a cascaded NLG-NLU path.",
"The full information flow here is ( y z x z y ).",
"The objectives described in 3.1 and 3.2 require us to match an approximated posterior (either q ( z | x ) or q ( z | y ) ) to a prior p ( z ) that reflects our belief.",
"A common choice of p ( z ) in the research literature is the Normal distribution (Kingma and Welling, 2013).",
"However, it should be noted that even if we match both q ( z | x ) and q ( z | y ) to the same prior, it does not guarantee that the two inferred posteriors are close to each other; this is a desired property of the shared latent space.",
"To better address the property, we propose a novel prior choice: when the posterior is inferred 3 This information flow requires us to sample both z and y in reconstructing x .",
"Since y is a discrete sequence, we use REINFORCE (Williams, 1992) to pass the gradient from NLG to NLU in the cascaded NLU-NLG path.",
"from x (i.e., q ( z | x ) ), we choose the parameterised distribution q ( z | y ) as our prior belief of p ( z ) .",
"Similarly, when the posterior is inferred from y (i.e., q ( z | y ) ), we have the freedom of defining p ( z ) to be q ( z | x ) .",
"This approach directly pulls q ( z | x ) and q ( z | y ) closer to ensure a shared latent space.",
"Finally, note that it is straightforward to compute both q ( z | x ) and q ( z | y ) when we have parallel x and y .",
"However when we have the access to unlabelled data, as described in Section 3.2, we can only use the pseudo x y pairs that are generated by our NLU or NLG model, such that we can match an inferred posterior to a pre-defined prior reflecting our belief of the shared latent space.",
"In general, JUG subsumes the following three training scenarios which we will experiment with.",
"When we have fully labelled x and y , the JUG jointly optimises NLU and NLG in a supervised fashion with the objective as follows: L basic = (cid:88) ( x,y ) ( X,Y ) ( L x,y + L y,x ) (11) where ( X, Y ) denotes the set of labelled examples.",
"Additionally in the fully supervised setting, JUG can be trained to optimise both NLU, NLG and auto-encoding paths.",
"This corresponds to the following objective: L marginal = L basic + (cid:88) ( x,y ) ( X,Y ) ( L x + L y ) (12) Furthermore, when we have additional unlabelled x or y , we optimise a semi-supervised JUG objective as follows: L semi = L basic + (cid:88) x XL x + (cid:88) y YL y (13) where X denotes the set of utterances and Y denotes the set of formal representations.",
"We experiment on two dialogue datasets with different formal representations to test the generality of our model.",
"The first dataset is E2E (Novikova et al., 2017), which contains utterances annotated with flat slot-value pairs as their semantic representations.",
"The second dataset is the recent weather dataset (Balakrishnan et al., 2019), where both utterances and semantics are represented in tree structures.",
"Examples of the two datasets are provided in tables 1 and",
"2. Natural Language \"sousa offers british food in the low price range. it is family friendly with a 3 out of 5 star rating. you can find it near the sunshine vegetarian cafe.\"",
"We primarily evaluated our models on the raw splits of the original datasets, which enables us to fairly compare fully-supervised JUG with existing work on both NLU and NLG.",
"4 Statistics of the two datasets can be found in Table",
"3. In addition, we set up an experiment to evaluate semi-supervised JUG with a varying amount of labelled training data (5%, 10%, 25%, 50%, 100%, with the rest being unlabelled).",
"Note that the original E2E test set is designed on purpose with unseen slot-values in the test set to make it difficult (Duek et al., 2018, 2020); we remove the distribution bias by randomly re-splitting the E2E dataset.",
"On the contrary, utterances in the weather dataset contains extra tree-structure annotations which make the NLU task a toy problem.",
"We therefore remove these annotations to make NLU more realistic, as shown in the second row of Table",
"2. As described in Section 3.4, we can optimise our proposed JUG model in various ways.",
"We investigate the following approaches: JUG basic : this model jointly optimises NLU 4 Following Balakrishnan et al. (2019), the evaluation code https://github.com/tuetschek/e2e-metrics provided by the E2E organizers is used here for calculating BLEU in NLG.",
"and NLG with the objective in Equation",
"11. This uses labelled data only.",
"JUG marginal : jointly optimises NLU, NLG and auto-encoders with only labelled data, per Equation",
"12. JUG semi : jointly optimises NLU and NLG with labelled data and auto-encoders with unlabelled data, per Equation",
"13. 4.2 Baseline Systems We compare our proposed model with some existing methods as shown in Table 4 and two designed baselines as follows: Decoupled : The NLU and NLG models are trained separately by supervised learning.",
"Both of the individual models have the same encoder-decoder structure as JUG.",
"However, the main difference is that there is no shared latent variable between the two individual NLU and NLG models.",
"Augmentation : We pre-train Decoupled models to generate pseudo label from the unlabelled corpus (Lee, 2013) in a setup similar to back-translation (Sennrich et al., 2016).",
"The pseudo data and labelled data are then used together to fine-tune the pre-trained models.",
"Among all systems in our experiments, the number of units in LSTM encoder/decoder are set to {150, 300} and the dimension of latent space is 150.",
"The optimiser Adam (Kingma and Ba, 2014) is used with learning rate 1e-3.",
"Batch size is set to {32, 64}.",
"All the models are fully trained and the Model / Data 5% 10% 25% 50% 100% Decoupled 52.77 (0.874) 62.32 (0.902) 69.37 (0.924) 73.68 (0.935) 76.12 (0.942) Augmentation 54.71 (0.878) 62.54 (0.902) 68.91 (0.922) 73.84 (0.935) JUG basic 60.30 (0.902) 67.08 (0.918) 72.49 (0.932) 74.74 (0.937) 78.05 (0.945) JUG marginal 62.96 (0.907) 68.43 (0.920) 73.35 (0.933) 75.74 (0.939) 78.93 (0.948) JUG semi 68.09 (0.921) 70.33 (0.925) 73.79 (0.935) 75.46 (0.939) Table 5: NLU results on E2E dataset.",
"best model is picked by the average of NLU and NLG results on validation set during training.",
"We start by comparing the JUG basic performance with existing work following the original split of the datasets.",
"The results are shown in Table 4.",
"On E2E dataset, we follow previous work to use F1 of slot-values as the measurement for NLU, and BLEU-4 for NLG.",
"For weather dataset, there is only published results for NLG.",
"It can be observed that the JUG basic model outperforms the previous state-of-the-art NLU and NLG systems on the E2E dataset, and also for NLG on the weather dataset.",
"The results prove the effectiveness of introducing the shared latent variable z for jointly training NLU and NLG.",
"We will further study the impact of the shared z in Section 4.4.2.",
"We also evaluated the three training scenarios of JUG in the semi-supervised setting, with different proportion of labelled and unlabelled data.",
"The results for E2E is presented in Table 5 and 6.",
"We computed both F1 score and joint accuracy (Mrkic Model / Data 5% 10% 25% 50% 100% Decoupled 0.632 0.667 0.703 0.719 0.725 Augmentation 0.635 0.677 0.703 0.727 JUG basic 0.634 0.673 0.701 0.720 0.726 JUG marginal 0.627 0.671 0.711 0.721 0.722 JUG semi 0.670 0.701 0.725 0.733 Table 8: NLG results with BLEU on weather dataset. et al., 2017) of slot-values as a more solid NLU measurement.",
"Joint accuracy is defined as the proportion of test examples whose slot-value pairs are all correctly predicted.",
"For NLG, both BLEU-4 and semantic accuracy are computed.",
"Semantic accuracy measures the proportion of correctly generated slot values in the produced utterances.",
"From the results, we observed that Decoupled can be improved with techniques of generating pseudo data ( Augmentation ), which forms a stronger baseline.",
"However, all our model variants perform better than the baselines on both NLU and NLG.",
"When using only labelled data, our model JUG marginal can surpass Decoupled across all the four measurements.",
"The gains mainly come from the fact that the model uses auto-encoding objectives to help learn a shared semantic space.",
"Compared to Augmentation , JUG marginal also has a built-in mechanism' to bootstrap pseudo data on the fly of training (see Section 3.4).",
"When adding extra unlabelled data, our model JUG semi gets further performance boosts and outperforms all baselines by a significant margin.",
"With the varying proportion of unlabelled data in Figure 2: Visualisation of latent variable z .",
"the training set, we see that unlabelled data is helpful in almost all cases.",
"Moreover, the performance gain is the more significant when the labelled data is less.",
"This indicates that the proposed model is especially helpful for low resource setups when there is a limited amount of labelled training examples but more available unlabelled ones.",
"The results for weather dataset are presented in Table 7 and 8.",
"In this dataset, NLU is more like a semantic parsing task (Berant et al., 2013) and we use exact match accuracy as its measurement.",
"Meanwhile, NLG is measured by BLEU.",
"The results reveal a very similar trend to that in E2E.",
"The generated examples can be found in Appendix.",
"In this section we further analyse the impact of the shared latent variable and also the impact of utilising unlabelled data.",
"As mentioned in Section 2.1, the latent variable z can be sampled from either posterior approximation q ( z | x ) or q ( z | y ) .",
"We inspect the latent space in Figure 2 to find out how well the model learns intent sharing.",
"We plot z with the E2E dataset on 2-dimentional space using t-SNE projection (Maaten and Hinton, 2008).",
"We observe two interesting properties.",
"First, for each data point ( x , y ), the z values sampled from q ( z | x ) and q ( z | y ) are close to each other.",
"This reveals that the meanings of x and y are tied in the latent space.",
"Second, there exists distinct clusters in the space of z .",
"By further inspecting the actual examples within each cluster, we found that a cluster represents a similar meaning composition.",
"For instance, the cluster cen-Model NLU NLG JUG basic 90.55 0.726 JUG basic (feed random z) 38.13 0.482 Table 9: A comparative study to evaluate the contribution of the learned latent variable z in NLU/NLG decoding.",
"tered at (-20, -40) contains { name , foodtype , price , rating , area , near }, while the cluster centered at (45, 10) contains { name , eattype , foodtype , price }.",
"This indicates that the shared latent serves as conclusive global feature representations for NLU and NLG.",
"One novelty of our model is the introduction of shared latent variable z for natural language x and formal representations y .",
"A common problem in neural variational models is that when coupling a powerful autogressive decoder, the decoder tends to learn to ignore z and solely rely on itself to generate the data (Bowman et al., 2016; Chen et al., 2017; Goyal et al., 2017).",
"In order to examine to what extent does our model actually rely on the shared variable in both NLU and NLG, we seek for an empirical answer by comparing the JUG basic model with a model variant which uses a random value of z sampled from a normal distribution N ( 0 , 1 ) during testing.",
"From Table 9, we can observe that there exists a large performance drop if z is assigned with random values.",
"This suggests that JUG indeed relies greatly on the shared variable to produce good-quality x or y .",
"We further analyse the various sources of errors to understand the cases which z helps to improve.",
"On E2E dataset, wrong prediction in NLU comes from either predicting not_mention label for certain slots in ground truth semantics; predicting arbitrary values on slots not present in the ground truth semantics; or predicting wrong values com-E2E Weather Method NLU NLG NLU NLG JUG basic 60.30 0.685 73.62 0.634 +unlabelled x 62.89 0.765 74.97 0.654 +unlabelled y 59.55 0.815 76.98 0.621 +unlabelled x and y 68.09 0.814 79.19 0.670 Table 11: Comparison on sources of unlabelled data for semi-supervised learning using only utterances ( x ), only semantic representations ( y ) or both ( x and y ).",
"paring to ground truth.",
"Three types of error are referred to Missing (Mi), Redundant (Re) and Wrong (Wr) in Table 10.",
"For NLG, semantic errors can be either missing or generating wrong slot values in the given semantics (Wen et al., 2015).",
"Our model makes fewer mistakes in all these error sources comparing to the baseline Decoupled .",
"We believe this is because the clustering property learned in the latent space provides better feature representations at a global scale, eventually benefiting NLU and NLG.",
"In Section 4.3, we found that the performance of our model can be further enhanced by leveraging unlabelled data.",
"As we used both unlabelled utterances and unlabelled semantic representations together, it is unclear if both contributed to the performance gain.",
"To answer this question, we start with the JUG basic model, and experimented with adding unlabelled data from 1) only unlabelled utterances x ; 2) only semantic representations y ; 3) both x and y .",
"As shown in Table 11, when adding any uni-sourced unlabelled data ( x or y ), the model is able to improve to a certain extent.",
"However, the performance can be maximised when both data sources are utilised.",
"This strengthens the argument that our model can leverage bi-sourced unlabelled data more effectively via latent space sharing to improve NLU and NLG at the same time.",
"Natural Language Understanding (NLU) refers to the general task of mapping natural language to formal representations.",
"One line of research in the dialogue community aims at detecting slot-value pairs expressed in user utterances as a classification problem (Henderson et al., 2012; Sun et al., 2014; Mrkic et al., 2017; Vodoln et al., 2017).",
"Another line of work focuses on converting single-turn user utterances to more structured meaning representations as a semantic parsing task (Zettlemoyer and Collins, 2005; Jia and Liang, 2016; Dong and Lap-ata, 2018; Damonte et al., 2019).",
"In comparison, Natural Language Generation (NLG) is scoped as the task of generating natural utterances from their formal representations.",
"This is traditionally handled with a pipelined approach (Reiter and Dale, 1997) with content planning and surface realisation (Walker et al., 2001; Stent et al., 2004).",
"More recently, NLG has been formulated as an end-to-end learning problem where text strings are generated with recurrent neural networks conditioning on the formal representation (Wen et al., 2015; Duek and Jurcicek, 2016; Duek et al., 2020; Balakrishnan et al., 2019; Tseng et al., 2019).",
"There has been very recent work which does NLU and NLG jointly.",
"Both Ye et al. (2019) and Cao et al. (2019) explore the duality of semantic parsing and NLG.",
"The former optimises two sequence-to-sequence models using dual information maximisation, while the latter introduces a dual learning framework for semantic parsing.",
"Su et al. (2019) proposes a learning framework for dual supervised learning (Xia et al., 2017) where both NLU and NLG models are optimised towards a joint objective.",
"Their method brings benefits with annotated data in supervised learning, but does not allow semi-supervised learning with unlabelled data.",
"In contrast to their work, we propose a generative model which couples NLU and NLG with a shared latent variable.",
"We focus on exploring a coupled representation space between natural language and corresponding semantic annotations.",
"As proved in experiments, the information sharing helps our model to leverage unlabelled data for semi-supervised learning, which eventually benefits both NLU and NLG.",
"We proposed a generative model which couples natural language and formal representations via a shared latent variable.",
"Since the two space is coupled, we gain the luxury of exploiting each unpaired data source and transfer the acquired knowledge to the shared meaning space.",
"This eventually benefits both NLU and NLG, especially in a low-resource scenario.",
"The proposed model is also suitable for other translation tasks between two modalities.",
"As a final remark, natural language is richer and more informal.",
"NLU needs to handle ambiguous or erroneous user inputs.",
"However, formal representations utilised by an NLG system are more precisely-defined.",
"In future, we aim to refine our generative model to better emphasise this difference of the two tasks.",
"Bo-Hsiang Tseng is supported by Cambridge Trust and the Ministry of Education, Taiwan.",
"This work has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (http://www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1.."
] | [
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"Supervised learning assumes that a ground truth label exists.",
"However, the reliability of this ground truth depends on human annotators, who often disagree.",
"Prior work has shown that this disagreement can be helpful in training models.",
"We propose a novel method to incorporate this disagreement as information: in addition to the standard error computation, we use soft labels (i.e., probability distributions over the annotator labels) as an auxiliary task in a multi-task neural network.",
"We measure the divergence between the predictions and the target soft labels with several loss-functions and evaluate the models on various NLP tasks.",
"We find that the soft-label prediction auxiliary task reduces the penalty for errors on ambiguous entities and thereby mitigates overfitting.",
"It significantly improves performance across tasks beyond the standard approach and prior work.",
"Usually, the labels used in NLP classification tasks are produced by sets of human annotators.",
"As disagreement between annotators is common, many methods aggregate the different answers into a supposedly correct one (Dawid and Skene, 1979; Carpenter, 2008; Hovy et al., 2013; Raykar et al., 2010; Paun et al., 2018; Ruiz et al., 2019).",
"However, the aggregated labels obtained in this way mask the world's real complexity: instances can be intrinsically ambiguous (Poesio and Artstein, 2005; Zeman, 2010; Plank et al., 2014; Pavlick and Kwiatkowski, 2019), or so challenging to evaluate that considerable disagreement between different annotators is unavoidable.",
"In those cases, it is reasonable to wonder whether the ambiguity is indeed harmful to the models or whether it carries valuable information about the relative difficulty of each instance (Aroyo and Welty, 2015).",
"Several authors followed that intuition, trying ways to incorporate the information about the level of annotator agreement in their models (Sheng et al., 2008; Plank et al., 2014, 2016; Jamison and Gurevych, 2015; Rodrigues and Pereira, 2018; Lalor et al., 2017).",
"Usually, Deep Learning models compute the error as the divergence between the predicted label distribution and a one-hot encoded gold distribution (i.e., nothing but the gold label has any probability mass).",
"However, for complex tasks, this binary black-and-white notion of truth is not plausible and can lead to overfitting.",
"Instead, we can use a more nuanced notion of truth by comparing against soft labels : we collect the probability distributions over the labels given by the annotators, rather than using one-hot encodings with a single correct label.",
"To measure the divergence between probability distributions, we can use well-known measures like the Kullback-Leibler divergence (Kullback and Leibler, 1951), the Jensen-Shannon divergence (Lin, 1991), and the Cross-Entropy, which is also used to quantify the error with one-hot encoded labels.",
"The main impediment to the direct use of soft labels as targets, though, is the lack of universally accepted performance metrics to evaluate the divergence between probability distributions.",
"(Most metrics lack an upper bound, making it difficult to assess prediction quality).",
"Usually, annotations are incorporated into the models without soft labels (Plank et al., 2014; Rodrigues and Pereira, 2018).",
"Where soft labels are used, they are variously filtered according to their distance from the correct labels and then used to weight the training instances rather than as prediction targets.",
"These models still predict only true labels (Jamison and Gurevych, 2015).",
"In contrast to previous approaches, we use MultiTask Learning (MTL) to predict a probability distribution over the soft labels as additional output.",
"We jointly model the main task of predicting standard gold labels and the novel auxiliary task of predicting the soft label distributions.",
"Due to the difficulty of interpreting its performance, we do not directly evaluate the distance between the target and the predicted probability distributions.",
"However, the MTL framework allows us to indirectly evaluate its effect on the main task.",
"Exploiting the standard metrics for gold labels, we can also compare the effect of different loss functions for the soft label task.",
"In particular, we propose a standard and an inverse version of the KL-divergence and Cross-Entropy.",
"In previous work (Jamison and Gurevych, 2015), filtering and weighting the training instances according to soft labels did not lead to consistent performance improvements.",
"In contrast, we find that the information carried by MTL soft labels does significantly improve model performance on several NLP tasks.",
"Contributions",
"1) We show that MTL models, trained with soft labels, consistently outperform the corresponding Single-Task Learning (STL) networks, and",
"2) we evaluate the use of different loss functions for soft labels.",
"For the experiments, we use different types of neural networks, depending on the type of task.",
"However, we create two versions of each model architecture: an STL model and an MTL model.",
"In STL, we predict the one-hot encoded labels.",
"In MTL, we add the auxiliary task of predicting the soft label distributions to the previous main task.",
"In both cases, we use Adam optimization (Kingma and Ba, 2014).",
"The loss function for the main task is standard cross-entropy.",
"For the auxiliary task, we have different options.",
"The KL-divergence is a natural choice to measure the difference between the prediction distribution Q and the distribution of soft labels P .",
"However, there are two ways we can do that, depending on what we want to capture.The standard KL-divergence is: DKL ( P || Q ) = (cid:88) i P ( i ) log 2 (cid:18) P ( i ) Q ( i ) (cid:19) , (1) This measures the divergence from Q to P and encourages a wide Q , because if the model overestimates the regions of small mass from P it will be heavily penalised.",
"The inverse KL-divergence is: DKL ( Q || P ) = (cid:88) i Q ( i ) log 2 (cid:18) Q ( i ) P ( i ) (cid:19) (2) This measures the divergence from P to Q and encourages a narrow Q distribution because the model will try to allocate mass to Q in all the places where P has mass; otherwise, it will get a strong penalty.",
"Considering that we use the auxiliary task to reduce overfitting on the main task, we expect equation 2 to be more effective because it encourages the model to learn a distribution that pays attention to the classes where the annotations possibly agree.",
"A third option is to directly apply Cross-Entropy.",
"H ( P || Q ) = H ( P ) + (cid:88) i P ( i ) log 2 (cid:18) P ( i ) Q ( i ) (cid:19) (3) = (cid:88) i P ( i ) log 2 ( Q ( i )) .",
"(4) Therefore, regular KL-divergence and Cross-Entropy tend to lead to the same performance.",
"For completeness, we report the results of Cross-Entropy as well.",
"As overall loss of the main and of the auxiliary task, we compute the two's sum.",
"We do not apply any normalization method to the two losses, as unnecessary.",
"We use LogSoftmax activation function for the main task, which is a standard choice for one-hot encoded labels, and standard Softmax for the auxiliary task.",
"Against the distributions of gold (one-hot encoded) and soft labels, both summing up to one, the errors are on the same scale.",
"We evaluate our approach on two NLP tasks: POS tagging and morphological stemming.",
"We use the respective data sets from Plank et al. (2014) and Jamison and Gurevych (2015) (where data sets are sufficiently large to train a neural model).",
"In both cases, we use data sets where both one-hot (gold) and probabilistic (soft) labels (i.e., distributions over labels annotations) are available.",
"The code for all models in this paper will be available on github.com/fornaciari.",
"Data set For this task, we use the data set released by Gimpel et al. (2010) with the crowdsourced labels provided by Hovy et al. (2014).",
"The same data set was used by Jamison and Gurevych (2015).",
"Similarly, we use the CONLL Universal POS tags (Petrov et al., 2012) and 5-fold cross-validation.",
"The soft labels come from the annotation of 177 annotators, with at least five annotations for each instance.",
"Differently from Jamison and Gurevych (2015), however, we also test the model on a completely independent test set, released by Plank et al. (2014).",
"This data set does not contain soft labels.",
"However, they are not necessary to test our models.",
"Model We use a tagging model that takes two kinds of input representations, at the character and the word level (Plank et al., 2016).",
"At the character level, we use character embeddings trained on the same data set; at the word level, we use Glove embeddings (Pennington et al., 2014).",
"We feed the word representation into a context bi-RNN', selecting the hidden state of the RNN at the target word's position in the sentence.",
"The character representation is then fed into a sequence bi-RNN', whose output is its final state.",
"The two outputs are concatenated and passed to an attention mechanism, as proposed by Vaswani et al. (2017).",
"In the STL models, the attention mechanisms' output is passed to a last attention mechanism and to a fully connected layer that gives the output.",
"In the MTL models, the last two components of the STL network (attention + fully connected layer) are duplicated and used for the auxiliary task, providing softmax predictions.",
"Data set We use the data set used in Jamison and Gurevych (2015), which was originally created by Carpenter et al. (2009).",
"It consists of (word, stem) pairs, and the task is a binary classification task of whether the stem belongs to the word.",
"The soft labels come from 26 unique annotators, and each instance received at least four labels.",
"Model We represent each (word, stem) -pair with the same character embeddings trained for the previous task.",
"Each representation passes to two convolutional/max-pooling layers.",
"We use two convolutional layers with 64 and 128 channels and three windows of 3, 4, and 5 characters size.",
"Their outputs are connected with two independent attention mechanisms (Vaswani et al., 2017).",
"Their output is concatenated and passed directly to the fully connected layers one for each task -, which provide the prediction.",
"In the MTL models, the concatenation of the attention mechanisms is passed to another fully connected layer, which predicts the soft labels.",
"To account for the effects of random initializations, we run ten experiments for each experimental condition.",
"During the training, we select the models relying on the F-measure observed on the development set.",
"We report the averaged results for accuracy and F-measure, the metrics used by the studies we compare to.",
"For each task, we compare the STL and MTL models.",
"Where possible, we compare model performance with previous work.",
"Table 1 shows the results.",
"The MTL models significantly outperform the STL ones, and in most cases, the previous state-of-the-art as well.",
"We evaluate STL and MTL's significance via bootstrap sampling, following Berg-Kirkpatrick et al. (2012); Sgaard et al. (2014).",
"However, we verified that the gold labels do not correspond to the classes resulting from the majority voting of the annotations used for the soft labels.",
"Consequently, the MTL models exploit an additional source of information that is not provided to the STL ones.",
"To validate our hypothesis, we need to exclude that the reason for the MTL's success is not simply that the soft labels inject more information into the models, We ran a set of experiments where the main task was trained on the majority voting (silver) labels from the annotations, rather than on the gold labels.",
"We still performed the tests on the gold labels.",
"In these conditions, both tasks rely on the same source of (imperfect) information, so MTL has no potential advantage over STL.",
"While overall performance drops compared to the results of Table 1, Table 2 shows that the MTL models still maintain a significant advantage over the STL ones.",
"As before, results are averaged over ten independent runs for each condition.",
"We perform the following analysis for the POS and the stemming tasks, and for each kind of loss function in the MTL models.",
"In particular, we consider four-conditions of the predictions:",
"1) where both STL and MTL gave the correct answer,",
"2) where both gave the wrong answer,",
"3) where STL was correct and MTL incorrect, and",
"4) where MTL was correct and STL incorrect (see confusion matrix in Table",
"For each of these categories, we compute the relative kurtosis of the soft labels.",
"We choose this measure as it describes how uniform the probability distribution is: whether the annotators agree on a single class, or whether they disperse their votes among different classes.",
"Not surprisingly, we find the highest average kurtosis where both STL and MTL models give the correct prediction.",
"Both kinds of models find it easier to predict the instances that are also unambiguous for the annotators.",
"The opposite holds as well: the instances where both MTL and STL models are wrong show the lowest mean kurtosis.",
"More interesting is the outcome where MTL models are correct and STL wrong, and vice-versa.",
"In these cases, the average kurtosis lies between the two previous extremes.",
"Also, we find a consistent trend across data sets and MTL loss-functions: the instances where only the MTL models are correct show a slightly higher kurtosis than those instances where only the STL models give the right answer.",
"To measure the significance of this trend, we apply the Mann-Whitney rank test (Mann and Whitney, 1947).",
"We use a non-parametric test because the kurtoses' distribution is not normal.",
"We find two significant results: when we use Cross-Entropy as MTL loss-function in the POS data set, and with the KL inverse on the Stemming data set.",
"We report the POS results in table 3.",
"Similarly to the previous sections 1 and 2, the results refer to 10 runs of each experimental condition.",
"This finding suggests that, when dealing with ambiguous cases, the soft labels tend to provide a qualified hint.",
"It is training the models to predict the classes that seem to be the most probable for the annotators.",
"One line focuses on the aggregation of multiple annotations before model training.",
"Seminal work includes the proposal by Dawid and Skene (1979), who proposed an Expectation-Maximization (EM) based aggregation model.",
"This model has since influenced a large body of work on annotation aggregation, and modeling annotator competence (Carpenter et al., 2009; Hovy et al., 2013; Raykar et al., 2010; Paun et al., 2018; Ruiz et al., 2019).",
"In our experiments on POS-tagging, we evaluated the possibility of testing Dawid-Skene labels rather than Majority Voting, finding that the performance of the two against the gold standard was mostly the same.",
"Some of these methods also evaluate the annotators' expertise (Dawid and Skene, 1979; Raykar et al., 2010; Hovy et al., 2013; Ruiz et al., 2019).",
"Others just penalize disagreement (Pan et al., 2019).",
"The second line of work focuses on filtering out presumably low quality data to train on the remaining data (Beigman Klebanov and Beigman, 2014; Jamison and Gurevych, 2015).",
"However, such filtering strategies require an effective filtering threshold, which is non-trivial; relying only on high-agreement cases also results in worse performance (Jamison and Gurevych, 2015).",
"Some studies (Goldberger and Ben-Reuven, 2016; Han et al., 2018b,a) treat disagreement as a corruption of a theoretical gold standard.",
"Since the robustness of machine learning models is affected by the data annotation quality, reducing noisy labels generally improves the models' performance.",
"The closest to our work are the studies of Cohn and Specia (2013) and Rodrigues and Pereira (2018), who both use MTL.",
"In contrast to our approach, though, each of their tasks represents an annotator.",
"We instead propose to learn from both the gold labels and the distribution over multiple annotators, which we treat as soft label distributions in a single auxiliary task.",
"Compared to treating each annotator as a task, our approach has the advantage that it requires fewer output nodes, which reduces the number of parameters.",
"To our knowledge, the only study that directly uses soft labels is the one by Lalor et al. (2017).",
"Different from our study, they assume that soft labels are available only for a subset of the data.",
"Therefore they use them to fine-tune STL networks.",
"Despite this methodological difference, their findings support this paper's intuition that soft labels carry signal rather than noise.",
"In a broad sense, our study belongs to the research area of regularization methods for neural networks.",
"Among them, label smoothing (Pereyra et al., 2017) penalizes the cases of over-confident network predictions.",
"Both label smoothing and soft labels reduce overfitting regulating the loss size.",
"However, label smoothing relies on the gold labels' distribution, not accounting for the instances' inherent ambiguity, while soft labels selectively train the models to reduce the confidence when dealing with unclear cases, not affecting the prediction of clear cases.",
"Disagreement also relates to the issue of annotator biases (Shah et al., 2020; Sap et al., 2019; Hovy and Yang, 2021), and our method can provide a possible way to address it.",
"We propose a new method for leveraging instance ambiguity, as expressed by the probability distribution over label annotations.",
"We set up MTL models to predict this label distribution as an auxiliary task in addition to the standard classification task.",
"This setup allows us to incorporate uncertainty about the instances' class membership into the model.",
"Across two NLP tasks, three data sets, and three loss functions, we always find that our method significantly improves over the STL performance.",
"While the performance difference between the loss functions is not significant, we find that the inverse version of KL gives the best results in all the experimental conditions but one.",
"This finding supports our idea of emphasizing the coders' disagreement during training.",
"We conjecture that predicting the soft labels acts as a regularizer, reducing overfitting.",
"That effect is especially likely for ambiguous instances, where annotators' label distributions differ especially strongly from one-hot encoded gold labels.",
"DH and TF are members of the Data and Marketing Insights Unit at the Bocconi Institute for Data Science and Analysis.",
"AU, MP and SP were funded by the DALI project, ERC Grant 695662.",
"We would like to thank the various reviewers who suggested valuable edits and additions."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"objective",
"abstain",
"objective",
"abstain",
"result",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"result",
"objective",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"We consider the distinction between intended and perceived sarcasm in the context of textual sarcasm detection.",
"The former occurs when an utterance is sarcastic from the perspective of its author, while the latter occurs when the utterance is interpreted as sarcastic by the audience.",
"We show the limitations of previous labelling methods in capturing intended sarcasm and introduce the iSarcasm dataset of tweets labeled for sarcasm directly by their authors.",
"Examining the state-of-the-art sarcasm detection models on our dataset showed low performance compared to previously studied datasets, which indicates that these datasets might be biased or obvious and sarcasm could be a phenomenon under-studied computationally thus far.",
"By providing the iSarcasm dataset, we aim to encourage future NLP research to develop methods for detecting sarcasm in text as intended by the authors of the text, not as labeled under assumptions that we demonstrate to be sub-optimal.",
"Sarcasm is a form of irony that occurs when there is some discrepancy between the literal and intended meanings of an utterance.",
"This discrepancy is used to express dissociation towards a previous proposition, often in the form of contempt or derogation (Wilson, 2006).",
"Sarcasm is omnipresent in social media text and can be highly disruptive of systems that harness this data for sentiment and emotion analysis (Maynard and Greenwood, 2014).",
"It is therefore imperative to devise models for sarcasm detection.",
"The effectiveness of such models depends on the availability and quality of labelled data used for training.",
"Collecting such data is challenging due to the subjective nature of sarcasm.",
"For instance, Dress et al. (2008) notice a lack of consistence in how sarcasm is used by people of different socio-cultural backgrounds.",
"As a result, an utterance intended sarcastic by its author might not be perceived as such by audiences of different backgrounds (Rockwell and Theriot, 2001; Oprea and Magdy, 2020).",
"There are two methods used so far to label texts for sarcasm: distant supervision, where texts are considered sarcastic if they meet predefined criteria, such as including specific hashtags; and manual labelling by human annotators.",
"We believe both methods are sub-optimal for capturing the sarcastic intention of the authors of the texts.",
"As a result, existing models trained on such datasets might be optimized to capture the noise induced by these labelling methods.",
"In this paper, we present the iSarcasm dataset of tweets labelled for sarcasm by their authors.",
"To our knowledge, this is the first attempt to create noise-free examples of intended sarcasm.",
"In a survey, we asked Twitter users to provide both sarcastic and non-sarcastic tweets that they had posted in the past.",
"For each sarcastic tweet, we asked them to explain why it was sarcastic and how they would convey the same meaning non-sarcastically.",
"Labels were thus implicitly specified by the authors themselves.",
"We implemented restrictive quality control to exclude spurious survey responses.",
"We then asked a trained linguist to manually check the sarcastic tweets and further label them into the subcategories of sarcasm defined by Leggitt and Gibbs Jr. (2000).",
"We further collected third-party sarcasm labels for the tweets in iSarcasm from workers on a crowd-sourcing platform.",
"Third-party annotation for sarcasm has been conducted before (Filatova, 2012; Riloff et al., 2013; Abercrombie and Hovy, 2016), but no studies checked the ability of the annotators to capture the actual sarcasm meant by the authors.",
"On iSarcasm, annotators recognise author labels with an F-score of 0.616.",
"This indicates that sarcasm is a subjective phenomenon, challenging even for humans to detect.",
"Further, it demonstrates that using third-party annotators to label texts for sarcasm can lead to inaccurate labels.",
"We implemented state-of-the-art sarcasm detection models (Tay et al., 2018; Hazarika et al., 2018; Van Hee et al., 2018) and tested them on iSarcasm, to investigate their effectiveness in capturing sarcasm as intended by the authors.",
"While these models achieve F-scores reaching 0.874 on existing datasets, they yield a maximum F-score of 0.364 on iSarcasm, suggesting that previous datasets might be biased or obvious.",
"This highlights the importance of developing new approaches for sarcasm detection that are more effective at capturing author intention.",
"iSarcasm contains 4,484 English tweets, each with an associated intended sarcasm label provided by its author, with a ratio of roughly 1:5 of sarcastic to non-sarcastic tweets.",
"Each sarcastic tweet has an extra label indicating the category of sarcasm it belongs to.",
"We publish the dataset publicly for research purposes 1 .",
"The way sarcasm is used can vary across sociocultural backgrounds.",
"Dress et al. (2008) notice that members of collectivist cultures tend to express sarcasm in a more subtle way than individualists.",
"They also point out gender differences.",
"Females seem to have a more self-deprecating attitude when using sarcasm than males.",
"Rockwell and Theriot (2001) find some cultures to associate sarcasm with humour more than others.",
"There are also cultures who do not use sarcasm at all, such as the Hua, a group of New Guinea Highlanders (Attardo, 2002).",
"Because of these differences, an utterance intended sarcastic by its author might not be perceived as such by the audience (Jorgensen et al., 1984).",
"Conversely, the audience could perceive the utterance as sarcastic, even if it was not intended as such.",
"The distinction between intended and perceived sarcasm, also referred to as encoded and decoded sarcasm, respectively, has been pointed out in previous research (Kaufer, 1981; Rockwell and Theriot, 2001).",
"However, it has not been considered in a computational context thus far when building datasets for textual sarcasm detection.",
"We believe accounting for it is essential, especially nowadays.",
"Consider social media posts that can reach audiences of unprecedented sizes.",
"It is important to consider both the communicative intention of the author, for tasks such as opinion mining, as well 1 https://github.com/silviu-oprea/iSarcasm as possible interpretations by audiences of different sociocultural backgrounds, for tasks such as hate-speech detection.",
"Two methods were used so far to label texts for sarcasm: distant supervision and manual labelling.",
"Distant supervision This is by far the most common method.",
"Texts are considered positive examples (sarcastic) if they meet predefined criteria, such as containing specific tags, such as #sarcasm for Twitter data (Ptacek et al., 2014), and /s for Reddit data (Khodak et al., 2018), or being posted by specific social media accounts (Barbieri et al., 2014a).",
"Negative examples are usually random posts that do not match the criteria.",
"Table 1 gives an overview of datasets constructed this way, along with tags or accounts they associate with sarcasm.",
"The main advantage of distant supervision is that it allows building large labelled datasets with no manual effort.",
"However, as we discuss in Section 3, the labels produced can be very noisy.",
"Manual labelling An alternative to distant supervision is collecting texts and presenting them to human annotators for labelling.",
"Filatova (2012) asks annotators to find pairs of Amazon reviews where one is sarcastic and the other one is not, collecting 486 positive and 844 negative examples.",
"Abercrombie and Hovy (2016) annotate 2,240 Twitter conversations, ending up with 448 positive and 1,732 negative labels, respectively.",
"Riloff et al. (2013) use a hybrid approach, where they collect a set of 1,600 tweets that contain #sarcasm or #sar-castic, and another 1,600 without these tags.",
"They remove such tags from all tweets and present the tweets to a group of human annotators for final labelling.",
"We call this the Riloff dataset .",
"A similar approach is employed by Van Hee et al. (2018) who recently presented their dataset as part of a SemEval shared task for sarcasm detection.",
"It is a balanced dataset of 4,792 tweets.",
"We call it the SemEval-2018 dataset .",
"Based on the information considered when classifying a text as sarcastic or non-sarcastic, we identify two classes of models across literature: text-based models and contextual models.",
"sified.",
"Most work in this direction considers linguistic incongruity (Campbell and Katz, 2012) to be a marker of sarcasm.",
"Riloff et al. (2013) look for a positive verb in a negative sentiment context.",
"Bharti et al. (2015) search for a negative phrase in a positive sentence.",
"(Hernandez Faras et al., 2015) measure semantic relatedness between words using WordNet-based similarity.",
"Joshi et al. (2016b) use the cosine similarity between word embeddings.",
"Recent work (Tay et al., 2018) uses a neural intra-attention mechanism to capture incongruity.",
"Contextual models These models utilize information from both the text and the context of its disclosure, such as author information.",
"There is a limited amount of work in this direction.",
"Using Twitter data, Bamman and Smith (2015a) represent author context as manually-curated features extracted from their historical tweets.",
"Amir et al. (2016) merge all historical tweets into one document and use the Paragraph Vector model (Le and Mikolov, 2014) to build an embedding of that document.",
"Building on this, Hazarika et al. (2018) extract additional personality features from the merged historical tweets with a model pre-trained on a personality detection corpus.",
"Using the same strategy, Oprea and Magdy (2019) build separate embeddings for each historical tweet and identify author context with their the weighted average.",
"Despite reporting encouraging results, all previous models are trained and tested on datasets annotated via manual labelling, distant supervision, or a mix between them.",
"We believe both labelling methods are limited in their ability to capture sarcasm in texts as intended by the authors of the texts without noise.",
"In this section, we discuss limitations of current labelling methods that make them sub-optimal for capturing intended sarcasm.",
"We demonstrate them empirically on the Riloff dataset (Riloff et al., 2013), which uses a hybrid approach for labelling.",
"Since it is based on signals provided by the authors, distant supervision might seem like a candidate for capturing intended sarcasm.",
"However, we identify a few fundamental limitations with it.",
"First, the tags may not mark sarcasm, but may constitute the subject or object of conversation, e.g. #sarcasm annoys me!",
".",
"This could lead to false positives.",
"Second, when using tags such as #politics and #ed-ucation (Barbieri et al., 2014b), there is a strong underlying assumption that these tags are accompanied by sarcasm, potentially generating further false positives.",
"The assumption that some accounts always generate sarcasm (Barbieri et al., 2014a) is similarly problematic.",
"In addition, the intended sarcasm that distant supervision does capture might be of a specific flavor, such that, for instance, the inclusion of a tag would be essential to ensure infer-ability.",
"Building a model trained on such a dataset might, therefore, be biased to a specific flavour of sarcasm, being unable to capture other flavours, with tag without tag annot.",
"increasing the risk of false negatives and limiting the ability of trained models to generalise.",
"Finally, if a text does not contain the predefined tags, it is considered non-sarcastic.",
"This is a strong and problematic assumption that can lead to false negatives.",
"The main limitation of manual labelling is the absence of evidence on the intention of the author of the texts that are being labelled.",
"Annotator perception may be different to author intention, in light of studies that point out how sarcasm perception varies across socio-cultural contexts (Rockwell and Theriot, 2001; Dress et al., 2008).",
"Joshi et al. (2016a) provide more insight into this problem on the Riloff dataset.",
"They present the dataset, initially labelled by Americans, to be labelled by Indians who are trained linguists.",
"They find higher disagreement between Indian and American annotators, than between annotators of the same nationality.",
"Furthermore, they find higher disagreement between pairs of Indian annotators, indicating higher uncertainty, than between pairs of American annotators.",
"They attribute these results to socio-cultural differences between India and the United States.",
"They conclude that sarcasm annotation expands beyond linguistic expertise and is dependent on considering such factors.",
"Labels provided by third-party annotators might therefore not reflect the sarcastic intention of the authors of the texts that are being labelled, making this labelling method sub-optimal for capturing intended sarcasm.",
"To investigate this further, we looked at the Riloff dataset, which is published as a list of labelled tweet IDs.",
"We could only retrieve 1,832 tweets, the others being removed from Twitter.",
"We looked at the agreement between the presence of tags and manual annotation.",
"Table 2 shows the results.",
"We notice that 58% of the tweets that contained the predefined hashtags were labeled non-sarcastic.",
"This disagreement between distant supervision and manual annotation provides further evidence to doubt the ability of the latter to capture intended sarcasm, at least not the flavor that distant supervision might capture.",
"We could not perform the same analysis on the SemEval-2018 dataset because only the text of the tweets is provided, hashtags are filtered out, and tweet IDs are not available.",
"As we have shown, both labelling methods use a proxy for labelling sarcasm, in the form of predefined tags, predefined sources, or third-party annotators.",
"As such, they are unable to capture the sarcastic intention of the authors of the texts they label, generating both false positives and false negatives.",
"Our objective is to create a noise-free dataset of texts labelled for sarcasm, where labels reflect the sarcastic intention of the authors.",
"We designed an online survey where we asked Twitter users to provide links to one sarcastic and three non-sarcastic tweets that they had posted in the past, on their timeline, or as replies to other tweets.",
"We made it clear that the tweets had to be their own and no retweets were allowed.",
"We further required that the tweets should not include references to multimedia content or, if such content was referred, it should not be informative in judging sarcasm.",
"For each sarcastic tweet, users had to provide, in full English sentences, an explanation of why it was sarcastic and a rephrase that would convey the same message non-sarcastically.",
"This way, we aimed to prevent them from misjudging the sarcastic nature of their previous tweets under experimental bias.",
"Finally, we asked for their age, gender, birth country and region, and current country and region.",
"We use the term response to refer to all data collected from one submission of the survey.",
"The provided links should point to tweets posted no sooner than 48 hours before the submission, to prevent users from posting and providing tweets on the spot; All tweets in a response should come from the same account; Tweets cannot be from verified accounts or accounts with more than 30K followers to avoid getting tweets from popular accounts and claiming to be personal tweets 2 .",
"2 The initial number was set to 5K, but some workers asked Tweets should contain at least 5 words, excluding any hashtags and URLs; Links to tweets should not have been submitted in a previous response; Responses submitted in less than three minutes are discarded.",
"Each contributor agreed on a consent form before entering the survey, which informed them that only the IDs of the tweets they provide will be made public, to allow them to delete a tweet anytime and thus be in control of their own privacy in the future.",
"They have agreed that we may collect public information from their profile, which is accessible via the Twitter API as long as the tweets pointed to by the provided IDs are not removed.",
"We published our survey on multiple crowd-sourcing platforms, including Figure-Eight (F8), Amazon Mechanical Turk (AMT) and Prolific Academic (PA) 3 .",
"We could not get any quality responses from F8.",
"In fact, most of our quality control steps were developed over multiple iterations on F8.",
"On AMT, we retrieved some high quality responses, but, unfortunately, AMT stopped our job, considering that getting links to personal tweets of participants violates their policy.",
"We collected the majority of responses on PA. 4.2 Labelling Sarcasm Categories We then asked a trained linguist to inspect each collected sarcastic tweet, along with the explanation provided by the author and the non-sarcastic rephrase, in order to validate the quality of the response and further assign the tweet to one of the following categories of ironic speech defined by Leggitt and Gibbs Jr. (2000):",
"1. sarcasm : tweets that contradict the state of affairs and are critical towards an addressee;",
"2. irony : tweets that contradict the state of affairs but are not obviously critical towards an addressee;",
"3. satire : tweets that appear to support an addressee, but contain underlying disagreement and mocking;",
"4. understatement : tweets that undermine the importance of the state of affairs they refer to;",
"5. overstatement : tweets that describe the state of affairs in obviously exaggerated terms; us to raise it since they had more followers.",
"3 AMT: www.mturk.com , PA: prolific.ac , F8: www.figure-eight.com",
"6. rhetorical question : tweets that include a question whose invited inference (implicature) is obviously contradicting the state of affairs;",
"7. invalid : tweets for which the explanation provided by their authors is unclear/unjustified.",
"These were excluded from the dataset.",
"In this part, we decided to replicate the manual annotation approach presented in previous research (Riloff et al., 2013; Abercrombie and Hovy, 2016; Van Hee et al., 2018) on part of our dataset, which we consider later as the test set, and compare the resulting perceived sarcasm labels to the intended sarcasm labels collected from the authors of the tweets.",
"Our aim was to estimate the human performance in detecting sarcasm as intended by the authors.",
"When collecting perceived sarcasm labels, we aimed to reduce noise caused by variations in how sarcasm is defined across socio-cultural backgrounds.",
"Previous studies have shown gender (Dress et al., 2008) and country (Joshi et al., 2016a) to be the variables that are most influen-tial on this definition.",
"Based on their work, we made sure all annotators shared the same values for these variables.",
"We used PA to collect three annotations for each tweet in the iSarcasm dataset, and considered the dominant one as the label, which follows the same procedure as with building the Riloff dataset (Riloff et al., 2013).",
"We received 1,236 responses to our survey.",
"Each response contained four tweets labelled for sarcasm by their author, one sarcastic and three non-sarcastic.",
"As such, we received 1,236 sarcastic and 3,708 non-sarcastic tweets.",
"We filtered tweets using the quality control steps described in Section 4, and further disregarded all tweets that fall under the invalid category.",
"The resulting dataset is what we call iSarcasm, containing 777 sarcastic and 3,707 non-sarcastic tweets.",
"For each sarcastic tweet, we have its author's explanation as to why it is sarcastic, as well as how they would rephrase the tweet to be non-sarcastic.",
"The average length of a tweet is around 20 words.",
"Figure 1 shows the tweet length distribution across iSarcasm.",
"The average length of explanations 21 words, and of rephrases 14 words.",
"Over 46% of the tweets were posted in 2019, over overall sarcasm category sarcastic non-sarcastic sarcasm irony satire underst.",
"Among the contributors who filled our survey and provided the tweets, 56% are from the UK and 41% from the US, while 3% are from other countries such as Canada and Australia.",
"51% are females, and over 72% are less than 35 years old.",
"Figure 2 shows the age and gender distributions across contributors.",
"In iSarcasm, we investigated the presence of the hashtags #sarcasm, #sarcastic, and others often used to mark sarcasm in previous distant supervision datasets.",
"None of our tweets contains any of those tags, which confirms one of our discussed limitations of this approach, that the lack of tags should not be associated with lack of sarcasm, and that these tags might capture only one flavor of sarcasm, not sarcasm present on social media in general.",
"Regarding the categories of sarcasm, assigned by the linguist to the sarcastic tweets, Table 3 shows the distribution of the tweets into these categories.",
"As shown, sarcasm and irony are the largest two categories (73%), while understatement is the smallest one (with only 12 tweets).",
"Table 4 shows examples of the sarcastic tweets, along with the explanations and rephrases provided by the authors.",
"iSarcasm is published as two files, a training set and a test set, containing 80% and 20% of the examples chosen at random, respectively.",
"Each file contains tweet IDs along with corresponding intended sarcasm labels.",
"For sarcastic tweets we also provide the category of ironic speech they be-0 20 40 60 Number of words in tweet 0 1 2 3 4 5 P e r c en t age o f t w ee t s Figure 1: Tweet length distribution across iSarcasm.",
"long to.",
"This is in accordance with the consent form that the contributors have agreed to, whose privacy we take seriously.",
"Nonetheless, we still offer the tweets text along with the explanations and rephrases of the sarcastic tweets provided by the authors for free for research purposes, under an agreement that protects the privacy of our contributors.",
"As we mentioned earlier, we collected three third-party labels for each tweet in the test set of iSarcasm.",
"Using Cohen's kappa ( ; Cohen (1960)) as a measure, the pairwise inter-annotator agreement (IAA) scores were 12 = 0 .",
"37 , 13 = 0 .",
"39 and 23 = 0 .",
"36 , which highlights the high subjectivity of the task.",
"We used majority voting to select the final perceived sarcasm label for each tweet.",
"Table 5 shows the disagreement between the intended and perceived labels.",
"As shown, 30% of the sarcastic tweets were unrecognised by the annotators, while 45% of the tweets perceived as sarcastic were actually not intended to be sarcastic perc.",
"by their authors.",
"This supports our argument that third-party annotation for sarcasm should not be trusted.",
"In the following, we examine the effectiveness of state-of-the-art sarcasm detection models on iSarcasm.",
"We aim to investigate their ability to detect intended sarcasm rather than sarcasm labeled using distant supervision or manual annotation.",
"As we have shown, these labelling methods could produce noisy labels.",
"We experiment with those models that have achieved state-of-the-art results on previous benchmark datasets for sarcasm detection.",
"We consider four previously published datasets.",
"Two of them, Riloff (Riloff et al., 2013) and SemEval-2018 (Van Hee et al., 2018), were labeled via a hybrid approach of distant supervision for initial collection and manual annotation for actual labelling.",
"The other two datasets, Ptacek (Ptacek et al., 2014) and SARC (Khodak et al., 2018), are labeled using distant supervision.",
"As mentioned earlier, we managed to collect 1,832 tweets from the Riloff dataset.",
"SemEval-2018 is a balanced dataset consisting of 4,792 tweets.",
"For the Ptacket dataset, we collected 27,177 tweets out of the 50K published tweet IDs.",
"Finally, The SARC datasets consists of Reddit comments.",
"In a setting similar to Hazarika et al. (2018) who publish state-of-the-art results on this dataset, we consider two variants of SARC.",
"SARC-balanced contains 154,702 comments with the same number of sarcastic and non-sarcastic comments, while SARC-imbalanced contains 103,135 comments with a ratio of about 20:80 between sarcastic and non-sarcastic comments.",
"Riloff and Ptacek datasets We replicate the models implemented in (Tay et al., 2018), who report state-of-the-art results on Riloff and Ptacek.",
"These models are: LSTM first encodes the tweet with a recurrent neural network with long-term short memory units (LSTM; Hochreiter and Schmidhuber (1997)), then adds a binary softmax layer to output a probability distribution over labels (sarcastic or non-sarcastic) and assigns the most probable label.",
"It has one hidden layer of dimension 100.",
"Att-LSTM adds an attention mechanism on top of the LSTM, in the setting specified by Yang et al. (2016).",
"In particular, it uses the attention mechanism introduced by Bahdanau et al. (2014) of dimension 100.",
"CNN encodes the tweet with a convolutional neural network (CNN) with 100 filters of size 3 and provides the result to feed-forward network with a final binary softmax layer, choosing the most probable label.",
"SIARN (Single-Dimension Intra-Attention Network; Tay et al. (2018)) is the model that yields the best published performance on the Riloff dataset.",
"It relies on the assumption that sarcasm is caused by linguistic incongruity between words.",
"It uses an intra-attention mechanism (Shen et al., 2018) between each pair or words to detect this incongruity.",
"MIARN (Multi-Dimension Intra-Attention Network; Tay et al. (2018)) reports the best results on the Ptacek dataset.",
"In addition to SIARN, MIARN allows multiple intra-attention scores for each pair of words to account for multiple possible meanings of a word when detecting incongruity.",
"We use an implementation of MIARN similar to that described by its authors.",
"We set the dimension of all hidden layers of SIARN and MIARN to 100.",
"SARC datasets Hazarika et al. (2018) report the best results on SARC-balanced and SARC-imbalanced, to our knowledge.",
"However, they model both the content of the comments as well as contextual information available about the authors.",
"In this paper we only focus on content modelling, using a convolutional network ( 3CNN ) in a setting similar to what they describe.",
"3CNN uses three filter types of sizes 3, 4, and 5, with 100 filters for each size.",
"SemEval-2018 dataset The SemEval dataset contains two types of labels for each tweet: binary labels that specify whether the tweet is sarcastic or not; and labels with four possible values, specifying the type of sarcasm present 4 .",
"Wu et al. (2018) report the best results on both tasks with their Dense-LSTM model.",
"Given a tweet, the 4 We use sarcasm to mean what they refer to as verbal irony.",
"model uses a sequence of four LSTM layers to compute a hidden vector H .",
"H is then concatenated with a tweet embedding S computed in advance by averaging embeddings of all words inside using the pre-trained embeddings provided by Bravo-Marquez et al. (2016).",
"H and S are further concatenated with a sentiment feature vector of the tweet computed in advance using the weka toolkit (Mo-hammad and Bravo-Marquez, 2017), by applying the TweetToLexiconFeatureVector (Bravo-Marquez et al., 2014) and TweetToSentiStrengthFeatureVec-tor (Thelwall et al., 2012) filters.",
"The authors of Dense-LSTM train the network in a multitask setting on the SemEval dataset (Van Hee et al., 2018) to predict three components: the binary sarcasm label, one of the four types of sarcasm, and the corresponding hashtag, if any, that was initially used to mark the tweet as sarcastic, out of #sarcasm, #sar-castic, #irony and #not.",
"Wu et al. (2018) report an F-score of 0.674 using a fixed dropout rate of 0.3 in all layers.",
"They further report an F-score of 0.705 by averaging the performance of 10 Dense-LSTM models, varying the dropout rate to random values between 0.2 and 0.4.",
"We implement and train it to only predict the binary sarcasm label, to make it applicable to iSarcasm and make the results on SemEval-2018 and iSarcasm comparable.",
"For each previous dataset, we implemented the models reported previously to achieve the best performance on that dataset, and made sure our implementations achieve similar performance to the published one.",
"This is confirmed in Table 6, providing confidence in the correctness of our implementations.",
"Table 7 reports precision, recall and f-score results on the test set of iSarcasm using the detection models discussed, alongside third-party annotator performance.",
"As shown, all the models perform sig-nificantly worse than humans, who achieve an F-score of only 0.616.",
"MIARN is the best performing model with a considerably low F-score of 0.364, compared to its performance on the Riloff and Ptacek datasets (0.741 and 0.874 F-scores respec-tively).",
"3CNN achieves the lowest performance on iSarcasm with an F-Score of 0.286 compared to 0.675 and 0.788 on SARC balanced and im-balanced, respectively.",
"Similarly, Dense-LSTM achieves 0.318, compared to 0.666 on SemEval-2018.",
"Previous models that achieved high performance in detecting sarcasm on datasets sampling perceived sarcasm (third-party labels) or hash-tagged sarcasm (distant supervision) have failed dramatically to detect sarcasm as meant by its author.",
"This motivates the need to develop more effective methods for detecting intended sarcasm.",
"Potentially, building models that account for sociocultural traits of the authors (available on, or inferred from, their Twitter profiles), or consider other contextual elements to judge the sarcasm in our dataset (Rock-well and Theriot, 2001).",
"Previous research has considered certain contextual elements (Bamman and Smith, 2015b; Amir et al., 2016; Hazarika et al., 2018; Oprea and Magdy, 2019), but only on sarcasm captured by previous labelling methods.",
"authors, shall revolutionise research in sarcasm detection in the future; and open the direction for new sub-tasks, such as sarcasm category prediction, and sarcasm decoding/encoding, using information found both in the tweets themselves, and in the explanations and rephrases provided by the authors, available with each sarcastic tweet in the dataset.",
"In this paper, we presented iSarcasm, a dataset of intended sarcasm consisting of 4,484 tweets labeled and explained by their authors, and further revised and categorised by an expert linguistic.",
"We believe this dataset will allow future work in sarcasm detection to progress in a setting free of the noise found in existing datasets.",
"We saw that computational models perform poorly in detecting sarcasm in the new dataset, indicating that the sarcasm detection task might be more challenging compared to how it was seen in earlier research.",
"We aim to promote research in sarcasm detection, and to encourage future investigations into sarcasm in general and how it is perceived across cultures.",
"This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1); the University of Edinburgh; and The Financial Times."
] | [
"method",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"other"
] |
[
"Transfer learning based on pretraining language models on a large amount of raw data has become a new norm to reach state-of-the-art performance in NLP.",
"Still, it remains unclear how this approach should be applied for unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available.",
"In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages.",
"Some languages greatly benefit from transfer learning and behave similarly to closely related high resource languages whereas others apparently do not.",
"Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages.",
"We show that transliterating those languages significantly improves the potential of large-scale multilingual language models on downstream tasks.",
"This result provides a promising direction towards making these massively multilingual models useful for a new set of unseen languages.",
"1 1 Introduction Language models are now a new standard to build state-of-the-art Natural Language Processing (NLP) systems.",
"In the past year, monolingual language models have been released for more than 20 languages including Arabic, French, German, and Italian (Antoun et al., 2020; Martin et al., 2020; de Vries et al., 2019; Caete et al., 2020; Kuratov and Arkhipov, 2019; Schweter, 2020, inter alia).",
"Additionally, large-scale multilingual models covering more than 100 languages are now available (XLM-R by Conneau et al. (2020) and mBERT by Devlin et al. (2019)).",
"Still, most of the 6500+ spoken languages in the world (Hammarstrm, 2016) are not coveredremaining unseenby 1 Code available at https://github.com/benjami n-mlr/mbert-unseen-languages.git those models.",
"Even languages with millions of native speakers like Sorani Kurdish (about 7 million speakers in the Middle East) or Bambara (spoken by around 5 million people in Mali and neighboring countries) are not covered by any available language models at the time of writing.",
"Even if training multilingual models that cover more languages and language varieties is tempting, the curse of multilinguality (Conneau et al., 2020) makes it an impractical solution, as it would require to train ever larger models.",
"Furthermore, as shown by Wu and Dredze (2020), large-scale multilingual language models are sub-optimal for languages that are under-sampled during pretraining.",
"In this paper, we analyze task and language adaptation experiments to get usable language model-based representations for under-studied low resource languages.",
"We run experiments on 15 typologically diverse languages on three NLP tasks: part-of-speech (POS) tagging, dependency parsing (DEP) and named-entity recognition (NER).",
"Our results bring forth a diverse set of behaviors that we classify in three categories reflecting the abilities of pretrained multilingual language models to be used for low-resource languages.",
"We dub those categories Easy, Intermediate and Hard.",
"Hard languages include both stable and endangered languages, but they predominantly are languages of communities that are majorly under-served by modern NLP.",
"Hence, we direct our attention to these Hard languages.",
"For those languages, we show that the script they are written in can be a critical element in the transfer abilities of pretrained multilingual language models.",
"Transliterating them leads to large gains in performance outperforming non-contextual strong baselines.",
"To sum up, our contributions are the following: We propose a new categorization of the low-resource languages that are unseen by available language models: the Hard, the Intermediate and the Easy languages.",
"We show that Hard languages can be better addressed by transliterating them into a better-handled script (typically Latin), providing a promising direction towards making multilingual language models useful for a new set of unseen languages.",
"As Joshi et al. (2020) vividly illustrate, there is a large divergence in the coverage of languages by NLP technologies.",
"The majority of the 6500+ of the world's languages are not studied by the NLP community, since most have few or no annotated datasets, making systems' development challenging.",
"The development of such models is a matter of high importance for the inclusion of communities, the preservation of endangered languages and more generally to support the rise of tailored NLP ecosystems for such languages (Schmidt and Wiegand, 2017; Stecklow, 2018; Seddah et al., 2020).",
"In that regard, the advent of the Universal Dependencies project (Nivre et al., 2016) and the WikiAnn dataset (Pan et al., 2017) have greatly increased the number of covered languages by providing annotated datasets for more than 90 languages for dependency parsing and 282 languages for NER.",
"Regarding modeling approaches, the emergence of multilingual representation models, first with static word embeddings (Ammar et al., 2016) and then with language model-based contextual representations (Devlin et al., 2019; Conneau et al., 2020) enabled transfer from high to low-resource languages, leading to significant improvements in downstream task performance (Rahimi et al., 2019; Kondratyuk and Straka, 2019).",
"Furthermore, in their most recent forms, these multilingual models process tokens at the sub-word level (Kudo and Richardson, 2018).",
"As such, they work in an open vocabulary setting, only constrained by the pretraining character set.",
"This flexibility enables such models to process any language, even those that are not part of their pretraining data.",
"When it comes to low-resource languages, one direction is to simply train contextualized embedding models on whatever data is available.",
"Another option is to adapt/fine-tune a multilingual pretrained model to the language of interest.",
"We briefly discuss these two options.",
"of pretraining data seems to correlate with downstream task performance (e.g. compare BERT and RoBERTa (Liu et al., 2020)), several attempts have shown that training a model from scratch can be efficient even if the amount of data in that language is limited.",
"Indeed, Ortiz Surez et al. (2020) showed that pretraining ELMo models (Peters et al., 2018) on less than 1GB of text leads to state-of-the-art performance while Martin et al. (2020) showed that pretraining a BERT model on as few as 4GB of diverse enough data results in state-of-the-art performance.",
"Micheli et al. (2020) further demonstrated that decent performance was achievable with only 100MB of raw text data.",
"Adapting large-scale models for low-resource languages Multilingual language models can be used directly on unseen languages, or they can also be adapted using unsupervised methods.",
"For example, Han and Eisenstein (2019) successfully used unsupervised model adaptation of the English BERT model to Early Modern English for sequence labeling.",
"Instead of fine-tuning the whole model, Pfeiffer et al. (2020) recently showed that adapter layers (Houlsby et al., 2019) can be injected into multilingual language models to provide parameter efficient task and language transfer.",
"Still, as of today, the availability of monolingual or multilingual language models is limited to approximately 120 languages, leaving many languages without access to valuable NLP technology, although some are spoken by millions of people, including Bambara and Sorani Kurdish, or are an official language of the European Union, like Maltese.",
"What can be done for unseen languages?",
"Unseen languages strongly vary in the amount of available data, in their script (many languages use non-Latin scripts such as Sorani Kurdish and Min-grelian), and in their morphological or syntactical properties (most largely differ from high-resource Indo-European languages).",
"This makes the design of a methodology to build contextualized models for such languages challenging at best.",
"In this work, by experimenting with 15 typologically diverse unseen languages,",
"(i) we show that there is a diversity of behavior depending on the script, the amount of available data, and the relation to the pretraining languages;",
"(ii) Focusing on the unseen languages that lag in performance compared to their easier-to-handle counterparts, we show that the script plays a critical role in the transfer abilities of multilingual language models.",
"Transliterating such languages to a script which is used by a related language seen during pretraining can lead to significant improvement in downstream performance.",
"We will refer to any languages that are not covered by pretrained language models as unseen.",
"We select a small portion of those languages within a large scope of language families and scripts.",
"Our selection is constrained to 15 typologically diverse languages for which we have evaluation data for at least one of our three downstream tasks.",
"Our selection includes low-resource Indo-European and Uralic languages, as well as members of the Bantu, Semitic, and Turkic families.",
"None of these 15 languages are included in the pretraining corpora of mBERT.",
"Information about their scripts, language families, and amount of available raw data can be found in the Appendix in Table 12.",
"To perform pretraining and fine-tuning on monolingual data, we use the deduplicated datasets from the OSCAR project (Ortiz Surez et al., 2019).",
"OSCAR is a corpus extracted from a Common Crawl Web snapshot.",
"2 It provides a significant amount of data for all the unseen languages we work with, except for Buryat, Meadow Mari, Erzya and Livvi for which we use Wikipedia dumps and for Narabizi, Naija and Faroese, for which we use data collected by Seddah et al. (2020), Caron et al. (2019) and Biemann et al. (2007) respectively.",
"For parsing and POS tagging, we use the UDPipe future system (Straka, 2018) as our baseline.",
"This model is a LSTM-based (Hochreiter and Schmid-huber, 1997) recurrent architecture trained with pretrained static word embedding (Mikolov et al., 2013) (hence our non-contextual characterization) along with character-level embeddings.",
"This system was ranked in the very first positions for parsing and tagging in the CoNLL shared task 2018 (Zeman and Haji c, 2018).",
"For NER we use the LSTM-CRF model with character and word level embedding using Qi et al. (2020) implementation.",
"MLM from scratch The first approach we evaluate is to train a dedicated language model from scratch on the available raw data we have.",
"To do so, we train a language-specific SentencePiece tok-enizer (Kudo and Richardson, 2018) before training a Masked-Language Model (MLM) using the RoBERTa (base) architecture and objective functions (Liu et al., 2019).",
"As we work with significantly smaller pretraining sets than in the original setting, we reduce the number of layers to 6 layers in place of the original 12 layers.",
"Multilingual Language Models We want to assess how large-scale multilingual language models can be used and adapted to languages that are not in their pretraining corpora.",
"We work with the multilingual version of BERT (mBERT) trained on the concatenation of Wikipedia corpora in 104 languages (Devlin et al., 2019).",
"We also ran experiments with the XLM-R base version (Conneau et al., 2020) trained on 100 languages using data extracted from the Web.",
"As the observed behaviors are very similar between both models, we only report results using mBERT.",
"Note that mBERT is highly biased toward Indo-Europeans languages written in the Latin script.",
"More than 77% of the subword vocabulary are in the Latin script while only 1% are in the Georgian script (cs, 2019).",
"unseen languages with MLM-TUNING Following previous work (Han and Eisenstein, 2019; Karthikeyan et al., 2019; Pfeiffer et al., 2020), we adapt large-scale multilingual models by fine-tuning them with their Mask-Language-Model objective directly on the available raw data in the unseen target language.",
"We refer to this process as MLM-TUNING .",
"We will refer to a MLM-tuned mBERT model as mBERT+MLM.",
"We perform experiments on POS tagging, Dependency Parsing (DEP), and Name Entity Recognition (NER).",
"We use annotated data from the Universal Dependency project (Nivre et al., 2016) for POS tagging and parsing, and the WikiAnn dataset (Pan et al., 2017) for NER.",
"For POS tagging and NER, we append a linear classifier layer on top of UPOS LAS NER Model MBERTM BERT+MLM MLM Baseline MBERTM BERT+MLM MLM Baseline MBERTM BERT+MLM MLM Baseline Faroese 96.3 96.5 91.1 95.4 84.0 86.4 67.6 83.1 52.1 58.3 39.3 44.8 Naija 89.3 89.6 87.1 89.2 71.5 69.2 63.0 68.3 --Swiss German 76.7 78.7 65.4 75.2 41.2 69.6 30.0 32.2 --Mingrelian ----53.6 68.4 42.0 48.2 Table 1: Easy Languages POS, Parsing and NER scores comparing mBERT, mBERT+MLM and monolingual MLM to strong non-contextual baselines when trained and evaluated on unseen languages.",
"the language model.",
"For parsing, following Kondratyuk and Straka (2019), we append a Bi-Affine Graph prediction layer (Dozat and Manning, 2017).",
"We refer to the process of fine-tuning a language model in a task-specific way as TASK-TUNING .",
"3 3.5 Dataset Splits For each task and language, we use the provided training, validation and test dataset split except for the ones that have less than 500 training sentences.",
"In this case, we concatenate the training and test set and perform 8-folds cross-Validation and use the validation set for early stopping.",
"If no validation set is available, we isolate one of the folds for validation and report the test scores as the average of the other folds.",
"This enables us to train on at least 500 sentences in all our experiments (except for Swiss German for which we only have 100 training examples) and reduce the impact of the annotated dataset size on our analysis.",
"Since cross-validation results in training on very limited number of examples, we refer to training in this cross-validation setting as few-shot learning.",
"For each unseen language and each task, we experiment with our three modeling approaches:",
"(a) Training a language model from scratch on the available raw data and then fine-tuning it on any available annotated data in the target language.",
"(b) Fine-tuning mBERT with TASK-TUNING directly on the target language.",
"(c) Finally, adapting mBERT to the unseen language using MLM-TUNING before fine-tuning it in a supervised way on the target language.",
"We then compare all these experiments to our non-contextual strong baselines.",
"By doing so, we can assess if language models are 3 Details about optimization can be found in Appendix B Figure 1: Visualizing our Typology of Unseen Languages.",
"For Hard languages, mBERT under-performs the baselines in all settings.",
"Interestingly we find a large diversity of behaviors across languages regarding those language model training techniques.",
"As summarized in Figure 1, we observe three clear clusters of languages.",
"The first cluster, which we dub Easy\", corresponds to the languages that do not require extra MLM-TUNING for mBERT to achieve good performance. mBERT has the modeling abilities to process such languages without relying on raw data and can outperform strong non-contextual baselines as such. In the second cluster, the Interme-diate\" languages require MLM-TUNING . mBERT is not able to beat strong non-contextual baselines using only TASK-TUNING , but MLM-TUNING enables it to do so. Finally, Hard languages are those on which mBERT fails to deliver any decent per-UPOS LAS NER Model MBERTM BERT+MLM MLM Baseline MBERTM BERT+MLM MLM Baseline MBERTM BERT+MLM MLM Baseline Maltese 92.0 96.4 92.05 96.0 74.4 82.1 66.5 79.7 61.2 66.7 62.5 63.1 Narabizi 81.6 84.2 71.3 84.2 56.5 57.8 41.8 52.8 --Bambara 90.2 92.6 78.1 92.3 71.8 75.4 46.4 76.2 -Wolof 92.8 95.2 88.4 94.1 73.3 77.9 62.8 77.0 --Erzya 89.3 91.2 84.4 91.1 61.2 66.6 47.8 65.1 --Livvi 83.0 85.5 81.1 84.1 36.3 42.3 35.2 40.1 --Mari ----55.2 57.6 44.0 56.1 Table 2: Intermediate Languages POS, Parsing and NER scores comparing mBERT, mBERT+MLM and monolingual MLM to strong non-contextual baselines when trained and evaluated on unseen languages. Intermediate Languages are the ones for which mBERT requires MLM-TUNING to outperform the baselines. formance even after MLMand TASKfine-tuning. mBERT simply does not have the capacity to learn and process such languages. We emphasize that our categorization of unseen languages is only based on the relative performance of mBERT after fine-tuning compared to strong non-contextual baseline models. We leave for future work the analysis of the absolute performance of the model on such languages (e.g. analysing the impact of the fine-tuning data set size on mBERT's downstream performance). In this section, we present our results in detail in each of these language clusters and provide insights into their linguistic properties. 4.1 Easy Easy languages are the ones on which mBERT delivers high performance out-of-the-box, compared to strong baselines. We classify Faroese, Swiss German, Naija and Mingrelian as easy languages and report performance in Table 1. We find that those languages match two conditions: They are closely related to languages used during MLM pretraining These languages use the same script as such closely related languages. Such languages benefit from multilingual models, as cross-lingual transfer is easy to achieve and hence quite effective. More details about those languages can be found in Appendix C. 4.2 Intermediate The second type of languages (which we dub Intermediate) are generally harder to process for pretrained MLMs out-of-the-box.",
"In particular, pretrained multilingual language models are typically outperformed by a non-contextual strong baselines.",
"Still, MLM-TUNING has an important impact and leads to usable state-of-the-art models.",
"A good example of such an intermediate language is Maltese, a member of the Semitic language but using the Latin script.",
"Maltese has not been seen by mBERT during pretraining.",
"Other Semitic languages though, namely Arabic and Hebrew, have been included in the pretraining languages.",
"As seen in Table 2, the non-contextual baseline outperforms mBERT.",
"Additionally, a monolingual MLM trained on only 50K sentences matches mBERT performance for both NER and POS tagging.",
"However, the best results are reached with MLM-TUNING : the proper use of monolingual data and the advantage of similarity to other pretraining languages render Maltese a tackle-able language as shown by the performance gain over our strong non-contextual baselines.",
"Our Maltese dependency parsing results are in line with those of Chau et al. (2020), who also showed that MLM-TUNING leads to significant improvements.",
"They also additionally showed that a small vocabulary transformation allowed fine-tuning to be even more effective and gain 0.8 LAS points more.",
"We further discuss the vocabulary adaptation technique of Chau et al. (2020) in section 6.",
"We consider Narabizi (Seddah et al., 2020), an Arabic dialect spoken in North-Africa written in the Latin script and code-mixed with French, to fall in the same Intermediate category, because it follows the same pattern.",
"For both POS tagging and parsing, the multilingual models outperform the monolingual NarabiziBERT.",
"In addition, MLM-TUNING leads to significant improvements over the non-language-tuned mBERT baseline, also outperforming the non-contextual dependency parsing baseline.",
"We also categorize Bambara, a Niger-Congo Bantu language spoken in Mali and surrounding countries, as Intermediate, relying mostly on the UPOS LAS NER Model MBERTM BERT+MLM MLM Baseline MBERTM BERT+MLM MLM Baseline MBERTM BERT+MLM MLM Baseline Uyghur 77.0 88.4 87.4 90.0 45.5 48.9 57.3 67.9 24.3 34.6 41.4 53.8 Sindhi ----42.3 47.9 45.2 51.4 Sorani Kurdish ----70.4 75.6 80.6 80.5 Table 3: Hard Languages POS, Parsing and NER scores comparing mBERT, mBERT+MLM and monolingual MLM to strong non-contextual baselines when trained and evaluated on unseen languages.",
"POS tagging results which follow similar patterns as Maltese and Narabizi.",
"We note that the Bam-baraBERT that we trained achieves notably poor performance compared to the non-contextual baseline, a fact we attribute to the extremely low amount of available data (1000 sentences only).",
"We also note that the non-contextual baseline is the best performing model for dependency parsing, which could also potentially classify Bambara as a Hard\" language instead. Our results in Wolof follow the same pattern. The non-contextual baseline achieves a 77.0 in LAS outperforming mBERT. However, MLM-TUNING achieves the highest score of 77.9. The importance of script We now turn our focus to Uralic languages. Finnish, Estonian, and Hungarian are high-resource representatives of this language family that are typically included in multilingual LMs, also having task-tuning data available in large quantities. However, for several smaller Uralic languages, task-tuning data are generally very scarce. We report in Table 2 the performance for two low-resource Uralic languages, namely Livvi and Erzya using 8-fold cross-validation, with each run only using around 700 training instances. Note the striking difference between the parsing performance (LAS) of mBERT on Livvi, written with the Latin script, and on Erzya that uses the Cyrillic script. This suggests that the script could be playing a critical role when transferring to those languages. We explore this hypothesis in detail in section 5.2. 4.3 Hard The last category of the hard unseen language is perhaps the most interesting one, as these languages are very hard to process. mBERT is outperformed by non-contextual baselines as well as by monolingual language models trained from scratch on the available raw data. At the same time, MLM-TUNING on the available raw data has a minimal impact on performance. Uyghur, a Turkic language with about 10-15 million speakers in central Asia, is a prime example of a hard language for current models. In our experiments, outlined in Table 3, the noncontextual baseline outperforms all contextual variants, both monolingual and multilingual, in all the tasks with up to 20 points difference compared to mBERT for parsing. Additionally, the monolingual UyghurBERT trained on only 105K sentences outperforms mBERT even after MLM-TUNING . We attribute this discrepancy to script differences: Uyghur uses the Perso-Arabic script, when the other Turkic languages that were part of mBERT pretraining use either the Latin (e.g. Turkish) or the Cyrillic script (e.g. Kazakh). Sorani Kurdish (also known as Central Kurdish) is a similarly hard language, mainly spoken in Iraqi Kurdistan by around 8 million speakers, which uses the Sorani alphabet, a variant of the Arabic script. We can solely evaluate on the NER task, where the non-contextual baseline and the monolingual SoraniBERT perform similarly around 80.5 F1-score outperforming significantly mBERT which only reaches 70.4 in F1-score. MLM-TUNING on 380K sentences of Sorani texts improves mBERT performance to 75.6 F1-score, but it is still lagging behind the baseline. Our results in Sindhi follow the same pattern. The non-contextual baseline achieves a 51.4 F1-score outperforming with a large margin our language models (a monolingual SindhiBERT achieves an F1-score of 45.2, and mBERT is worse at 42.3). 5 Tackling Hard Languages with Multilingual Language Models Our intermediate Uralic language results provide initial supporting evidence for our argument on the importance of having pretrained LMs on languages with similar scripts, even for generally high-resource language families. 
"Our hypothesis is that the script is a key element for language models to correctly process unseen languages.",
"Table 4 (transliterating low-resource languages into the Latin script leads to significant improvements in languages like Uyghur, Sorani, and Meadow Mari; for languages like Erzya and Buryat, transliteration does not significantly influence results, while for Mingrelian it does not help; in all cases, mBERT+MLM is the best approach; scores reported as original script → Latin): Uyghur (Arabic → Latin): UyghurBERT POS 87.4 → 86.2, LAS 57.3 → 54.6, NER 41.4 → 41.7; mBERT POS 77.0 → 87.9, LAS 45.7 → 65.0, NER 24.3 → 35.7; mBERT+MLM POS 77.3 → 89.8, LAS 48.9 → 66.8, NER 34.7 → 55.2. Sorani (Arabic → Latin): SoraniBERT NER 80.6 → 78.9; mBERT NER 70.5 → 77.8; mBERT+MLM NER 75.6 → 82.7. Buryat (Cyrillic → Latin): BuryatBERT POS 75.8 → 75.8, LAS 31.4 → 31.4; mBERT POS 83.9 → 81.6, LAS 50.3 → 45.8; mBERT+MLM POS 86.5 → 84.6, LAS 52.9 → 51.9. Meadow Mari (Cyrillic → Latin): MariBERT NER 44.0 → 45.5; mBERT NER 55.2 → 58.2; mBERT+MLM NER 57.6 → 65.9. Erzya (Cyrillic → Latin): ErzyaBERT POS 84.4 → 84.5, LAS 47.8 → 47.8; mBERT POS 89.3 → 88.2, LAS 61.2 → 58.3; mBERT+MLM POS 91.2 → 90.5, LAS 66.6 → 65.5. Mingrelian (Georgian → Latin): MingrelianBERT NER 42.0 → 42.2; mBERT NER 53.6 → 41.8; mBERT+MLM NER 68.4 → 62.6.",
"To test this hypothesis, we assess the ability of mBERT to process an unseen language after transliterating it to another script present in the pretraining data.",
"We experiment on six languages belonging to four language families: Erzya, Buryat and Meadow Mari (Uralic), Sorani Kurdish (Iranian, Indo-European), Uyghur (Turkic) and Mingrelian (Kartvelian).",
"We apply the following transliterations: Erzya/Buryat/Mari: Cyrillic script → Latin script; Uyghur: Arabic script → Latin script; Sorani: Arabic script → Latin script; Mingrelian: Georgian script → Latin script.",
"5.1 Linguistically-motivated transliteration: The strategy we used to transliterate the above-listed languages is specific to the purpose of our experiments.",
"Indeed, our goal is for the model to take advantage of the information it has learned during training on a related language written in the Latin script.",
"The goal of our transliteration is therefore to transcribe each character in the source script, which we assume corresponds to a phoneme, into the most frequent (sometimes only) way this phoneme is rendered in the closest related language written in the Latin script, hereafter the target language.",
"This process is not a transliteration strictly speaking, and it need not be reversible.",
"It is not a phonetization either, but rather a way to render the source language in a way that maximizes the similarity between the transliterated source language and the target language.",
"We have manually developed transliteration scripts for Uyghur and Sorani Kurdish, using respectively Turkish and Kurmanji Kurdish as target languages, only Turkish being one of the languages used to train mBERT.",
"Note however that Turkish and Kurmanji Kurdish share a number of conventions for rendering phonemes in the Latin script (for instance, /ʃ/, rendered in English by sh, is rendered in both languages by ş; as a result, the Arabic letter ش, used in both languages, is rendered as ş by both our transliteration scripts).",
"As for Erzya, Buryat and Mari, we used the readily available transliteration package transliterate (https://pypi.org/project/transliterate/), which performs a standard transliteration.",
"(In future work, we intend to develop dedicated transliteration scripts using the strategy described above, and to compare the results obtained with them to those described here.)",
"We used the Russian transliteration module, as it covers the Cyrillic script.",
"Finally, for our control experiments on Mingrelian, we used the Georgian transliteration module from the same package.",
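For the Cyrillic- and Georgian-script languages, the standard transliteration can be reproduced with the transliterate package's documented translit helper; the sample strings below are illustrative, not drawn from the evaluation data.

```python
# Sketch of the off-the-shelf transliteration step (Cyrillic/Georgian -> Latin)
# using the `transliterate` package mentioned above.
from transliterate import translit

erzya = "Шумбрат"                                  # illustrative Cyrillic input
print(translit(erzya, "ru", reversed=True))        # Russian module covers Cyrillic

mingrelian = "მარგალური"                           # illustrative Georgian input
print(translit(mingrelian, "ka", reversed=True))   # Georgian module
```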
"5.2 Transfer via Transliteration: We train mBERT with MLM-TUNING and TASK-TUNING, as well as a monolingual BERT model trained from scratch, on the transliterated data.",
"We also run controlled experiments in which high-resource languages on which mBERT was pretrained, namely Arabic, Japanese and Russian, are transliterated into the Latin script (reported in Table 5).",
"Our results with and without transliteration are listed in Table 4.",
"Transliteration for Sorani and Uyghur has a noticeable positive impact.",
"For instance, transliterating Uyghur to Latin leads to an improvement of 16 points in parsing and 20 points in NER.",
"For one of the low-resource Uralic languages, Meadow Mari, we observe an improvement of 8 F1-score points on NER, while for other Uralic languages like Erzya the effect of transliteration is very minor.",
"The only case where transliteration to the Latin script leads to a drop in performance for mBERT and mBERT+MLM is Mingrelian.",
"We interpret our results as follows: when running MLM-TUNING and TASK-TUNING, mBERT associates the target unseen language with a set of similar languages seen during pretraining, based on the script.",
"In consequence, mBERT is not able to associate a language with its related languages if they are not written in the same script.",
"For instance, transliterating Uyghur enables mBERT to match it to Turkish, a language which accounts for a sizable portion of mBERT pretraining.",
"In the case of Mingrelian, transliteration has the opposite effect: transliterating Mingrelian into the Latin script harms performance, as mBERT is no longer able to associate it with Georgian, which is seen during pretraining and uses the Georgian script.",
"This is further supported by our experiments on high-resource languages (cf. Table 5).",
"When transliterating pretrained languages such as Arabic, Russian or Japanese, mBERT is not able to compete with the performance reached when using the script seen during pretraining.",
"Transliterating the Arabic script and the Cyrillic script to Latin does not automatically improve mBERT performance as it does for Sorani, Uyghur and Meadow Mari.",
"For instance, transliterating Arabic to the Latin script leads to a drop in performance of 1.5, 4.1 and 6.9 points for POS tagging, parsing and NER respectively (details and complete results on these controlled experiments can be found in Appendix E).",
"Our findings are generally in line with previous work.",
"Transliteration to English specifically (Lin et al., 2016; Durrani et al., 2014) and named entity transliteration (Kundu et al., 2018; Grundkiewicz and Heafield, 2018) have been proven useful for cross-lingual transfer in tasks like NER, entity linking (Rijhwani et al., 2019), morphological inflection (Murikinati et al., 2020), and Machine Translation (Amrhein and Sennrich, 2020).",
"The transliteration approach provides a viable path for rendering large pretrained models like mBERT useful for all languages of the world.",
"Indeed, as reported in Table 4, transliterating both Uyghur and Sorani leads to matching or outperforming the performance of strong non-contextual baselines and delivers usable models (e.g. +12.5 POS accuracy in Uyghur).",
"Table 5 (mBERT TASK-TUNED on high-resource languages for POS tagging, parsing and NER; scores reported as original script → Latin transliteration; we compare fine-tuning done on data written in the original language script with fine-tuning done on the Latin transliteration, and in all cases transliteration degrades downstream performance): Arabic: POS 96.4 → 94.9, LAS 82.9 → 78.8, NER 87.8 → 80.9. Russian: POS 98.1 → 96.0, LAS 88.4 → 84.5, NER 88.1 → 86.0. Japanese: POS 97.4 → 95.7, LAS 88.5 → 86.9, NER 61.5 → 55.6.",
"6 Discussion and Conclusion: Pretraining ever larger language models is a research direction that is currently receiving a lot of attention and resources from the NLP research community (Raffel et al., 2019; Brown et al., 2020).",
"Still, a large majority of human languages are under-resourced, making the development of monolingual language models very challenging in those settings.",
"Another path is to build large-scale multilingual language models.",
"(Even though we explore a different research direction, recent advances in small-scale and domain-specific language models suggest such models could also have an important impact for those languages (Micheli et al., 2020).)",
"However, such an approach faces the inherent Zipfian structure of human languages, making the training of a single model to cover all languages an unfeasible solution (Conneau et al., 2020).",
"Reusing large-scale pretrained language models for new unseen languages seems to be a more promising and reasonable solution from a cost-efficiency and environmental perspective (Strubell et al., 2019).",
"Recently, Pfeiffer et al. (2020) proposed to use adapter layers (Houlsby et al., 2019) to build parameter-efficient multilingual language models for unseen languages.",
"However, this solution brings no significant improvement in the supervised setting, compared to simpler Masked-Language-Model fine-tuning.",
"Furthermore, developing a language-agnostic adaptation method is an unreasonable wish with regard to the large typological diversity of human languages.",
"On the other hand, the promising vocabulary adaptation technique of Chau et al. (2020), which leads to good dependency parsing results on unseen languages when combined with task-tuning, has so far been tested only on Latin-script languages (Singlish and Maltese).",
"We expect that it will be orthogonal to our transliteration approach, but we leave for future work the study of its applicability and efficacy on more languages and tasks.",
"In this context, we bring empirical evidence to assess the efficiency of language model pretraining and adaptation methods on 15 low-resource and typologically diverse unseen languages.",
"Our results show that the \"Hard\" languages are currently out of the scope of any currently available language models and are therefore left outside of the current NLP progress.",
"By focusing on those, we find that this challenge is mostly due to the script.",
"Transliterating them to a script that is used by a related higher-resource language on which the language model has been pretrained leads to large improvements in downstream performance.",
"Our results shed new light on the importance of the script in multilingual pretrained models.",
"While previous work suggests that multilingual language models could transfer efficiently across scripts in zero-shot settings (Pires et al., 2019; Karthikeyan et al., 2019), our results show that such cross-script transfer is possible only if the model has seen related languages in the same script during pretraining.",
"Our work paves the way for a better understanding of the mechanics at play in cross-language transfer learning in low-resource scenarios.",
"We strongly believe that our method can contribute to bootstrapping NLP resources and tools for low-resource languages, thereby favoring the emergence of NLP ecosystems for languages currently under-served by the NLP community.",
"Acknowledgments: The Inria authors were partly funded by two French National Research Agency projects, namely PARSITI (ANR-16-CE33-0021) and SoSweet (ANR-15-CE38-0011), as well as by Benoît Sagot's chair in the PRAIRIE institute as part of the Investissements d'avenir programme under the reference ANR-19-P3IA-0001.",
"Antonios Anastasopoulos is generously supported by NSF Award 2040926 and is also thankful to Graham Neubig for very insightful initial discussions on this research direction."
] | [
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"other",
"abstain"
] |
[
"There has been considerable attention devoted to models that learn to jointly infer an ex-pression's syntactic structure and its semantics.",
"Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar.",
"In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task.",
"Our model is composed of two separated modules for syntax and semantics.",
"They are cooperatively trained with standard continuous and discrete optimisation schemes.",
"Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation.",
"Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis.",
"Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990).",
"However, most popular models, such as the Long Sort-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997), process text without imposing a grammatical structure.",
"To bridge this gap between theory and practice models that process linguistic expressions in a tree-structured manner have been considered in recent work (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015; Bowman et al., 2016).",
"These tree-based models explicitly require access to the syntactic structure for the text, which is not entirely satisfactory.",
"Indeed, parse tree level supervision requires a significant amount of annotations from expert linWork done while the author was an intern at Facebook AI Research.",
"guists.",
"These trees have been annotated with different goals in mind than the tasks we are using them for.",
"Such discrepancy may result in a deterioration of the performance of models relying on them.",
"Recently, several attempts were made to learn these models without explicit supervision for the parser (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018).",
"However, Williams et al. (2018a) has recently shown that the structures learned by these models cannot be ascribed to discovering meaningful syntactic structure.",
"These models even fail to learn the simple context-free grammar of nested mathematical operations (Nangia and Bowman, 2018).",
"In this work, we present an extension of Choi et al. (2018), that successfully learns these simple grammars while preserving competitive performance on several standard linguistic tasks.",
"Contrary to previous work, our model makes a clear distinction between the parser and the compositional function.",
"These two modules are trained with different algorithms, cooperating to build a semantic representation that optimises the objective function.",
"The parser's goal is to generate a tree structure for the sentence.",
"The compositional function follows this structure to produce the sentence representation.",
"Our model contains a continuous component, the compositional function, and a discrete one, the parser.",
"The whole system is trained end-to-end with a mix of reinforcement learning and gradient descent.",
"Drozdov and Bowman (2017) has noticed the difficulty of mixing these two optimisation schemes without one dominating the other.",
"This typically leads to the coadaptation problem where the parser simply follows the compositional function and fails to produce meaningful syntactic structures.",
"In this work, we show that this pitfall can be avoided by synchronising the learning paces of the two optimisation schemes.",
"This is achieved by combining several recent advances in reinforcement learning.",
"First, we use input-dependent control variates to reduce the variance of our gradient estimates (Ross, 1997).",
"Then, we apply multiple gradient steps to the parser's policy while controlling for its learning pace using the Proximal Policy Optimization (PPO) of Schulman et al. (2017).",
"The code for our model is publicly available 1 .",
"In this section, we present existing works on Recursive Neural Networks and their training in the absence of supervision on the syntactic structures.",
"A Recursive Neural Network (RvNN) has its architecture defined by a directed acyclic graph (DAG) given alongside with an input sequence (Goller and Kuchler, 1996).",
"RvNNs are commonly used in NLP to generate sentence representation that leverages available syntactic information, such as a constituency or a dependency parse trees (Socher et al., 2011).",
"Given an input sequence and its associated DAG, a RvNN processes the sequence by applying a transformation to the representations of the tokens lying on the lowest levels of the DAG.",
"This transformation, or compositional function, merges these representations into representations for the nodes on the next level of the DAG.",
"This process is repeated recursively along the graph structure until the top-level nodes are reached.",
"In this work, we assume that the compositional function is the same for every node in the graph.",
"Tree-LSTM.",
"We focus on a specific type of RvNNs, the tree-based long short-term memory network (Tree-LSTM) of Tai et al. (2015) and Zhu et al. (2015).",
"Its compositional function generalizes the LSTM cell of Hochreiter and Schmidhu-ber (1997) to tree-structured topologies, i.e., z i f l f r o = tanh (cid:18) R (cid:20) h l h r (cid:21) + b (cid:19) , c p = z (cid:12) i + c l (cid:12) f l + c r (cid:12) f r , h p = tanh ( c p ) (cid:12) o , 1 https://github.com/facebookresearch/ latent-treelstm where and tanh are the sigmoid and hyperbolic tangent functions.",
"Tree-LSTM cell is differentiable with respect to its recursion matrix R , bias b and its input.",
"The gradients of a Tree-LSTM can thus be computed with backpropagation through structure (BPTS) (Goller and Kuchler, 1996).",
"A tree-based RvNN is a function f parameterized by a d dimensional vector that predicts an output y given an input x and a tree t .",
"Given a dataset D of N triplets ( x, t, y ) , the parameters of the RvNN are learned with the following minimisation problem: min R d 1 N (cid:88) ( x,t,y ) D (cid:96) ( f ( x, t ) , y ) , (1) where (cid:96) is a logistic regression function.",
"These models need an externally provided parsing tree for each input sentence during both training and evaluation.",
"Alternatives, such as the shift-reduce-based SPINN model of Bowman et al. (2016), learn an internal parser from the given trees.",
"While these solutions do not need external trees during evaluation, they still require tree level annotations for training.",
"More recent work has focused on learning a latent parser with no direct supervision.",
"Latent tree models aim at jointly learning the compositional function f and a parser without supervision on the syntactic structures (Yogatama et al., 2016; Maillard et al., 2017; Choi et al., 2018).",
"The latent parser is defined as a parametric probability distribution over trees conditioned on the input sequence.",
"The parameters of this tree distribution p ( . | x ) are represented by a vector .",
"Given a dataset D of pairs of input sequences x and outputs y , the parameters and are jointly learned by minimising the following objective function: min , L ( , ) = 1 N (cid:88) ( x,y ) (cid:96) ( E [ f ( x, t )] , y ) , (2) where E is the expectation with respect to the p ( . | x ) distribution.",
"Directly minimising this objective function is often difficult due to expensive marginalisation of the unobserved trees.",
"Hence, when (cid:96) is a convex function (e.g. cross entropy of an exponential family) usually an upper bound of Eq.",
"(2) can be derived by applying Jensen's inequality: L ( , ) = 1 N (cid:88) ( x,y ) E [ (cid:96) ( f ( x, t ) , y )] .",
"Learning a distribution over a set of discrete items involves a discrete optimisation scheme.",
"For example, the RL-SPINN model of Yogatama et al. (2016) uses a mix of gradient descent for and REINFORCE for (Williams et al., 2018a).",
"Drozdov and Bowman (2017) has recently observed that this optimisation strategy tends to produce poor parsers, e.g., parsers that only generate left-branching trees.",
"The effect, called the coadaptation issue, is caused by both bias in the parsing strategy and a difference in convergence paces of continuous and discrete optimisers.",
"Typically, the parameters are learned more rapidly than .",
"This limits the exploration of the search space to parsing strategies similar to those found at the beginning of the training.",
"In their Gumbel Tree-LSTM model, Choi et al. (2018) propose an alternative parsing strategy to avoid the coadaptation issue.",
"Their parser incrementally merges a pair of consecutive constituents until a single one remains.",
"This strategy reduces the bias towards certain tree configurations observed with RL-SPINN.",
"Each word i of the input sequence is represented by an embedding vector.",
"A leaf transformation maps this vector to pair of vectors r 0 i =( h 0 i , c 0 i ) .",
"We considered three types of leaf transformations: affine transformation, LSTM and bidirectional LSTM.",
"The resulting representations form the initial states of the Tree-LSTM.",
"In the absence of supervision, the tree is built in a bottom-up fashion by recursively merging consecutive constituents ( i, i + 1) based on merge-candidate scores.",
"On each level k of the bottom-up derivation, the merge-candidate score of the pair ( i, i +1) is computed as follow: s k ( i ) = (cid:104) q , Tree-LSTM ( r ki , r ki +1 ) (cid:105) , where q is a trainable query vector and r ki is the constituent representation at position i after k mergings.",
"We merge a pair ( i , i + 1) sampled from the Categorical distribution built on the merge-candidate scores.",
"The representations of the constituents are then updated as follow: r k +1 i = r ki , i < i , Tree-LSTM ( r ki , r ki +1 ) i = i , r ki +1 i > i .",
"This procedure is repeated until one constituent remains.",
"Its hidden state is the input sentence representation.",
"This procedure is non-differentiable.",
"Choi et al. (2018) use an approximation based on the Gumbel-Softmax distribution (Maddison et al., 2016; Jang et al., 2016) and the reparametrization trick (Kingma and Welling, 2013).",
"This relaxation makes the problem differentiable at the cost of a bias in the gradient estimates (Jang et al., 2016).",
"This difference between the real objective function and their approximation could explain why their method cannot recover simple context-free grammars (Nangia and Bowman, 2018).",
"We investigate this question by proposing an alternative optimisation scheme that directly aims for the correct objective function.",
"We consider the problem defined in Eq.",
"(3) to jointly learn a composition function and an internal parser.",
"Our model is composed of the parser of Choi et al. (2018) and the Tree-LSTM for the composition function.",
"As suggested in past work (Mnih et al., 2016; Schulman et al., 2017), we added an entropy H over the tree distribution to the objective function: min , L ( , ) (cid:88) x H ( t | x ) , (4) where > 0 .",
"This regulariser improves exploration by preventing early convergence to a suboptimal deterministic parsing strategy.",
"The new objective function is differentiable with respect to , but not , the parameters of the parser.",
"Learning follows the same procedure with BPTS as if the tree would be externally given.",
"In the rest of this section, we discuss the optimization of the parser and a cooperative training strategy to reduce the coadaptation issue.",
"We cast the training of the parser as a reinforcement learning problem.",
"The parser is an agent whose reward function is the negative of the loss function defined in Eq.",
"(3).",
"Its action space is the space of binary trees.",
"The agent's policy is a probability distribution over binary trees that decomposes as a sequence of K merging actions: p ( t | x ) = K (cid:89) k =0 ( a ik | r k ) , (5) where r k = ( r k 0 , . . . , r kK k ) .",
"The loss function is optimised with respect to with REINFORCE (Williams, 1992).",
"REINFORCE requires a considerable number of random samples to obtain a gradient estimate with a reasonable level of variance.",
"This number is positively correlated with the size of the search space, which is exponentially large in the case of binary trees.",
"We consider several extensions of REINFORCE to circumvent this problem.",
"Variance reduction.",
"An alternative solution to increasing the number of samples is the control variates method (Ross, 1997).",
"It takes advantage of random variables with known expected values and positive correlation with the quantity whose expectation is tried to be estimated.",
"Given an input-output pair ( x, y ) and tree t sampled from p ( t | x ) , let's define the random variable G as: G ( t ) = (cid:96) ( f ( x, t ) , y ) log p ( t | x ) .",
"According to REINFORCE, calculating the gradient with respect to for the pair ( x, y ) is then equivalent to determining the unknown mean of the random variable G ( t ) 2 .",
"Let's assume there is a control variate, i.e., a random variable b ( t ) that positively correlates with G and has known expected value with respect to p ( . | x ) .",
"Given N samples of the G ( t ) and the control variate b ( t ) , the new gradient estimator is: GCV = E p ( t | x ) [ b ( t )]+ 1 N (cid:34) N (cid:88) i =1 ( G ( t i ) b ( t i )) (cid:35) .",
"A popular control variate, or baseline, used in REINFORCE is the moving average of recent rewards multiplied by the score function (Ross, 1997): b ( t ) = c log p ( t | x ) .",
"2 Note that while we are computing the gradients using (cid:96) , we could also directly optimise the parser with respect to downstream accuracy.",
"where E t is the empirical average over a finite batch of samples and r ( t ) = p ( t | x ) p old ( t | x ) is the probability ratio with old standing for the parameters before the update.",
"Input-dependent baseline.",
"The moving average baseline cannot detect changes in rewards caused by structural differences in the inputs.",
"In our case, a long arithmetic expression is much harder to parse than a short one, systematically leading to their lower rewards.",
"This structural differences in the rewards aggravate the credit assignment problem by encouraging REINFORCE to discard actions sampled for longer sequences even though there might be some subsequences of actions that produce correct parsing subtrees.",
"A solution is to make the baseline input-dependent.",
"In particular, we use the self-critical training (SCT) baseline of Rennie et al. (2017), defined as: b ( t, x ) = c , ( x ) log p ( t | x ) , where c , is the reward obtained with the policy used at test time, i.e., t = arg max p ( t | x ) .",
"This control variate has a zero mean under the p ( t | x ) distribution and correlates positively with the gradients.",
"Computing the arg max of a policy among all possible binary trees has exponential complexity.",
"We replace it with a simpler greedy decoding, i.e, a tree t is selected by following a sequence of greedy actions a k : a k = arg max ( a k | r k ) .",
"This approximation is very efficient and computing the baseline requires only one additional for-ward pass.",
"Gradient normalization.",
"We empirically observe significant fluctuations in the gradient norms.",
"This creates instability that can not be reduced by additive terms, such as the input-dependent baselines.",
"A solution is to divide the gradients by a coarse approximation of their norm, e.g., a running estimate of the reward standard deviation (Mnih and Gregor, 2014).",
"This trick ensures that the rewards remain approximately in the unit ball, making the learning process less sensitive to steep changes in the loss.",
"The gradients of the loss function from the Eq.",
"(4) are calculated using two different schemes, BPST for the composition function parameters and REINFORCE for the parser parameters .",
"Then, both are updated with SGD.",
"The estimate of the gradient with respect to has higher variance compared to the estimate with respect to .",
"Hence, using the same learning rate schedule does not necessarily correspond to the same real pace of learning.",
"It is parameters that are harder to optimise, so to improve training stability and convergence it is reasonable to aim for such updates that does not change the policy too much or too little.",
"A simple yet effective solution is the Proximal Policy Optimization (PPO) of Schulman et al. (2017).",
"It considers the next surrogate loss: E t (cid:2) max (cid:8) r ( t ) (cid:96) ( f ( x, t ) , y ) , r c ( t ) (cid:96) ( f ( x, t ) , y ) (cid:9)(cid:3) , Where r c ( t ) = clip ( r ( t ) , 1 (cid:15), 1 + (cid:15) ) and (cid:15) is a real number in (0; 0 .",
"5] .",
"The first argument of the max is the surrogate loss for REINFORCE.",
"The clipped ratio in the second argument disincen-tivises the optimiser from performing updates resulting in large tree probability changes.",
"With this, the policy parameters can be optimised with repeated K steps of SGD to ensure a similar pace of learning between the parser and the compositional function.",
"Besides the works mentioned in Sec. 2 and Sec. 3, there is a vast literature on learning latent parsers.",
"Early connectionist work in inferring context-free grammars proposed stack-augmented models and relied on explicit supervision on the strings that belonged to the target language and those that did not (Giles et al., 1989; Sun, 1990; Das et al., 1992; Mozer and Das, 1992).",
"More recently, new stack-augmented models were shown to learn latent grammars from positive evidence alone (Joulin and Mikolov, 2015).",
"In parallel to these, other statistical approaches were proposed to automatically induce grammars from unparsed text (Sampson, 1986; Magerman and Marcus, 1990; Carroll and Charniak, 1992; Brill, 1993; Klein and Manning, 2002).",
"Our work departs from these approaches in that we aim at learning a latent grammar in the context of performing some given task.",
"Socher et al. (2011) uses a surrogate auto-encoder objective to search for a constituency structure, merging nodes greedily based on the re-construction loss.",
"Maillard et al. (2017) defines a relaxation of a CYK-like chart parser that is trained for a particular task.",
"A similar idea is introduced in Le and Zuidema (2015) where an automatic parser prunes the chart to reduce the overall complexity of the algorithm.",
"Another strategy, similar in nature, has been recently proposed by Corro and Titov (2018), where Gumbel noise is used with differentiable dynamic programming to generate dependency trees.",
"In contrast, Yogatama et al. (2016) learns a Shift-Reduce parser using reinforcement learning.",
"Maillard and Clark (2018) further proposes a beam search strategy to overcome learning trivial trees.",
"On a different vein, Vlad Niculae (2018) proposes a quadratic penalty term over the posterior distribution of nonprojective dependency trees to enforce sparsity of the relaxation.",
"Finally, there is a large body of work in Reinforcement Learning that aims at discovering how to combine elementary modules to solve complex tasks (Singh, 1992; Chang et al., 2018; Sahni et al., 2017).",
"Due to the limited space, we will not discuss them in further details.",
"We conducted experiments on three different tasks: evaluating mathematical expressions on the ListOps dataset (Nangia and Bowman, 2018), sentiment analysis on the SST dataset (Socher et al., 2013) and natural language inference task on the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.",
"Technical details.",
"For ListOps, we follow the experimental protocol of Nangia and Bowman (2018), i.e., a 128 dimensional model and a ten-way softmax classifier.",
"However, we replace their multi-layer perceptron (MLP) by a linear classifier.",
"The validation set is composed of 1 k examples randomly selected from the training set.",
"For SST and NLI, we follow the setup of Choi et al. (2018): we initialise the word vectors with GloVe300D (Pennington et al., 2014) and train an MLP classifier on the sentence representations.",
"The hyperparameters are selected on the validation set using 5 random seeds for each configura-tion.",
"Our hyperparameters are the learning rate, weight decay, the regularisation parameter , the leaf transformations, variance reduction hyperpa-No baseline Moving average Self critical No PPO PPO No PPO PPO No PPO PPO min 61.7 61.4 61.7 59.4 63.7 98.2 max 70.1 76.6 74.3 96.0 64.1 99.6 mean std 66.2 3.2 66.5 5.9 65.5 4.7 67.5 14.3 64.0 0.1 99.2 0.5 Table 1: Accuracy on ListOps test set for our model with three different baselines, with and without PPO.",
"rameters and the number of updates K in PPO.",
"We use an adadelta optimizer (Zeiler, 2012).",
"The ListOps dataset probes the syntax learning ability of latent tree models (Nangia and Bowman, 2018).",
"It is designed to have a single correct parsing strategy that a model must learn in order to succeed.",
"It is composed of prefix arithmetic expressions and the goal is to predict the numerical output associated with the evaluation of the expression.",
"The sequences are made of integers in [0 , 9] and 4 operations: MIN , MAX , MED and SUM MOD .",
"The output is an integer in the range [0 , 9] .",
"For example, the expression [MIN 2 [MAX 0 1] [MIN 6 3 ] 5 ] is mapped to the output 1 .",
"The ListOps task is thus a sequence classification problem with 10 classes.",
"There are 90 k training examples and 10 k test examples.",
"It is worth mentioning that the underlying semantic of operations and symbols is not provided.",
"In other words, a model has to infer from examples that [MIN 0 1] = 0 .",
"As shown in Table 2, the current leading latent tree models are unable to learn the correct parsing strategy on ListOps (Nangia and Bowman, 2018).",
"They even achieve performance worse than purely sequential recurrent networks.",
"On the other hand, our model achieves near perfect accuracy on this task, suggesting that our model is able to discover the correct parsing strategy.",
"Our model differs in several ways from the Gumbel Tree-LSTM of Choi et al. (2018) that could explain this gap in performance.",
"In the rest of this section, we perform an ablation study on our model to understand the importance of each of these differences.",
"Impact of the baseline and PPO.",
"We report the impact of our design choices on the performance in Table 1.",
"Our model without baseline nor PPO is vanilla REINFORCE.",
"The baselines only improve performance when PPO is used.",
"Furthermore, these ablated models without PPO perform on-par with the RL-SPINN model (see Table 2).",
"This confirms our expectations for models that fail to synchronise syntax and semantics learning.",
"Interestingly, using PPO has a positive impact on both baselines, but accuracy remains low with the moving average baseline.",
"The reduction of variance induced by the SCT baseline leads to a near-perfect recovery of the good parsing strategy in all five experiments.",
"This shows the importance of this baseline for the stability of our approach.",
"Sensitivity to hyperparameters.",
"Our model is relatively robust to hyperparameters changes when we use the SCT baseline and PPO.",
"For example, changing the leaf transformation or dimensionality of the model has a minor impact on performance.",
"However, we have observed that the choice of the optimiser has a significant impact.",
"For example, the average performance drops to 73 .",
"0% if we replace Adadelta by Adam (Kingma and Ba, 2014).",
"Yet, the maximum value out of 5 runs remains relatively high, 99 .",
"0% .",
"Untied parameters.",
"As opposed to previous work, the parameters of the parser and the composition function are not tied in our model.",
"Without this separation between syntax and semantics, it would be impossible to update one module with-Figure 1: Blue crosses depict an average accuracy of five models on the test examples that have lengths within certain range.",
"Black circles illustrate individual models.",
"out changing the other.",
"The gradient direction is then dominated by the low variance signal from the semantic component, making it hard to learn the parser.",
"We confirmed experimentally that our model with tied parameters fails to find the correct parser and its accuracy drops to 64 .",
"7% .",
"Extrapolation and Grammaticality.",
"Recursive models have the potential to generalise to any sequence length.",
"Our model was trained with sequences of length up to 130 tokens.",
"We test the ability of the model to generalise to longer sequences by generating additional expressions of lengths 200 to 1000 .",
"As shown in Fig.1, our model has a little loss in accuracy as the length increases to ten times the maximum length seen during training.",
"On the other hand, we notice that final representations produced by the parser are very similar to each other.",
"Indeed, the cosine similarity between these vectors for the test set has a mean value of 0.998 with a standard deviation of 0.002.",
"There are two possible explanations for this observation: either our model assigns similar representations to valid expressions, or it produces a trivial uninformative representation regardless of the expression.",
"To verify which explanation is correct, we generate ungrammatical expressions by removing either one operation token or one closing bracket symbol for each sequence in the test set.",
"As shown in Figure 2, in contrast to grammatical expressions, ungrammatical ones tend to be very different from each other: Happy families are all alike; every unhappy family is unhappy in its own way.",
"The only exception, marked by a mode near 1 , come Figure 2: The distributions of cosine similarity for elements from the different sets of mathematical expressions.",
"from ungrammatical expressions that represent incomplete expressions because of missing a closing bracket at the end.",
"This kind of sequences were seen by the parser during training and they indeed have to be represented by the same vector.",
"These observations show that our model does not produce a trivial representation, but identifies the rules and constraints of the grammar.",
"Moreover, vectors for grammatical sequences are so different from vectors for ungrammatical ones that you can tell them apart with 99 .",
"99% accuracy by simply measuring their cosine similarity to a randomly chosen grammatical vector from the training set.",
"Interestingly, we have not observed a similar signal from the vectors generated by the composition function.",
"Even learning a naive classifier between grammatical and ungrammatical expressions on top of these representations achieves an accuracy of only 75% .",
"This suggests that most of the syntactic information is captured by the parser, not the composition function.",
"We next evaluate our model on natural language inference using the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018b) datasets.",
"Natural language inference consists in predicting the relationship between two sentences which can be either entailment, contradiction, or neutral.",
"The task can be formulated as a three-way classification problem.",
"The results are shown in Tables 3 and 4.",
"When training the model on MultiNLI dataset we augment the training data with the SNLI data and use matched versions of the de-Model Dim.",
"velopment and test sets.",
"Surprisingly, two out of four models for MultiNLI task collapsed to left-branching parsing strategies.",
"This collapse can be explained by the absence of the entropy regularisation and the small number of PPO updates K = 1 , which were determined to be optimal via hyperparameter optimisation.",
"As with ListOps, using an Adadelta optimizer significantly improves the training of the model.",
"We evaluate our model on a sentiment classification task using the Stanford Sentiment Treebank (SST) of Socher et al. (2013).",
"All sentences in SST are represented as binary parse trees, and each subtree of a parse tree is annotated with the corresponding sentiment score.",
"There are two versions of the dataset, with either binary labels, negative or positive, (SST-2) or five labels, representing fine-grained sentiments (SST-5).",
"As shown in Ta-SST-2 SST-5 Sequential sentence representation Radford et al. (2017) 91.8 52.9 McCann et al. (2017) 90.3 53.7 Peters et al. (2018) -54.7 RvNN based models with external tree Socher et al. (2013) 85.4 45.7 Tai et al. (2015) 88.0 51.0 Munkhdalai and Yu (2017) 89.3 53.1 Looks et al. (2017) 89.4 52.3 RvNN based models with latent tree Yogatama et al. (2016) 86.5 Choi et al. (2018) 90.7 53.7 Choi et al. (2018) 90.3 0.5 51.6 0.8 Ours 90.2 0.2 51.5 0.4 Table 5: Accuracy results of models on the SST.",
"ble 5, our results are in line with previous work, confirming the benefits of using latent syntactic parse trees instead of the predefined syntax.",
"We noticed that all models trained on NLI or sentiment analysis tasks have parsing policies with relatively high entropy.",
"This indicates that the algorithm does not prefer any specific grammar.",
"Indeed, generated trees are very similar to balanced ones.",
"This result is in line with Shi et al. (2018) where they observe that binary balanced tree encoder gets the best results on most classification tasks.",
"We also compare with state-of-the-art sequence-based models.",
"For the most part, these models are pre-trained on larger datasets and fine-tuned on these tasks.",
"Nonetheless, they outperform recursive models by a significant margin.",
"Performance on these datasets is more impacted by pre-training than by learning the syntax.",
"It would be interesting to see if a similar pre-training would also improve the performance of recursive models with latent tree learning.",
"In this paper, we have introduced a novel model for learning latent tree parsers.",
"Our approach relies on a separation between syntax and semantics.",
"This allows dedicated optimisation schemes for each module.",
"In particular, we found that it is important to have an unbiased estimator of the parser gradients and to allow multiple gradient steps with PPO.",
"When tested on a CFG, our learned parser generalises to sequences of any length and distinguishes grammatical from ungrammatical expressions by forming meaningful representations for well-formed expressions.",
"For natural language tasks, instead, the model prefers to fall back to trivial strategies, in line with what was previously observed by Shi et al. (2018).",
"Additionally, our approach performs competitively on several real natural language tasks.",
"In the future, we would like to explore further relaxation-based techniques for learning the parser, such as REBAR (Tucker et al., 2017) or ReLAX (Grathwohl et al., 2017).",
"Finally, we plan to look into applying recursive approaches to language modelling as a pre-training step and measure if it has the same impact on downstream tasks as sequential models.",
"We would like to thank Alexander Koller, Ivan Titov, Wilker Aziz and anonymous reviewers for their helpful suggestions and comments."
] | [
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"other"
] |
[
"The goal of meta-learning is to learn to adapt to a new task with only a few labeled examples.",
"Inspired by the recent progress in large language models, we propose in-context tuning (ICT), which recasts task adaptation and prediction as a simple sequence prediction problem: to form the input sequence, we concatenate the task instruction, labeled in-context examples, and the target input to predict; to meta-train the model to learn from in-context examples, we fine-tune a pre-trained language model (LM) to predict the target label given the input sequence on a collection of tasks.",
"We benchmark our method on two collections of text classification tasks: LAMA and BinaryClfs.",
"Compared to MAML which adapts the model through gradient descent, our method leverages the inductive bias of pre-trained LMs to perform pattern matching, and outperforms MAML by an absolute 6% average AUC-ROC score on BinaryClfs, gaining more advantage with increasing model size.",
"Compared to non-fine-tuned in-context learning (i.e. prompting a raw LM), in-context tuning meta-trains the model to learn from in-context examples.",
"On BinaryClfs, ICT improves the average AUC-ROC score by an absolute 10%, and reduces the variance due to example ordering by 6x and example choices by 2x.",
"Few-shot learning (FSL) refers to a system's ability to quickly adapt to new tasks when very few labeled examples are available for training.",
"FSL is a key feature of human learning (Lake et al., 2016), but current machine learning systems often rely on large amounts of labeled training data (Silver et al., 2016; He et al., 2016; Adiwardana et al., 2020).",
"Recently, prompting large pre-trained language models (LMs) for FSL has achieved remarkable progress (Brown et al., 2020; Schick and Schtze, Work done during summer internship at AWS AI. 2021a).",
"LM prompting with in-context learning reduces the task learning and predict process to a simple sequence prediction problem.",
"To perform a new task, Brown et al. (2020) prompt a raw LM (i.e., a pre-trained LM not fine-tuned on any labeled data) with the concatenation of the task instruction, some input-output examples, and the target input to be predicted on; then they extract the answer from the LM's continuation of the concatenated sequence (Figure 1 left).",
"For example, to coax the model into performing sentiment classification on the target input This movie is a waste of time , we prompt the LM with the sequence I like the movie! Positive review? Yes. Horrible Movie! Positive review? No. This movie is a waste of time. Positive review? ___ , and predict positive if the next word is more likely to be Yes rather than No .",
"However, raw LMs are not optimized for in-context FSL during pre-training, and exhibit undesirable behavior when used for FSL.",
"For example, Zhao et al. (2021) observed that LMs suffer from the recency bias, which assigns higher probability to labels that appear closer to the target input.",
"As a result, the accuracy becomes extremely sensitive to the ordering of the in-context examples.",
"Previous work has also shown that prompting raw LMs is often oversensitive to example choices and instruction wording (Schick and Schtze, 2021a; Jiang et al., 2020; Gao et al., 2021; Liu et al., 2021).",
"We address this weakness through a meta-learning lens and directly fine-tune the LM for FSL.",
"Under the meta-learning framework, we meta-train a model to learn to adapt to new tasks from a few examples on a wide range of tasks, so that it learns to leverage the few-shot examples to adapt to new tasks at test time.",
"Since LM prompting already reduces the task learning and predict process to a simple sequence prediction problem, we meta-train a LM by directly fine-tuning it to optimize for this sequence prediction problem on a wide range of tasks (Figure 1 left).",
"Since we fine-719 Instruction x1 y1 x' Y' x2 y2 Meta-Update via Gradient Descent In-Context Tuning := Few-shot Adaptation via In-context Learning MAML y1 x1 Instruction y2 x2 Instruction := y' x' Instruction Calculate loss with Meta-Update: Optimize to minimize the loss.",
"tune our model to learn in-context learning, we call our approach in-context tuning (ICT).",
"Unlike optimization-based meta learning approaches such as MAML (Finn et al., 2017), in-context tuning adapts to new tasks through in-context learning where model parameters are frozen, thus it avoids the challenging nested optimization problem in MAML (Figure 1).",
"We benchmark our algorithm on LAMA (Petroni et al., 2019), a dataset for testing models' factual knowledge, and BinaryClfs (Zhong et al., 2021), a wide range of binary classification tasks each annotated with a few language descriptions of the task.",
"Compared to prompting raw LMs, in-context tuning improves performance by 7.6 Precision@1 points on LAMA and 10.6% AUC-ROC score on BinaryClfs.",
"In addition, in-context tuning mitigates the over-sensitivity of raw LM prompting, significantly reducing the variance of the performance with respect to example ordering (by 68% on LAMA and 83% on BinaryClfs), example choices (by 56% on LAMA and 40% on BinaryClfs), and instruction wording (by 19% on LAMA).",
"Our approach also out-performs MAML, which adapts the model by gradient descent on a few examples and learns an initialization that can adapt to a new task through a few gradient steps (Finn et al., 2017; Nichol et al., 2018).",
"Since our approach better takes advantage of the inductive bias of LMs to extrapolate from in-context examples, our approach out-performs first-order MAML by 2.8 points on LAMA and 5.1 points on BinaryClfs, with increasing advantage as models become larger.",
"Given the empirical effectiveness of in-context tuning (Section 4.1), we conjecture that the few-shot learning potential of large LMs (e.g., GPT-3) may be broadly underestimated if prompted without any direct optimization for FSL.",
"We also conjecture that in-context tuning can mitigate various undesirable properties of LM prompting, such as over-sensitivity to example ordering, example choices, and instruction wording (Section 4.2).",
"We introduce the problem setup (Section 2.1), describe our in-context tuning algorithm (Section 2.2), compare our algorithm to gradient-based adaptation methods (Section 2.3) and other baselines (Sec-tion 2.4).",
"We focus on the few-shot classification problem, where the model first learns from a set of training tasks T T train , each associated with its natural language instructions IT and a large amount of task input-output examples DT = { ( x iT , y iT ) } (see Figure 1 left for examples).",
"At test time, we ask the model to learn a new task T given its instruction and only a few ( K ) labeled examples, i.e. S T D T , | S T | = K .",
"We denote the task input to be predicted at test time as x target T .",
"Note that task input is different from model input.",
"For example, on the left panel of Figure 1, the task input is Good movie! while the model 720 input can be a concatenation of the instruction, task inputs and task outputs.",
"In-context tuning directly optimizes pre-trained LMs with the few-shot in-context learning objective (Brown et al., 2020): task-agnostic LMs are meta-trained to perform few-shot in-context learning on a wide variety of training tasks.",
"Similar to in-context learning, LMs trained with in-context tuning adapt to a new task by using few-shot training examples as the input prefix.",
"Formally, during meta-training, we build the model input by concatenating the task instruction IT , task input-output pairs ST DT , and the task input x target T 1 to be classified.",
"We then fine-tune a pre-trained LM to predict y target T and hope that the model learns to use the in-context examples ST .",
"Here is the few-shot in-context tuning objective L : LT ( ) := (cid:88) ( x tgt T ,y tgt T ) DT [ log p ( y tgt T | x tgt T , ST , IT )] (1) L ( ) := (cid:88) T T train LT ( ) (2) To adapt to a new task T at test time, we directly concatenate the few-shot examples S T with the instruction I T and the target task input x target T to be classified to form the model input, and ask the model to predict its corresponding output.",
"No gradient update is performed during adaptation.",
"We compare in-context tuning with two classical few-shot learning methods: multi-task fine-tuning (instruction tuning + fine-tuning) and MAML.",
"Both methods adapt the model parameters to new tasks by gradient descent on few-shot examples.",
"Instruction Tuning + Fine-tuning (InsT + FT) We extend the recent work on zero-shot instruction tuning (Wei et al., 2021) to the FSL setting as a multi-task fine-tuning baseline.",
"During meta-training, the model is optimized to predict the task output given the task instruction and the task input on a wide range of tasks (Zhong et al., 2021).",
"Formally, we train the model parameter to predict y iT given IT x iT , where is shared across all tasks and represents the concatenation operation.",
"During the few-shot adaptation phase, the model is presented with a new task T , its natural language instruction I T and a small set of ( K ) task input-output examples S T = { ( x i T , y i T ) | i [ K ] } .",
"We then fine-tune the model to predict the task output y i T from the new task given I T x i T and update with a few gradient steps to get T .",
"Finally, we use the updated model T to predict the output from the task input x target T and the instruction I T under the test task T .",
"MAML The few-shot adaptation stage of MAML is the same as instruction tuning + fine-tuning, where we update the model parameters (ini-tialized with ) by gradient descent on K examples S T D T .",
"However, during meta-training, MAML aims to learn a task-agnostic model initialization such that, T , which is to be found by initializing with and performing gradient descent on ST , would lead to good performance (Finn et al., 2017).",
"Therefore, MAML involves two levels of optimization, an inner optimization to learn T given and ST DT , and an outer optimization to learn given T .",
"Due to the bi-level structure in this optimization problem, MAML has been found to be empirically unstable, sensitive to hyperparameters, and computationally expensive (Finn et al., 2017; Nikolaev et al., 2020).",
"Even worse, few-shot task adaptation is known to be highly sensitive to optimization hyperparameters (Antoniou et al., 2019), while a large labeled validation set for hyperparameter tuning may not be available under a FSL setting (Perez et al., 2021).",
"In comparison, in-context tuning simplifies the two-stage process of (1) few-shot task adaptation and (2) task-specific prediction as one sequence prediction problem, where task-specific examples are concatenated to the model input to provide information about the task.",
"Hence, in-context tuning removes the bi-level optimization during meta-training, which can be empirically unstable and expensive.",
"Additionally, since model weights are frozen during task adaptation, it is not sensitive to adaptation hyperparameters.",
"Raw In-context Learning (Raw IC-L) We directly evaluate a raw LM on a new task using the same evaluation set-up for in-context tuning, without fine-tuning the LM on any labeled data.",
"Instruction Tuning (InsT) The model learns to predict the target output only based on the instruction and the target input.",
"Only the instruction is available during the adaptation phase, and this setup is also known as zero-shot learning.",
"We categorize all approaches in our paper based on their meta-training objective and how they use task-specific examples in Table",
"1. In-context tuning is the only method that directly optimizes the FSL objective without gradient-based adaptation.",
"We experiment with two meta-datasets that contain a wide range of tasks, LAMA and BinaryClfs.",
"Each task is associated with several different natural language descriptions, and we call them instructions for convenience, even though some of them are realized as questions.",
"LAMA LA nguage M odel A nalysis (Petroni et al., 2019) is a dataset that tests the factual and commonsense knowledge learned by LMs.",
"In our experiments, we use the TREx-UHN portion of LAMA (Poerner et al., 2020), which consists of (subject, relation, object) triples from Wikidata.",
"LAMA is an entity prediction task, where a model is asked to predict the object entity given the subject entity and the relation.",
"In our experiments, we treat one relation as a task as in Perez et al. (2021).",
"Initial experiments on LAMA showed that LMs take significant advantage of majority label bias (Zhao et al., 2021), where they assign higher probability to object entities that have appeared in the in-context examples, thus inflating the accuracy.",
"To reflect the improvement due to few-shot learning rather than this simple heuristic to copy answers, for all tasks we prune the LAMA dataset so that all object entities appear less than 2.5% of times.",
"Our final filtered LAMA dataset consists of 29 relations (tasks) and 12k (subject, relation, object) examples.",
"We use task instructions from two datasets: LAMA and LPAQA (Jiang et al., 2020).",
"LAMA contains one task instruction for each task, and the auxiliary LPAQA dataset contains on average 10 additional instructions for each LAMA task.",
"We use the same evaluation protocol as in Petroni et al. (2019): 1) the object entity is predicted from a pre-defined vocabulary set of 21k words (each LAMA task is 21k-way classifica-tion); 2) we compute mean precision at one (P@1) for each task, and report the average across tasks.",
"Because LAMA does not have an official train-validation-test split, we use 8-fold cross-validation in our experiments.",
"We randomly partition the 29 tasks into 8 groups of similar sizes.",
"For each cross-validation split, we use six groups for training, one group for validation, and one group for testing.",
"The test sets of the eight folds are disjoint and their union is the set of all tasks.",
"BinaryClfs This dataset contains a wide range of binary cl assi f ication task s , and each task can be described by 1-4 yes/no\" questions, which we concatenate to the input context as instructions. There are in total 204 different tasks, and 73 of them are used for testing, which include sentiment classification, topic classification, definition detection, stance classification, etc. We use the same evaluation protocol as in Zhong et al. (2021): 1) we group the tasks by similarity and do not allow training tasks to be similar to testing tasks; 2) we treat Yes answer as the positive class and calculate the AUC-ROC score for each instruction of each task.",
"To fit model inputs (concatenation of in-context examples and task input to classify) within the maximum context length (1024) of our LMs, we leave out five evaluation tasks where the maximum task input length exceeds 230 BPE tokens.",
"We also leave out the spam classification task due to its small test set.",
"BinaryClfs does not come with an official validation set.",
"To perform hyperparameter tuning, for each testing group, we randomly sample another testing group as its validation group.",
"Architecture We use BERT models for LAMA (BERT-Base [110M parameters], BERT-Large [340M] and DeBERTa-XLarge-V2 [900M]) and GPT2 models for BinaryClfs (GPT2-Medium [345M] and GPT2-Large [774M]).",
"We use the Hug-722 LAMA BinaryClfs BERT-Base BERT-Large DeBERTa-xlarge GPT2-M GPT2-L 0-S 1-S 2-S 5-S 0-S 1-S 2-S 5-S 0-S 1-S 2-S 5-S 0-S 5-S 0-S 5-S Raw IC-L 10.3 8.5 10.8 14.1 12.7 12.1 15.4 18.6 11.2 12.6 20.6 23.7 50.5 57.8 51.0 58.3 InsT + FT / 17.5 18.6 20.0 / 21.6 22.6 23.9 / 24.7 25.6 27.0 / 67.0 / 69.4 ICT 14.6 16.3 17.6 19.6 18.0 21.6 23.4 24.3 21.9 26.0 27.5 28.8 62.9 67.4 66.3 69.8 Raw IC-L w/o Ins 1.5 4.9 8.7 12.3 1.4 3.5 7.0 12.5 2.7 13.0 19.5 22.6 / / / / ICT w/o Ins 7.1 14.6 17.0 18.2 9.3 19.4 19.9 22.9 10.6 23.5 26.0 27.6 / / / / Table 2: Few-shot learning accuracy of our in-context tuning approach (ICT) compared to in-context learning with raw LMs (Raw IC-L) and instruction tuning + fine-tuning (InsT + FT).",
"Hyperparameters We select hyperparameters based on few-shot classification accuracy on validation tasks.",
"Our validation tasks and testing tasks are disjoint, so hyperparameter tuning on validation tasks does not use extra labeled examples on the testing tasks (Perez et al., 2021).",
"See Appendix A for the hyperparameters we tuned.",
"Sampling Different instructions and few-shot example choices can lead to different predictions (Section 2.2).",
"At training time, we expose the model to diverse task instructions and few-shot choices by randomly sampling task instructions and few-shot examples for each target example.",
"At test time, we report the average accuracy across task instructions and few-shot choices.",
"Since computing the average across all few-shot choices is intractable (there are combinatorically many distinct few-shot choices), we thus calculate the average accuracy of multiple random samplings of few-shot choices as approximation.",
"In-context tuning out-performs MAML and various baselines on the two text classification meta-datasets (Section 4.1).",
"It also significantly reduces model sensitivity to instruction wording, example choices, and example ordering compared to prompting raw LMs (Section 4.2).",
"In-context tuning improves in-context learning accuracy over raw LMs.",
"We compare ICT with Raw IC-L in Table",
"2. In-context tuning consistently out-performs raw LM prompting by 7.6 points on LAMA and 10.6 points on BinaryClfs (averaged across model size and number of few-shots).",
"As expected, directly optimizing the few-shot in-context learning objective (Section 2.2) improves the few-shot in-context learning accuracy.",
"Few-shot examples lead to more effective task adaptation.",
"We compare few-shot in-context tuning with instruction tuning (equivalent to 0-shot ICT) in Table",
"2. Few-shot in-context tuning consistently out-performs instruction tuning on both LAMA and BinaryClfs, with increasing performance gains as number of shots increases.",
"Specifically, we observe that 5-shot in-context tuning out-performs instruction tuning by 6.1 points on LAMA and 4.0 points on BinaryClfs.",
"Results show that demonstration examples besides task instructions facilitate more effective task adaptation.",
"tuning (equivalent to 0-shot ICT) of Table 2, we see that MAML out-performs instruction tuning in most evaluation settings, which indicates that MAML is indeed able to take advantage of the few-shot task examples for task adaptation.",
"However, Table 3 shows that our approach of 5-shot in-context tuning out-performs 5-shot MAML consistently on both datasets with an accuracy gain of 2.8 points on LAMA and 5.1 points on BinaryClfs (averaged across model size).",
"We argue that in-context tuning out-performs MAML because in-context tuning better leverages the existing inductive bias of pre-trained LMs to perform pattern matching with in-context examples.",
"We also compare in-context tuning to the pipeline of instruction tuning + task-specific fine-tuning (Table 2).",
"Surprisingly, fine-tuning an instruction-tuned model on as few as one task-specific example significantly improves task accuracy, without over-fitting to the few labeled examples.",
"We observe that instruction tuning + 1-shot fine-tuning out-performs instruction tuning (equiv-alent to 0-shot ICT) by 3.1 points on LAMA (Ta-ble 2).",
"Our in-context tuning approach performs comparable or better than instruction tuning + fine-tuning, with increasing accuracy gains as models get bigger (Table 2).",
"For DeBERTa-XLarge-v2 (the largest models we use in this work), in-context tuning out-performs InsT + FT across all numbers of shots, achieving an accuracy gain of 1.7 points on LAMA (averaged across all numbers of shots).",
"We conjecture that in-context tuning will be increasingly effective for bigger models that have a stronger inductive bias of pattern matching.",
"In-context tuning reduces the need of task instructions.",
"As coming up with good task instructions can be hard (Schick and Schtze, 2021a; Jiang et al., 2020), we further investigate the effectiveness of in-context tuning without task instructions (Table 2).",
"In-context tuning is effective in the no-instruction setting as well, consistently out-performing raw in-context learning with no instructions by an average margin of 9.5 points on LAMA.",
"Comparing raw in-context learning with (Raw IC-L) and without instructions (Raw IC-L w/o Ins) (Table 2), we observe that task instructions yield the most significant performance gains when model size is relatively small (+2.5 points on BERT-Base, +7.7 points on BERT-Large, only +0.6 points on DeBERTa-xlarge).",
"We conjecture that smaller models may be weaker at inferring patterns LAMA BinaryClfs BB BL GPT2-M GPT2-L Raw IC-L 1.82 2.14 9.26 8.84 ICT 0.66 0.61 1.41 1.58 Table 4: In-context tuning is significantly less sensitive to example ordering compared to in-context learning with raw LMs.",
"from in-context examples alone compared to larger models, which is why instructions yield larger performance gains on smaller models.",
"On BERT-Base and BERT-Large models where task instructions are most helpful, in-context tuning reduces the improvement gain from task instructions from 5.1 points (raw in-context learning) to 1.8 points (aver-aged across BERT-Base and BERT-Large), which indicates that in-context tuning reduces the need of task instructions compared to raw in-context learning.",
"However, we note that instructions still yield performance improvement even if in-context tuning is applied.",
"We analyze the sensitivity of in-context tuning accuracy with respect to example ordering, example choices, and instruction wording, and compare it with prompting raw LMs.",
"Let I denote a random selection of task instruction, ST a random unordered set of few-shot training examples with size K , a random permutation of K examples.",
"The accuracy is a function of these three random variables, i.e. : ( ST , , I ) (cid:55) [0 , 1] .",
"We can decompose the total variance of into its variance w.r.t. each of the three random variables, since they are independent (order variance is independent to choice variance because ST is unordered ): Var ST ,,I [ ] = Var I [ EST , [ | I ]] (cid:124) (cid:123)(cid:122) (cid:125) instruction wording variance + EI [ Var ST [ E [ | I, ST ]]] (cid:124) (cid:123)(cid:122) (cid:125) example choice variance + E I,S T [ Var [ | I, ST ]] (cid:124) (cid:123)(cid:122) (cid:125) example order variance We analyze each type of variance below.",
"tuning and in-context prompting with raw LMs in Table",
"4. Results show that in-context tuning is significantly less sensitive to ordering of in-context examples compared to in-context prompting with raw LMs, reducing the sensitivity by 68% on LAMA and 83% on BinaryClfs.",
"In-context tuning is significantly less sensitive to example choices.",
"We compare the variance with respect to example choices for in-context tuning and in-context prompting with raw LMs in Table",
"5. Results show that in-context tuning is significantly less sensitive to selection of in-context examples compared to in-context prompting with raw LMs across both datasets and all model sizes, reducing the sensitivity by 56% on LAMA and 40% on BinaryClfs (averaged across model sizes).",
"We conjecture that in-context tuning is significantly less sensitive to example ordering and selection because the model is exposed to various example orderings and selections during in-context tuning.",
"In-context tuning is less sensitive to instruction wording.",
"We report the variance with respect to instruction wording for in-context tuning and in-context prompting with raw LMs in Table",
"6. Results show that in-context tuning is less sensitive to instruction wording compared to in-context prompting with raw LMs in five out of six evaluation settings, reducing the variance by 19% on LAMA (averaged across model size and number of shots).",
"We also observe that in-context tuning is especially effective on task instructions with low accuracy under raw in-context learning.",
"For each task, we compute the Pearson correlation between the raw in-context learning accuracy and the accuracy gain from in-context tuning (over raw in-context learning) on all instructions.",
"On the LAMA dataset, we see a strong negative correlation of -0.563 (aver-aged across all tasks), with p-value < 0.05 on 63% of the tasks.",
"We conjecture that in-context tuning is much less sensitive to instruction wording because the model is exposed to a wide variety of different task instructions during in-context tuning.",
"instructions.",
"We observe that in-context tuning is especially effective on task instructions with low accuracy under instruction tuning.",
"For each task, we compute the Pearson correlation between the instruction tuning accuracy and the accuracy gain from in-context tuning (over instruction tuning) on all instructions.",
"On the LAMA dataset, we see a strong negative correlation of -0.910 (averaged across all tasks), with p-value < 0.01 on 91% of the tasks.",
"We conjecture that in-context tuning is much less sensitive to instruction wording because few-shot in-context examples provide additional task information besides the task instructions.",
"LM Prompting for FSL Pre-trained LMs can be used to perform various FSL tasks when prompted with a natural language task instruction and several task examples (Radford et al., 2019; Brown et al., 2020; Schick and Schtze, 2021b; Li and Liang, 2021; Lester et al., 2021; Qin and Eisner, 2021).",
"However, prompting pre-trained LMs directly for FSL is known to be sensitive to various artifacts, such as the wording of the task instruction and the selection and ordering of few-shot training examples (Schick and Schtze, 2021a; Jiang et al., 2020; Zhao et al., 2021; Gao et al., 2021; Liu et al., 2021).",
"Our work is the first to show that meta-learning with an explicit FSL objective significantly reduces the sensitivity of LM prompting with respect to the in-context examples and instruction wording.",
"Meta-learning for FSL Meta-learning is a widely used technique in NLP to improve cross-domain transfer (Yu et al., 2018; Geng et al., 2019; Holla et al., 2020; Deng et al., 2020) and cross-task transfer (Gu et al., 2018; Bansal et al., 2020; Dou et al., 2019).",
"Existing optimization-based meta-learning methods mostly perform task adap-725 tation by fine-tuning a task-agnostic model on task-specific examples using gradient descent (Finn et al., 2017; Jiang et al., 2019; Nichol et al., 2018).",
"However, fine-tuning on few-shot task examples is sensitive to hyperparameters (Antoniou et al., 2019) and nested optimization during meta-training is often unstable (Nichol et al., 2018; Antoniou et al., 2019; Rajeswaran et al., 2019).",
"In contrast, our approach performs few-shot task adaptation by using task-specific examples as part of the model input while keeping the model parameters frozen and task-agnostic during the adaptation stage.",
"Multi-task Learning In multi-task learning, a single model is trained on the union of training sets of multiple tasks to learn a shared representation (Liu et al., 2019).",
"The multi-task model is then fine-tuned on task-specific examples to adapt to new tasks.",
"Multi-task learning is shown to improve performance on various downstream tasks, especially tasks with small training sets (Khashabi et al., 2020; Ye et al., 2021; Aghajanyan et al., 2021).",
"Compared to meta-learning, multi-task learning does not optimize task adaptation directly.",
"Fine-tuned LMs for Instruction Learning Recent work shows that fine-tuning LMs to learn task instructions on a wide variety of tasks can further leverage the inductive bias of LMs to perform instruction learning (Zhong et al., 2021; Mishra et al., 2021; Wei et al., 2021).",
"Our work is partially inspired by this line of work, but we work under the more generic few-shot meta-learning setting, and show that our approach out-performs both instruction tuning and existing few-shot meta-learning methods (e.g., MAML).",
"While previous work focuses on the accuracy improvement gained from instruction fine-tuning, our work also looks into the well-known over-sensitivity issue of FSL and shows that in-context tuning effectively reduces the sensitivity of FSL with respect to various factors.",
"Concurrent to our work, Min et al. (2021) also explores in-context tuning under more general Seq2Seq tasks.",
"In comparison, our work compares in-context tuning to a meta-learning baseline MAML, and shows that in-context tuning mitigates the well-known oversensitivity issue of LM prompting.",
"Contrary to our paper, Min et al. (2021) finds that in-context tuning under-performs InsT + FT.",
"This might be because they use many more shots (16-shot), which could give gradient-based methods more advantage.",
"Scaling Up and Broader Applications Our work only considers simple binary classification and knowledge retrieval tasks, at most 5 in-context examples, and models with fewer than 1 billion parameters.",
"Nevertheless, it is straightforward to scale up our framework to a wider and more diverse range of general sequence-to-sequence tasks (Ye et al., 2021), more few-shot examples (which requires a longer context size (Dai et al., 2019; Wang et al., 2020)), and larger models (Brown et al., 2020; Kaplan et al., 2020).",
"It is also straightforward to apply in-context tuning to a broader range of scenarios that require adapting to a new setup, e.g., adapting to a new label in classification tasks (Xia et al., 2021), an unseen database in semantic parsing tasks (Suhr et al., 2020; Lee et al., 2021), or a new language pair in machine translation (Gu et al., 2018; Aharoni et al., 2019), etc.",
"Meta-learning for Robustness Our work assumed that the few-shot training examples come from the same distribution as the test examples, but this assumption does not necessarily hold in practice.",
"For example, the test distribution might constitute new input compositions (Lake and Baroni, 2018), rare subgroups (Sagawa et al., 2019), other types of distribution shifts (Hendrycks and Diet-terich, 2019), or even adversarial examples (Kang et al., 2019).",
"More effective meta-learning methods might learn a more robust learning mechanism and combat these generalization challenges.",
"Understanding In-context Learning Many properties of in-context learning are still unknown.",
"Is in-context learning more robust to distribution shift (Lester et al., 2021)?",
"Can we combine in-context learning and gradient learning to get the benefit of both worlds (Wortsman et al., 2021)?",
"In this work, we propose meta-learning via in-context tuning, which recasts the few-shot learning process of task adaptation and task-specific prediction as a simple sequence prediction problem, where few-shot labeled examples are concatenated with the target example to form the model input.",
"In-context tuning out-performs a wide variety of baselines in terms of accuracy, including raw LM prompting, MAML and instruction tuning.",
"Meanwhile, sensitivity study shows that our FSL approach of in-context tuning is significantly 726 less sensitive to few-shot examples and instruction wording compared to raw LM prompting.",
"Given the empirical effectiveness of in-context tuning, we conjecture that the few-shot learning potential of large LMs (e.g., GPT-3) might be broadly underestimated, and that in-context tuning can eliminate well-known artifacts of few-shot LM prompting such as over-sensitivity to example ordering, example selection and instruction wording."
] | [
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain"
] |
[
"Existing automatic evaluation metrics for open-domain dialogue response generation systems correlate poorly with human evaluation.",
"We focus on evaluating response generation systems via response selection.",
"To evaluate systems properly via response selection, we propose a method to construct response selection test sets with well-chosen false candidates.",
"Specifically, we propose to construct test sets filtering out some types of false candidates:",
"(i) those unrelated to the ground-truth response and",
"(ii) those acceptable as appropriate responses.",
"Through experiments, we demonstrate that evaluating systems via response selection with the test set developed by our method correlates more strongly with human evaluation, compared with widely used automatic evaluation metrics such as BLEU.",
"Automatic evaluation for open-domain dialogue generation systems has a potential for driving their research and development because of its high reproducibility and low cost.",
"However, existing automatic evaluation metrics, such as BLEU (Papineni et al., 2002), correlate poorly with human evaluation (Liu et al., 2016).",
"This poor correlation arises from a nature of dialogue, that is, there are many acceptable responses to an input context, known as the one-to-many problem (Zhao et al., 2017).",
"To tackle this problematic issue, we focus on evaluating response generation systems via response selection.",
"In this task, systems select an appropriate response for a given context from a set of response candidates.",
"Each candidate has the label that indicates whether the candidate is appropriate response for the given context.",
"Traditionally, response selection has been used to evaluate retrieval-based dialogue systems (Lowe et al., 2015; Wu et al., 2017).",
"We consider applying this task to driving the research for dialogue generation Repository Context: Do you have a car?",
"systems.",
"Specifically, we consider using response selection to pick out promising systems that should be evaluated more precisely by humans among a lot of candidate systems.",
"We assume that response selection is a valid option for such a preliminary evaluation on the basis of the following assumption: systems that can generate appropriate responses can also select appropriate responses.",
"One advantage of evaluating generation systems via response selection is that it can remedy the one-to-many problem, because we do not have to consider the appropriate responses that are not included in sets of response candidates.",
"Another advantage is that it enables a simple and clear comparison between systems in accuracy.",
"Generally, false response candidates are randomly sampled from a repository (Lowe et al., 2015; Gunasekara et al., 2019), which causes two problems:",
"(i) unrelated false candidates and",
"(ii) acceptable utterances as false.",
"The first problem is that randomly sampled false candidates are often too far from ground-truth responses.",
"Consider the case where for a given context Do you have a car?, a response candidate I play tennis. is randomly sampled.",
"Systems can easily recognize this candidate as a false one because there are no related content words between them.",
"Such excessive easiness is not preferable because the performance gap between good and inferior systems tends to be small.",
"The second problem is that there is no guarantee that randomly sampled candidates are always unacceptable ones.",
"For example, I don't know. is often sampled as a false response because this phrase often occurs in open-domain dialogues.",
"This phrase can be regarded as acceptable for various contexts.",
"These two problems make general response selection test sets unreliable.",
"In this work, we propose a method to construct response selection test sets with well-chosen false candidates (Figure 1).",
"First, we retrieve only utterances related to the ground-truth response.",
"Then we remove acceptable utterances by human evaluation.",
"Through experiments, we demonstrate that automatic evaluation using the test set developed by our method correlates more strongly with human evaluation, compared with widely used automatic evaluation metrics such as BLEU.",
"Our empirical results indicate that response selection with wellchosen false candidates can be a valid option for evaluating response generation systems.",
"We will release the test set used in the experiments.",
"1 2 Related Work Automatic evaluation metrics Various metrics have been proposed for automatic evaluation of dialogue systems, such as BLEU, METEOR (Baner-jee and Lavie, 2005), ROUGE (Lin, 2004), Greedy Matching (Rus and Lintean, 2012), and Vector Extrema (Forgues et al., 2014).",
"These metrics evaluate the quality of the responses generated by systems.",
"However, this is challenging due to the one-to-many problem.",
"For example, ADEM, a metric proposed by (Lowe et al., 2017), is easily fooled by adversarial examples (responses) (Sai et al., 2019).",
"To remedy one-to-many problem, we focus on evaluating systems via response selection.",
"Response selection test sets with human labels One popular test set for response selection is Douban Conversation Corpus in Chinese (Wu et al., 2017).",
"In this test set, each response candidate has a manually annotated label that indicates whether or not the candidate is appropriate for the given context.",
"Although this test set is similar to ours, 1 The test set is available at https://github.com/ cl-tohoku/eval-via-selection .",
"there are some differences between the purposes and procedure of test set designs.",
"The purpose of creating their test set is to simulate and evaluate retrieval-based dialogue systems.",
"Thus, all the candidates in this corpus are retrieved by using the context as queries, as retrieval-based systems do.",
"In this paper, we develop an English response selection test set with human labels to evaluate dialogue generation systems.",
"One of the salient differences from Douban Conversation Corpus is the procedure of retrieving false candidates.",
"We retrieve false candidates using the ground-truth responses.",
"By this method, we can more certainly collect false candidates that are related to ground-truth responses and facilitate error analysis as described in Section 4.3.",
"For each context c and ground-truth response r true , we construct a set of false response candidates r false R false by retrieving utterances from an utterance repository u U .",
"As we mentioned in Section 1, we want to filter out some types of utterance:",
"(i) those unrelated to the ground-truth response and",
"(ii) those acceptable as appropriate responses.",
"We filter out such utterances as follows: 1. Retrieve M utterances, { u 1 , , u M } , related to the ground-truth response r true from the utterance repository U .",
"2. Remove acceptable ones from the retrieved utterances by human evaluation.",
"ground-truth response We assume that utterances related to the ground-truth response share some similar content words between them.",
"Here, we retrieve the related utterances on the basis of the similarities of the content words.",
"This process makes it difficult for systems to distinguish between ground-truth and false candidates only by comparing the content words.",
"2. Remove acceptable utterances Coincidentally, some of the retrieved utterances may be acceptable as an appropriate response.",
"To remove such utterances, we ask human annotators to evaluate each retrieved utterance.",
"Specifically, we instruct five annotators (per candidate) to score each retrieved candidate in a five-point scale from 1 to 5 .",
"A score of 5 means that the utterance can clearly be regarded as an appropriate response for the given context, whereas a score of 1 means that it cannot be regarded as an appropriate one at all.",
"In addition to the scores, we also instruct annotators to give a score of 0 to ungrammatical utterances.",
"We remove the utterances that are given a score of 3 or higher by three or more annotators because these utterances with a high score can be acceptable.",
"In addition, we remove the utterances that are given a score of 0 by three or more annotators because these are likely to be ungrammatical ones.",
"We also instruct annotators to score ground-truth responses, combining them with retrieved utterances.",
"We remove the questions if the score of the ground-truth response is low, i.e., three or more annotators give a score of 3 or lower.",
"This is intended to ensure that ground-truth responses are certainly appropriate for the given context.",
"Settings of test set construction We retrieve 10 utterances (per question) from the repository and remove acceptable ones following the method described in Section 3.1.",
"We use crowdsourcing 2 to score the retrieved utterances.",
"After removing acceptable utterances, there are some questions that have 6 or more available false candidates.",
"From these questions, we develop new questions with the same context but different candidates (both ground-truth responses and false candidates).",
"We regard one of acceptable utterances removed by human evaluation as the ground-truth responses of new questions.",
"We use the dialogue data from DailyDialog (Li et al., 2017) to construct the test set.",
"We extract the four beginning turns of each dialogue sample from DailyDialog, regarding the fourth utterance as the ground-truth response.",
"We extract the utterances of OpenSubtitles2018 (Lison et al., 2018) to construct the repository used to retrieve false candidates.",
"Note that the repository does not contain the utterances in the dialogue data used to train response generation systems in Section 4.1.",
"Statistics of our test set We developed the test set that consists of 1 , 019 questions with 4 candidates ( 1 ground-truth + 3 false candidates).",
"Table 1 shows the basic statistics of our test set.",
"The Fleiss' Kappa (Fleiss, 1971) of the annotators' scoring in the six scale is 0 .",
"22 .",
"3 Note that if we 2 https://www.mturk.com/ 3 We calculated Fleiss' Kappa based on the scale of the scores as categorical.",
"regard the scoring as binary classification (scores higher than 3 are regarded as appropriate responses, and the others not), the Fleiss' Kappa of the scoring is 0 .",
"63 , which is higher than Douban Conversation Corpus ( 0 . 41 ).",
"Example of our test set Table 2 shows an example of our test set.",
"All the false response candidates share the same content word focus related to the topic camera.",
"Preliminary experiments We conducted a simple experiment to investigate whether or not a system that takes only content words into account can recognize false response candidates in our test set.",
"For the model, we used the TF-IDF model (Lowe et al., 2015), which simply compares between content words of a given context and each candidate.",
"As a result, the accuracy was 0 .",
"461 .",
"For a comparison, we also replaced all the false candidates in our test set with randomly sampled utterances.",
"The accuracy of the same TF-IDF model increased to 0 .",
"671 .",
"These results indicates that it is difficult to recognize false candidates in our test set only by comparing content words.",
"We test whether the automatic evaluation of response generation systems on our test set correlates with human evaluation.",
"We train multiple response generation systems and rank them on the basis of human and automatic evaluation scores.",
"By comparing between the system ranking by human scores and the ranking by each automatic score, we verify the correlations.",
"We train 10 different response generation systems to be ranked in the experiments.",
"Their architectures are ones of Seq2Seq with GRU (Cho et al., 2014), Seq2Seq with LSTM (Hochreiter and Schmidhu-ber, 1997), or Transformer (Vaswani et al., 2017).",
"Some systems have same architecture, but different hyper-parameters.",
"4 We train the models on OpenSubtitles2018.",
"The training data consists of 5M samples and the validation data consists of 0.05M samples, each of which is four-turns dialogue.",
"Ground-truth system ranking by human scores The trained systems generate a response r gen for each input context c C .",
"Then, five human annotators (per response) score each generated response r gen in a five-point scale from 1 to 5. A score of 5 means that the response can clearly be regarded as an appropriate response for the given context, whereas a score of 1 means that it cannot be regarded as an appropriate one at all.",
"As a result, we obtain five scores, { s 1 , s 2 , , s 5 } , for each response r gen and average them: s mean = mean( s 1 , s 2 , , s 5 ) .",
"We also average s mean across all the questions in the test set and yield the final score s final for each system.",
"Based on this score, we make a ranking of the systems and regard it as the ground-truth ranking.",
"Although we developed the test set that consists of 1,019 questions, it is too costly to evaluate all the 10 systems' responses for 1,019 questions by humans.",
"Thus we give the context of 56 randomly sampled questions from our test set to the 10 systems as inputs C .",
"System ranking by response selection accuracy We rank the systems by response selection accuracy with well-chosen false candidates (CHO-SEN).",
"The trained response generation systems compute the softmax cross-entropy loss (cid:96) r for each response candidate r R .",
"We regard the candidate with the lowest loss as the system's selection: 4 We describe the model settings in Appendix B. Metrics Spearman p-value BLEU-1 0.36 0.30 BLEU-2 0.085 0.82 METEOR 0.073 0.84 ROUGE-L 0.35 0.33 RANDOM 0.43 CHOSEN 0.48 0.19 HUMAN 0.87 0.0038 Table 3: Correlations between the ground-truth system ranking and the rankings by automatic evaluation.",
"For comparison, we also make a ranking by response selection accuracy with randomly sampled false candidates (RANDOM).",
"5 We compute the accuracy of CHOSEN and RANDOM using all 1 , 019 questions from our test set.",
"System ranking by other evaluation metrics For comparison, we also make rankings of the systems by three existing automatic evaluation metrics: BLEU, METEOR, and ROUGE-L.",
"First, the trained systems generate a response for each input context.",
"Then we compute the scores comparing generated responses and the ground-truth responses.",
"These scores can be computed automatically without false candidates.",
"Thus we compute them using all 7 , 393 available four-turns dialogue samples from DailyDialog, regarding the fourth utterances as the ground-truth responses.",
"We compare the rankings by Spearman's rank correlation coefficients, shown in Table 3. First, we yielded the human upper bound.",
"we evaluated the correlation between the rankings made by different annotators (HUMAN).",
"We randomly divided human evaluation into two groups and made two rankings.",
"The correlation coefficient between the two rankings was 0 .",
"87 .",
"Second, we found that the rankings made using existing automatic evaluation metrics correlate poorly with ground-truth ranking.",
"BLEU, often used to evaluate generation systems, does not correlate with human evaluation at all.",
"One exception is ROUGE-L.",
"However, 5 We compute the coefficient of RANDOM by averaging the coefficients of different 100 trials.",
"its correlation coefficient is lower than 0 .",
"4 , which means reasonable correlation.",
"Third, we found that the ranking made by using our test set reasonably correlates with the ground-truth ranking compared with other metrics, and the correlation coefficient (CHOSEN) is higher than 0 .",
"4 .",
"Instability of evaluation with random sampling The correlation coefficient of the ranking by response selection with randomly sampled false candidates (RANDOM) is higher than that of BLEU and slightly lower than that of CHOSEN.",
"However, a serious problem has been observed: the instability.",
"We make 100 test sets, each of which consists of different false candidates by random sampling with different seeds.",
"For each test set, we make a system ranking and compute its coefficient.",
"Figure 2 shows the box plot of the Spearman's rank correlation coefficients of the trials.",
"The range of the coefficients is very wide (0.06-0.67).",
"This result means that the quality of evaluation with randomly sampled false candidates strongly depends on the sampled candidates, which is the uncontrollable factor stemming from the randomness.",
"Interpretable error analysis Our automatic evaluation with well-chosen false candidates brings another benefit: the interpretable error analysis.",
"Table 4 shows an example of a question of our test set.",
"The well-chosen false candidate (CHOSEN) is similar to the ground-truth response.",
"However, the grammatical subject of the CHOSEN sentence is You, which completely mismatches the context.",
"Thus if systems select this false candidate, they may lack the ability to determine correctly the subject of sentences.",
"In this way, our test set enables us to analyze systems' predictions from various meaningful perspectives.",
"As a case study, we design a set of error labels, each of which indicates why the false candidate is false, and assign them to 50 false candidates in our test set.",
"We succeed in assigning the labels to 22 out of 50 candidates.",
"6 Limitation Our test set is designed to evaluate open-domain dialogue generation systems.",
"Thus, it is not suitable for evaluating other types of dialogue system such as task-oriented ones.",
"By contrast, existing automatic evaluation metrics, such as BLEU, do not have this type of restriction.",
"In this paper, we focused on evaluating response generation systems via response selection.",
"To evaluate systems properly via response selection, we proposed a method to construct response selection test sets with well-chosen false candidates.",
"Specifi-cally, we proposed to construct test sets filtering out some types of false candidates:",
"(i) those unrelated to the ground-truth response and",
"(ii) those acceptable as appropriate responses.",
"We demonstrated that evaluating systems via response selection with the test sets developed by our method correlates more strongly with human evaluation, compared with that of widely used metrics such as BLEU.",
"In the future, we will provide labels that indicate Why this candidate is false for false candidates in our test set, so that one can easily detect weak points of systems through error analysis.",
"This work was partially supported by JSPS KAK-ENHI Grant Number JP19H04162.",
"We would like to thank the laboratory members who gave us advice and all reviewers of this work for their insightful comments."
] | [
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"other",
"other"
] |
[
"Word vectors and Language Models (LMs) pretrained on a large amount of unlabelled data can dramatically improve various Natural Language Processing (NLP) tasks.",
"However, the measure and impact of similarity between pretraining data and target task data are left to intuition.",
"We propose three cost-effective measures to quantify different aspects of similarity between source pretraining and target task data.",
"We demonstrate that these measures are good predictors of the usefulness of pretrained models for Named Entity Recognition (NER) over 30 data pairs.",
"Results also suggest that pretrained LMs are more effective and more predictable than pretrained word vectors, but pretrained word vectors are better when pretraining data is dissimilar.",
"Modern neural architectures for NLP are highly effective when provided a large amount of labelled training data (Zhang et al., 2015; Conneau et al., 2017; Bowman et al., 2015).",
"However, a large labelled data set is not always readily accessible due to the high cost of expertise needed for labelling or even due to legal barriers.",
"Researchers working on such tasks usually spend a considerable amount of effort and resources on collecting useful external data sources and investigating how to transfer knowledge to their target tasks (Qi et al., 2009; Kim et al., 2017).",
"Recent transfer learning techniques make the most of limited labelled data by incorporating word vectors or LMs pretrained on a large amount of unlabelled data.",
"This produces dramatic improvements over a range of NLP tasks where appropriate unlabelled data is available (Pe-ters et al., 2017, 2018; Akbik et al., 2018; Devlin et al., 2019).",
"However, there is still a lack of systematic study on how to select appropriate data to pretrain word vectors or LMs.",
"We observe a range of heuristic strategies in the literature: (1) collecting a large amount of generic data, e.g., web crawl (Penning-ton et al., 2014; Mikolov et al., 2018); (2) selecting data from a similar field (the subject matter of the content being discussed), e.g., biology (Chiu et al., 2016; Karimi et al., 2017); and, (3) selecting data from a similar tenor (the participants in the discourse, their relationships to each other, and their purposes), e.g., Twitter, or online forums (Li et al., 2017; Chronopoulou et al., 2019).",
"In all these settings, the decision is based on heuristics and varies according to the individual's experience.",
"We also conducted a pilot study that suggests that the prac-titioner's intuition is to prioritise field over tenor (see Section 3).",
"Our overarching goal is to develop a cost-effective approach that, given a NER data set, nominates the most suitable source data to pretrain word vectors or LMs from several options.",
"Our approach builds on the hypothesis that the more similar the source data is to the target data, the better the pretrained models are, all other aspects (such as source data size) being equal.",
"We propose using target vocabulary covered rate and language model perplexity to select pretraining data.",
"We also introduce a new measure based on the change from word vectors pretrained on source data to word vectors initialized from source data and then trained on target data.",
"Experiments leverage 30 data pairs from five source and six target NER data sets, each selected to provide a range of fields (i.e., biology, computer science, medications, local business) and tenors (i.e., encyclopedia articles, journal articles, experimental protocols, online reviews).",
"Our contributions can be summarized as below: We propose methods to quantitatively measure different aspects of similarity between source and target data sets and find that these measures are predictive of the impact of pretraining data on final accuracy.",
"To the best of our knowledge, this is the first systematic study to investigate LMs pretrained on various data sources.",
"1 We find that it is important to consider tenor as well as field when selecting pretraining data, contrary to human intuitions.",
"We show that models pretrained on a modest amount of similar data outperform pretrained models that take weeks to train over very large generic data.",
"Text Similarity Word similarity following the hypothesis that similar words tend to occur in similar contexts (Harris, 1954) is well studied and forms the foundation of neural word embedding architectures.",
"Hill et al. (2015) and Budanitsky and Hirst (2006) evaluate functional similarity (as in school versus college ) and associative similarity (as in school versus teacher ) captured by semantic models, respectively.",
"Pavlick et al. (2015) study sentence-level similarity, using entailment relation, vector embedding and stylistic variation measures.",
"Kusner et al. (2015) propose Word Mover's Distance to measure the similarity between documents and evaluate on document classification tasks.",
"We extend the study of similarity to corpus-level, and focus on its implication on unsupervised pretraining.",
"Pretrained Word Vectors The effectiveness of pretrained word vectors mainly depends on three factors: source data, training algorithm, and its hyper-parameters.",
"Turian et al. (2010) and Levy et al. (2015) systematically compare count-based distributional models and distributed neural embedding models.",
"They find that both models can improve the performance of downstream tasks.",
"Chiu et al. (2016) identify the most influential hyper-parameters of neural embedding methods.",
"They also investigate the impact of the source data size and find that larger pretraining data do not necessarily produce better word vectors for biomedical NER.",
"Our work regarding pretrained word vectors is conducted using skip-gram model with default hyper-parameter setting (Mikolov et al., 2013), and our focus is on the impact of 1 Our pretrained word vectors and LMs are publicly available: https://bit.ly/2O0mOOG.",
"similarity between source data and target task data on the effectiveness of pretrained word vectors for NER tasks.",
"Our observations are a useful supplement to the literature as a practitioners' guide.",
"Pretrained Language Models Dai and Le (2015) investigate different methods to transfer knowledge to supervised recurrent neural networks.",
"They establish that a pretrained recurrent LM can improve the generalization ability of the supervised models.",
"They use unlabelled data from Amazon reviews to pretrain the LM and find that it can improve classification accuracy on the Rotten Tomatoes data set.",
"Joshi et al. (2018) empirically showed that, for their vaccination behaviour detection task on twitter data, LMs pretrained on a small amount of movie reviews outperform the ones pretrained on large size of Wikipedia data.",
"Peters et al. (2017) successfully inject the information captured by a bidirectional LM into a sequence tagger, and extend this approach to other NLP tasks (Peters et al., 2018).",
"Our work is based on (Peters et al., 2018) and investigates the impact of pretraining data on the effectiveness of pretrained LMs for downstream NER tasks.",
"Transfer Learning While our study falls into the paradigm of semi-supervised learning, we distinguish ourselves from other studies in transfer learning.",
"One sub-area of transfer learning is domain adaptation, which aims to learn transferable representation from a source domain and apply it to a target domain (Blitzer et al., 2006; Yang and Eisenstein, 2015).",
"The question in domain adaptation is usually framed as Given a source and a target, how to transfer?'.",
"In contrast, the question we address is Given a specific target, which source to choose from?'.",
"The other sub-area of transfer learning is transferring from multiple sources (Yin and Sch utze, 2015; Li et al., 2018).",
"Our work focuses, instead, on the selection of a single external data source.",
"Our work is inspired by the methodology proposed by Johnson et al. (2018) where they predict a system's accuracy using larger training data from its performance on much smaller pilot data.",
"However, we aim to predict the usefulness of pretrained models for target tasks from the similarity between the source pretraining data and the target task data.",
"Named Entity Recognition Our work builds on the literature on deep neural networks applied to sequence tagging tasks.",
"Architectures based on Figure 1: Likert scale ratings from NLP and ML practitioners ( N = 30 ) for the statement Unsupervised pretraining on S would be useful for supervised named entity recognition learning on T.' Target data T is described as Online forum posts about medications,' source data S1 as Research papers about biology and health,' and source data S2 as Online reviews about restaurants, hotels, barbers, mechanics, etc.' different combinations of convolutional and recurrent neural networks have achieved state-of-the-art results on many NER tasks.",
"A detailed review and comparison of these methods can be found in (Yang et al., 2018).",
"Our experiments on the usefulness of pretrained word vectors and pretrained LMs for NER tasks are based on one variant proposed by Lample et al. (2016).",
"Results of a survey capturing intuition regarding selection of pretraining data across 30 NLP or machine learning practitioners is shown in Figure",
"1. Participants were provided short descriptions of the target data set T, and two possible source data sets S1 and S2 as T: Online forum posts about medications; S1: Research papers about biology and health; S2: Online reviews about restaurants, hotels, barbers, mechanics, etc.",
"We constructed each of these descriptions as t about f ' where t is intended to describe the tenor and f the field.",
"Each participant rated both sources on a five-point Likert, indicating agreement with the statement Unsupervised pretraining on S would be useful for supervised named entity recognition learning on T.",
"73% of participants agreed or strongly agreed that S1 would be useful, while only 27% agreed that S2 would be useful.",
"A Wilcoxon signed-rank test indicates that scores are significantly higher for S1 than for S2 ( Z = 43 . 0 , p < 0 . 001 ).",
"Although small in scale, these results show that intuition varies across practitioners, motivating our work on identifying quantitative measures that are predictive of performance.",
"These results also suggest that practitioners favour field over tenor when selecting pretraining data, which would be detrimental to accuracy of the target NER tasks in later experiments (Section 7.2).",
"To measure the similarity between source and target data, we start from identifying linguistic concepts behind these human intuitions.",
"Then, we propose several measures to quantify these attributes which lead to the perception that two data sets are similar.",
"Researchers who select pretraining data from a similar field believe that, if the source data has a similar field to the target data, they tend to share similar vocabulary.",
"Conversely, vocabularies are different from each other if source and target are from different fields.",
"Imagine data sets about medications and restaurants.",
"Those who select pretraining data from a similar tenor believe that tenor may impact the writing style of text.",
"Imagine the participants in online reviews and scientific papers, their relationships to each other, their purposes and how these affect text style, including punctuation, lexical normalization, politeness, emotiveness and so on (Lee, 2001; Solano-Flores, 2006; Pavlick and Tetreault, 2016).",
"Below, we detail different measures based on these intuitions to quantify different aspects of similarity between two data sets.",
"The first measure is simply the percentage of the target vocabulary that is also present in the source data.",
"An extremely dissimilar example is that of different languages.",
"They have a totally different vocabulary and are considered dissimilar, even if they are written in a similar style and talking about the same subject 2 .",
"We propose Target Vocabulary Covered (TVC) as a measure of field, calculated as T V C ( DS , DT ) = | VDS VDT | | VDT | , where VDS and VDT are sets of unique tokens in source and target data sets respectively.",
"We also investigate a variant where only content words (nouns, verbs, adjectives) are used to calculate VDS and VDT .",
"We denote this variant as VCcR .",
"A language model can assign a probability to any sequence of words < w 1 , , w N > using chain rule of probability:",
"where N is the length of the sequence and w i 1 1 are all words before word w i .",
"In practice, this equation can be simplified by n-gram models based on Markov Assumption: p ( w 1 , w 2 , , w N ) = N (cid:89) i =1 p ( w i | w i 1 i n +1 ) , where w i 1 i n +1 represents only n preceding words of w i .",
"To make the model generalize better, smoothing techniques can be used to assign non-zero probabilities to unseen events.",
"In this study, we use Kneser-Ney smoothed 5-gram models (Heafield, 2011).",
"To measure the similarity between two data sets using language modeling, we first train the language model on the source data, then evaluate it on the target data using perplexity to represent the degree of similarity.",
"The intuition is that, if the model finds a sentence very unlikely (dissimilar from the data where this language model is trained on), it will assign a low probability and therefore high perplexity.",
"The summed up perplexity (PPL) is then: P P L ( DS , DT ) = m (cid:88) i =1 P ( D iT ) 1 Ni , where m is the number of sentences in the target data set, and P ( D iT ) is the probability assigned by 2 Our focus is on transferring through pretrained models using one single source and we do not consider multilingual similarity.",
"the language model trained on the source data to the i -th sentence from the target data set, whose sentence length is N i .",
"PPL is token-based, similar to TVC, but also captures surface structure.",
"We therefore propose PPL as a proxy to measure tenor as well as field.",
"Pretrained word vectors capture semantic and syntactic regularities of words (Artetxe et al., 2018).",
"The variance of a word vector that is first trained on the source data and then on the target data can reflect the difference of linguistic regularities between the two data sets.",
"Intuitively, if the context words around a given word are very different in the source and target data, then the word vector of this word learned from the source will be updated more than those words whose context words are similar between source and target.",
"Therefore, we use Word Vector Variance (WVV) as another combined measure of tenor and field.",
"To calculate word vector variance, we first train word vectors on the source data set using skip-gram model (Mikolov et al., 2013).",
"The trained word vectors are denoted as W S R | VS | d , where | VS | is the vocabulary size of the source data set and d is the vector dimension.",
"Then, we use W S as initial weights of a new skip-gram model, and train this new model on the target data.",
"We denote the final word vectors as W T .",
"The WVV can be calculated as: W V V ( DS , DT ) = 1 | VS | 1 d | VS | (cid:88) i d (cid:88) j ( W S ji W T ji ) 2 .",
"The smaller the word vector variance, the more similar context surrounds the same words from the two data sets, and therefore the more similar the two data sets are.",
"Source data sets We use five data sets as source data, covering a range of fields (i.e., clinical, biomedical, local business and Wiki with diverse fields) and tenors (i.e., popular reporting, notes, scholarly publications, online reviews and ency-clopedia).",
"To isolate the impact of source size, we sample all source data to approximately 100 million tokens.",
"We also analyze the impact of source data size separately in Section 7.3.",
"The specifica-tions of these source data sets are given in Table",
"1. Data set Description 1BWB The original one billion word language model benchmark data (Chelba et al., 2013), produced from News Crawl data.",
"Target data sets Six NER data sets are used as target data: CADEC (Karimi et al., 2015), CoNLL2003 (Sang and Meulder, 2003), CRAFT (Bada et al., 2012), JNLPBA (Collier and Kim, 2004), ScienceIE (Augenstein et al., 2017) and WetLab (Kulkarni et al., 2018).",
"Details of these target data are listed in Table",
"2. We choose these data sets based on two considerations:",
"1. NER is a popular structured NLP task.",
"Using NER, we want to observe how the similarity between source and target data may affect the effectiveness of different pretrained word vectors and LMs on downstream tasks.",
"2. NER is highly sensitive to word representations, because the model needs to make token level decisions.",
"That is, each token needs to be assigned a proper label.",
"Past studies have shown that removing pretrained word vectors from a tagging system results in a large drop in performance (Huang et al., 2015; Lample et al., 2016).",
"To investigate the impact of source data on pretrained word vectors and LMs, we pretrain word vectors and LMs on different sources separately, then observe how the effectiveness of these pretrained models varies in different NER data sets.",
"We use the BiLSTM-CRF model, a state-of-the-art model for sequence tagging tasks, as a supervised model for the target NER task.",
"We follow the architecture proposed in (Lample et al., 2016), except that we use two BiLSTM-layers and employ a CNN network to learn character-level representations (Ma and Hovy, 2016).",
"Micro average F 1 score is used to evaluate the performance of the tagger (Sang and Meulder, 2003).",
"Word vectors are pretrained using word2vec with its default hyper-parameter setting (Mikolov et al., 2013).",
"In different experiments, we only replace the word embedding weights initialized by word vectors pretrained on different source data, then make these weights trained jointly with other model parameters.",
"The baseline is denoted as None in Table 3, where word embedding weights are randomly initialized.",
"LMs are pretrained using the architecture proposed by Jozefowicz et al. (2016) with hyper-parameters in (Peters et al., 2018).",
"The supervised model used for NER is the same BiLSTM-CRF model mentioned above, and we follow the approach proposed by Peters et al. (2018) to incorporate the pretrained LMs.",
"Note that these pretrained LMs are character-based.",
"Therefore, words in the target data set are first converted into a sequence of characters, and then fed into the LMs.",
"The contextualized representation of each word is generated using the outputs of all layers of the pretrained LMs, then injected to the input of the second BiL-STM layer of the supervised model.",
"Using proposed similarity measures, we first quantify the similarity between all source-target pairs (Section 7.1), then investigate how these measures can be used to predict the usefulness of pretraining data (Section 7.2).",
"Finally, we take the source data size into consideration, and observe its impact on the effectiveness of pretrained model on both similar and dissimilar source-target settings (Section 7.3).",
"Different aspects of similarity measured between five source and six target data sets are shown in the left side of Table",
"3. The language model trained on PubMed achieves lower perplexity when evaluated on CRAFT, JNLPBA and ScienceIE compared to other sources.",
"On one hand, it is expected that PubMed is similar to CRAFT and JNLPBA, since they are all sampled journal articles about biology and health, thus being similar in terms of both field and tenor.",
"On the other hand, although ScienceIE does not have the same field as PubMed (computer science, material and physics versus biology and health), they are similar because they share a similar tenor (scholarly publications).",
"The measures calculated on CADEC also show that tenor is reflected more than field by PPL and WVV.",
"Source data set Yelp is more similar to CADEC than PubMed and MIMIC from both PPL and WVV perspectives.",
"CADEC is a data set focusing on recognizing drugs, diseases and adverse drug events.",
"The field of CADEC is therefore more similar to PubMed which includes journal articles in health discipline and MIMIC which contains clinical notes.",
"However, CADEC is written by patients, and can be considered as drug re-views'.",
"The tenor is therefore closer to the one in Yelp, where customers use informal language to describe their experiences.",
"All sources are measured against WetLab with relatively high PPL and WVV values.",
"This reflects the fact that the tenor of WetLab (experimental protocols) is different from the tenor of all sources, although WetLab has a similar field (biology) with PubMed which is therefore more similar than other sources.",
"For CoNLL2003, 1BWB which is News Crawl data is the most similar source, while PubMed is the most dissimilar source from PPL perspective, and MIMIC is the most dissimilar one using WVV measure.",
"Although WVV does not distinguish between different sources as PPL does, it still reflects the same trend as PPL regarding which source is the most similar to a given target data set.",
"Can these different similarity measures reach a consensus?",
"Similarity results in Table 3 indicate that using different measures can lead to almost the same answer regarding which source is the most similar one to a given target.",
"To further investigate the level of agreement between different similarity measures, we employ inter-method agreement that we ask a fine-grained question on the results in Table 3: given a target and two sources, do similarity measures make the same conclusion as to which source is more similar?",
"Using the five source and six target data sets, we generate a total of 60 binary comparisons.",
"For example, given WetLab, is 1BWB a more similar source than Wiki?",
"PPL shows that 1BWB is more simi-Similarity NER F 1 Score Pretrained word vectors Pretrained LMs Target Source PPL WVV TVC (%) TVcC (%) F 1 score F 1 score CADEC None 66.14 ( 0.53) 66.14 ( 0.53) 1BWB 307.4 1.137 81.73 82.94 69.44 ( 0.52) 3.30 70.08 ( 0.43) 3.94 MIMIC 1007.0 1.134 78.19 81.69 69.65 ( 0.43) 3.51 70.11 ( 0.48) 3.97 PubMed 927.4 1.195 78.81 79.79 69.84 ( 0.55) 3.70 70.15 ( 0.50) 4.01 Wiki 519.8 1.196 79.74 76.71 69.62 ( 0.15) 3.48 69.32 ( 0.65) 3.18 Yelp 291.1 1.104 80.76 82.28 70.27 ( 0.34) 4.13 70.46 ( 0.52) 4.32 CoNLL2003 None 82.08 ( 0.38) 82.08 ( 0.38) 1BWB 480.6 1.020 75.64 87.35 86.36 ( 0.29) 4.28 89.78 ( 0.12) 7.70 MIMIC 2945.0 1.542 34.47 39.55 84.94 ( 0.35) 2.86 83.68 ( 0.30) 1.60 PubMed 3143.1 1.356 53.29 68.41 85.56 ( 0.46) 3.48 84.15 ( 0.22) 2.07 Wiki 650.4 1.159 66.21 80.87 86.32 ( 0.28) 4.24 89.11 ( 0.23) 7.03 Yelp 2025.5 1.399 53.92 68.95 85.58 ( 0.26) 3.50 85.19 ( 0.38) 3.11 CRAFT None 69.17 ( 0.64) 69.17 ( 0.64) 1BWB 1328.1 2.073 59.07 62.98 73.97 ( 0.06) 4.80 71.23 ( 0.81) 2.06 MIMIC 2427.5 2.390 48.73 50.03 73.01 ( 0.22) 3.84 71.90 ( 0.26) 2.73 PubMed 360.3 1.838 76.29 80.69 75.45 ( 0.28) 6.28 75.45 ( 0.09) 6.28 Wiki 974.7 2.075 63.66 63.12 74.07 ( 0.40) 4.90 69.75 ( 0.09) 0.58 Yelp 2085.7 2.187 48.01 50.85 72.48 ( 0.13) 3.31 72.75 ( 0.26) 3.58 JNLPBA None 70.45 ( 0.21) 70.45 ( 0.21) 1BWB 1190.8 2.000 39.90 53.54 72.39 ( 0.23) 1.94 72.54 ( 0.34) 2.09 MIMIC 2533.4 2.172 36.95 50.04 73.24 ( 0.29) 2.79 71.76 ( 0.13) 1.31 PubMed 205.9 1.597 58.87 80.17 72.77 ( 0.65) 2.32 74.29 ( 0.40) 3.84 Wiki 717.9 2.036 42.34 53.05 72.77 ( 0.27) 2.32 72.42 ( 0.23) 1.97 Yelp 2134.4 2.155 30.78 41.41 72.53 ( 0.18) 2.08 72.51 ( 0.21) 2.06 ScienceIE None 26.85 ( 0.17) 26.85 ( 0.17) 1BWB 884.6 1.197 71.50 76.78 34.40 ( 0.50) 7.55 38.10 ( 0.31) 11.25 MIMIC 2706.7 1.461 54.29 59.34 31.23 ( 0.15) 4.38 35.27 ( 0.43) 8.42 PubMed 345.6 1.037 83.25 87.01 37.91 ( 0.12) 11.06 42.07 ( 0.03) 15.22 Wiki 684.2 1.127 76.99 78.01 36.15 ( 0.11) 9.30 40.39 ( 0.05) 13.54 Yelp 1562.2 1.347 62.32 66.42 33.92 ( 0.14) 7.07 36.05 ( 0.02) 9.20 WetLab None 76.91 ( 0.10) 76.91 ( 0.10) 1BWB 1526.0 2.167 59.67 61.47 78.66 ( 0.35) 1.75 78.94 ( 0.05) 2.26 MIMIC 3046.1 2.393 53.83 55.31 78.68 ( 0.14) 1.13 78.65 ( 0.13) 1.74 PubMed 1104.7 2.078 71.39 74.46 78.93 ( 0.28) 2.02 79.62 ( 0.07) 2.71 Wiki 1617.8 2.158 61.02 60.31 78.45 ( 0.20) 1.54 79.05 ( 0.21) 2.14 Yelp 1784.5 2.240 54.16 54.96 78.48 ( 0.15) 1.57 79.04 ( 0.19) 2.13 Table 3: Similarity between source and target data sets (left), and the effectiveness of word vectors and LMs pretrained using different sources for NER (right).",
"lar, while WVV gives an opposite answer.",
"Fleiss's kappa (Fleiss, 1971) (a variant of Cohen's kappa for more than two raters) is a robust metric used to measures inter-rater agreement, since it takes random chance into consideration.",
"We use it to measure the inter-method agreement between the 60 binary comparisons inferred using PPL, WVV and TVC.",
"Our results achieve a Fleiss's kappa of 0.733, which shows a high agreement between conclusions inferred using different measures.",
"Overall, we find that these similarity measures can reach high level of consensus.",
"To simplify our following discussion, from here on similar means low PPL (because of its clear distinction between different sources), unless otherwise stated.",
"After we quantify the similarity between source and target data sets, the next question is how these similarity measures can be used to predict the effectiveness of pretrained models for NER tasks.",
"Results in Table 3 show that, although all pretrained word vectors and LMs can improve the performance of the target model, the improvement varies in different target data sets.",
"In other words, no single source is suitable for all target NER data Word vectors LMs TVC 0.454 0.666 TVcC 0.469 0.739 PPL -0.398 -0.618 WVV -0.406 -0.747 Table 4: Correlation coefficients between similarity measures and the effectiveness of pretrained models.",
"sets.",
"Word vectors and LMs pretrained on a source similar to the target outperform the ones pretrained on other sources (except pretrained word vectors for JNLPBA data set).",
"We also observe that pretrained LMs provide more benefits than pretrained word vectors if source data is similar to the target (see 1BWB-CoNLL2003 and PubMed-JNLPBA data pairs).",
"However, if the source is dissimilar to the target, pretrained word vectors outperform pretrained LMs (see these pairs: MIMIC-CoNLL2003, PubMed-CoNLL2003, MIMIC-CRAFT).",
"Predictiveness of similarity measures To analyze how proposed similarity measures correlate to the effectiveness of pretrained word vectors and LMs for the target NER tasks, we employ the Pearson correlation analysis to find out the relationships between improvement due to pretrained models and TVC, TVcC, PPL and WVV.",
"The results in Table 4 show that our proposed similarity measures are predictive of the effectiveness of the pretraining data.",
"In terms of pretrained word vectors, VCcR is the most informative factor in predicting the effectiveness of pretrained word vectors given a target data set.",
"It implies that find-ing a source data set which has large vocabulary intersection with the target data set is a promising first step to generate effective pretrained word vectors.",
"The results regarding the LM performance show that it has a stronger correlation with similarity measures than the one of word vectors, thus more predictable using our proposed measures.",
"models Recent literature shows substantial improvements are sometimes possible when pretraining on very large generic corpora.",
"Given that pretrained models are freely available, is it even necessary to pretrain on similar data as proposed above?",
"We compare to publicly available (1) word vectors trained on 6 billion tokens of encyclopae-Word vectors LMs GloVe Ours ELMo Ours CADEC 70.30 70.27 71.91 70.46 CoNLL2003 90.25 86.36 91.34 89.78 CRAFT 74.22 75.45 75.77 75.45 JNLPBA 73.19 73.24 73.65 74.29 ScienceIE 37.10 37.91 41.15 42.07 WetLab 79.15 78.93 79.57 79.62 Table 5: Comparison between our best performance pretrained models and the publicly available ones, which are pretrained on much larger corpora.",
"dia articles and news stories about various fields 3 and (2) LMs trained on 5.5 billion tokens of encyclopaedia articles and news stories about various fields 4 .",
"We use the same experimental setup described in Section 6, that pretrained word vectors are used to initialize the weights of word embedding layer, whereas outputs of pretrained LMs are used as input features of the supervised model.",
"We find that word vectors and LMs pretrained on small similar sources can achieve competitive or even better performance than the ones pretrained on larger sources (Table 5).",
"On JNLPBA, ScienceIE and Wetlab, LMs pretrained on the small similar source perform better, while word vectors pretrained on the small similar source perform better on CRAFT, JNLPBA, and ScienceIE.",
"These results indicate that a small similar source reduces the computational cost without sacrificing the performance.",
"This is especially important in practice, because collecting data and pretraining models are expensive.",
"For example, a LM pretrained on 1 billion tokens takes three weeks to train on 32 GPUs (Jozefowicz et al., 2016).",
"Chiu et al. (2016) propose a hyper-parameter combination of skip-gram model that is empirically",
"3 https://nlp.stanford.edu/projects/glove/ 4 https://allennlp.org/elmo",
"identified on NER tasks.",
"They find that a narrow context window size can boost the performance since it can capture better word function rather than domain similarity.",
"We use their proposed hyper-parameter setting to train word vectors on different source data, and evaluate these pretrained word vectors on the ScienceIE and WetLab data sets.",
"The reason for hand-picking these two is that benefits of pretrained word vectors on these two sets vary with a large margin.",
"Our results suggest that this hyper-parameter setting can overall (except Wiki-ScienceIE and MIMIC-WetLab pairs) produce better performance compare to the default setting (Table 6).",
"Most importantly we observe that our observation that similar sources generate better pretrained models can still holds with these hyper-parameters: PubMed, which is the most similar source to both target data sets, still outperforms other sources.",
"To further investigate how source data size affects pretrained word vectors and LMs for NER tasks, we sample six PubMed subsets of different size.",
"For target data sets, we use CoNLL2003, to which PubMed is the most dissimilar source, and JNLPBA, to which PubMed is the most similar source.",
"We observe that 500 MB of pretraining data appears to be sufficient to calculate similarity, and capping factors out the impact of size (Figure 2).",
"As discussed, VCcR is the most influential factor affecting the usefulness of pretrained word vectors for NER task.",
"Increasing source data size may provide a larger vocabulary intersection with the target data set, but the resulting absolute F 1 score increase is less than 0.5, after the source data has been large enough.",
"We also observe that if source and target data are dissimilar (PubMed-CoNLL2003 pair), pretrained word vectors is a better option than pretrained LMs, no matter how large source data is.",
"However, pretrained LMs outperform pretrained word vectors, if source is similar to target (PubMed-JNLPBA pair).",
"We leave exploration of the combined effect of size and similarity to future work, but believe size should be considered separately, noting that results here suggest that similarity is more important.",
"and LMs that are building blocks of NER models.",
"We proposed using three measures, Target Vocabulary Covered, Language Model Perplexity, and Word Vector Variance, to measure different aspects of similarity between source and target data.",
"We investigated how these measures correlate with the effectiveness of pretrained word vectors and LMs for NER tasks.",
"We found that the effectiveness of pretrained word vectors strongly depends on whether the source data have a high vocabulary intersection with target data, while pretrained LMs can gain more benefits from a similar source.",
"While different NLP tasks may rely on different aspects of language, our study is a step towards systematically guiding researchers on their choice of data for pretraining.",
"As a future study, we will explore how these similarity measures predict performance of pretrained models in other NLP tasks.",
"We would like to thank Massimo Piccardi and Mark Dras for their constructive feedback.",
"The authors also thank the members of CSIRO Data61's Language and Social Computing (LASC) team for helpful discussions, as well as anonymous reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"objective",
"method",
"objective",
"objective",
"abstain",
"objective",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"objective",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"objective",
"objective",
"result",
"abstain",
"objective",
"other",
"other"
] |
[
"Sentiment analysis has attracted increasing attention in e-commerce.",
"The sentiment polarities underlying user reviews are of great value for business intelligence.",
"Aspect category sentiment analysis (ACSA) and review rating prediction (RP) are two essential tasks to detect the fine-to-coarse sentiment polarities.",
"ACSA and RP are highly correlated and usually employed jointly in real-world e-commerce scenarios.",
"While most public datasets are constructed for ACSA and RP separately, which may limit the further exploitations of both tasks.",
"To address the problem and advance related researches, we present a large-scale Chinese restaurant review dataset ASAP including 46 , 730 genuine reviews from a leading online-to-offline (O2O) e-commerce platform in China.",
"Besides a 5 -star scale rating, each review is manually annotated according to its sentiment polarities towards 18 pre-defined aspect categories.",
"We hope the release of the dataset could shed some light on the field of sentiment analysis.",
"Moreover, we propose an intuitive yet effective joint model for ACSA and RP.",
"Experimental results demonstrate that the joint model outperforms state-of-the-art baselines on both tasks.",
"With the rapid development of e-commerce, massive user reviews available on e-commerce platforms are becoming valuable resources for both customers and merchants.",
"Aspect-based sentiment analysis(ABSA) on user reviews is a fundamental and challenging task which attracts interests from both academia and industries (Hu and Liu, 2004; Ganu et al., 2009; Jo and Oh, 2011; Kiritchenko et al., 2014).",
"According to whether the aspect terms are explicitly mentioned in texts, ABSA can be further classified into aspect term sentiment analEqual contribution.",
"ysis (ATSA) and aspect category sentiment analysis (ACSA), we focus on the latter which is more widely used in industries.",
"Specifically, given a review Although the fish is delicious, the waiter is",
"horrible!, the ACSA task aims to infer the sentiment polarity over aspect category food is positive while the opinion over the aspect category service is negative.",
"The user interfaces of e-commerce platforms are more intelligent than ever before with the help of ACSA techniques.",
"For example, Figure 1 presents the detail page of a coffee shop on a popular e-commerce platform in China.",
"The upper aspect-based sentiment text-boxes display the aspect categories (e.g., food , sanitation ) mentioned frequently in user reviews and the aggregated sentiment polarities on these aspect categories (the orange ones represent positive and the blue ones represent neg-ative).",
"Customers can focus on corresponding reviews effectively by clicking the aspect-based sentiment text-boxes they care about (e.g., the orange filled text-box ( good sanitation )).",
"Our user survey based on 7 , 824 valid questionnaires demonstrates that 80 .",
"08% customers agree that the aspect-based sentiment text-boxes are helpful to their decision-making on restaurant choices.",
"Besides, the merchants can keep track of their cuisines and service qualities with the help of the aspect-based sentiment text-boxes.",
"Most Chinese e-commerce platforms such as Taobao 1 , Dianping 2 , and Koubei 3 deploy the similar user interfaces to improve user experience.",
"Users also publish their overall 5 -star scale ratings together with reviews.",
"Figure 1 displays a sample of 5 -star rating to the coffee shop.",
"In comparison to fine-grained aspect sentiment, the overall review rating is usually a coarse-grained synthesis of the opinions on multiple aspects.",
"Rating pre-1 https://www.taobao.com/ 2 https://www.dianping.com/ 3 https://www.koubei.com/ diction(RP) (Jin et al., 2016; Li et al., 2018; Wu et al., 2019a) which aims to predict the seeing stars of reviews also has wide applications.",
"For example, to promise the aspect-based sentiment text-boxes accurate, unreliable reviews should be removed before ACSA algorithms are performed.",
"Given a piece of user review, we can predict a rating for it based on the overall sentiment polarity underlying the text.",
"We assume the predicted rating of the review should be consistent with its ground-truth rating as long as the review is reliable.",
"If the predicted rating and the user rating of a review disagree with each other explicitly, the reliability of the review is doubtful.",
"Figure 2 demonstrates an example review of low-reliability.",
"In summary, RP can help merchants to detect unreliable reviews.",
"Therefore, both ACSA and RP are of great importance for business intelligence in e-commerce, and they are highly correlated and complementary.",
"ACSA focuses on predicting its underlying sentiment polarities on different aspect categories, while RP focuses on predicting the user's overall feelings from the review content.",
"We reckon these two tasks are highly correlated and better performance could be achieved by considering them jointly.",
"As far as we know, current public datasets are constructed for ACSA and RP separately, which limits further joint explorations of ACSA and RP.",
"To address the problem and advance the related researches, this paper presents a large-scale Chinese restaurant review dataset for A spect category S entiment A nalysis and rating P rediction, denotes as ASAP for short.",
"All the reviews in ASAP are collected from the aforementioned e-commerce platform.",
"There are 46 , 730 restaurant reviews attached with 5 -star scale ratings.",
"Each review is manually annotated according to its sentiment polarities towards 18 fine-grained aspect categories.",
"To the best of our knowledge, ASAP is the largest Chinese large-scale review dataset towards both ACSA and RP tasks.",
"We implement several state-of-the-art (SOTA) baselines for ACSA and RP and evaluate their performance on ASAP.",
"To make a fair comparison, we also perform ACSA experiments on a widely used SemEval-2014 restaurant review dataset (Pon-tiki et al., 2014).",
"Since BERT (Devlin et al., 2018) has achieved great success in several natural language understanding tasks including sentiment analysis (Xu et al., 2019; Sun et al., 2019; Jiang et al., 2019), we propose a joint model that employs the fine-to-coarse semantic capability of BERT.",
"Our joint model outperforms the competing baselines on both tasks.",
"Our main contributions can be summarized as follows.",
"(1) We present a large-scale Chinese review dataset towards aspect category sentiment analysis and rating prediction, named as ASAP, including as many as 46 , 730 real-world restaurant reviews annotated from 18 pre-defined aspect categories.",
"Our dataset has been released at https: //github.com/Meituan-Dianping/asap .",
"(2) We explore the performance of widely used models for ACSA and RP on ASAP.",
"(3) We propose a joint learning model for ACSA and RP tasks.",
"Our model achieves the best results both on ASAP and SemEval RESTAURANT datasets.",
"Aspect Category Sentiment Analysis.",
"ACSA (Zhou et al., 2015; Movahedi et al., 2019; Ruder et al., 2016; Hu et al., 2018) aims to predict sentiment polarities on all aspect categories mentioned in the text.",
"The series of SemEval datasets consisting of user reviews from e-commerce websites have been widely used and pushed forward related research (Wang et al., 2016; Ma et al., 2017; Xu et al., 2019; Sun et al., 2019; Jiang et al., 2019).",
"The SemEval-2014 task-4 dataset (SE-ABSA14) (Pontiki et al., 2014) is composed of laptop and restaurant reviews.",
"The restaurant subset includes 5 aspect categories (i.e., Food , Service , Price , Ambience and Anecdotes/Miscellaneous ) and 4 polarity labels (i.e., Positive , Negative , Conflict and Neutral ).",
"The laptop subset is not suitable for ACSA.",
"The SemEval-2015 task-12 dataset (SE-ABSA15) (Pontiki et al., 2015) builds upon SE-ABSA14 and defines its aspect category as a combination of an entity type and an attribute type(e.g., Food#Style Options ).",
"The SemEval-2016 task-5 dataset (SE-ABSA16) (Pontiki et al., 2016) extends SE-ABSA15 to new domains and new languages other than English.",
"MAMS (Jiang et al., 2019) tailors SE-ABSA14 to make it more challenging, in which each sentence contains at least two aspects with different sentiment polarities.",
"Compared with the prosperity of English resources, high-quality Chinese datasets are not rich enough.",
"ChnSentiCorp (Tan and Zhang, 2008), IT168TEST (Zagibalov and Carroll, 2008), Weibo 4 , CTB (Li et al., 2014) are 4 popular Chinese datasets for general sentiment analysis.",
"However, aspect category information is not annotated in these datasets.",
"Zhao et al. (2014) presents two Chinese ABSA datasets for consumer electronics (mobile phones and cameras).",
"Nevertheless, the two datasets only contain 400 documents ( 4000 sentences), in which each sentence only mentions one aspect category at most.",
"BDCI 5 automobile opinion mining and sentiment analysis dataset (Dai et al., 2019) contains 8 , 290 user reviews in automobile industry with 10 pre-defined categories.",
"Peng et al. (2017) summarizes available Chinese ABSA datasets.",
"While most of them are constructed through rule-based or machine learning-based approaches, which inevitably introduce additional noise into the datasets.",
"Our ASAP excels above Chinese datasets both on quantity and quality.",
"Rating Prediction.",
"Rating prediction (RP) aims to predict the seeing stars of reviews, which represent the overall ratings of reviews.",
"In comparison 4 http://tcci.ccf.org.cn/conference/ 2014/pages/page04_dg.html 5 https://www.datafountain.cn/ competitions/310 to fine-grained aspect sentiment, the overall review rating is usually a coarse-grained synthesis of the opinions on multiple aspects.",
"Ganu et al. (2009); Li et al. (2011); Chen et al. (2018) form this task as a text classification or regression problem.",
"Considering the importance of opinions on multiple aspects in reviews, recent years have seen numerous work (Jin et al., 2016; Cheng et al., 2018; Li et al., 2018; Wu et al., 2019a) utilizing the information of the aspects to improve the rating prediction performance.",
"This trending also inspires the motivation of ASAP.",
"Most RP datasets are crawled from real-world review websites and created for RP specifically.",
"Amazon Product Review English dataset (McAuley and Leskovec, 2013) containing product reviews and metadata from Amazon has been widely used for RP (Cheng et al., 2018; McAuley and Leskovec, 2013).",
"Another popular English dataset comes from Yelp Dataset Challenge 2017 6 , which includes reviews of local businesses in 12 metropolitan areas across 4 countries.",
"Openrice 7 is a Chinese RP dataset composed of 168 , 142 reviews.",
"Both the English and Chinese datasets don't annotate fine-grained aspect category sentiment polarities.",
"We collect reviews from one of the most popular O2O e-commerce platforms in China, which allows users to publish coarse-grained star ratings and writing fine-grained reviews to restaurants (or places of interest) they have visited.",
"In the reviews, users comment on multiple aspects either explicitly or implicitly, including ambience , price , food , service , and so on.",
"First, we retrieve a large volume of user reviews from popular restaurants holding more than 50 user reviews randomly.",
"Then, 4 pre-processing steps are performed to promise the ethics, quality, and reliability of the reviews.",
"(1) User information (e.g., user-ids, usernames, avatars, and post-times) are removed due to privacy considerations.",
"(2) Short reviews with less than 50 Chinese characters, as well as lengthy reviews with more than 1000 Chinese characters are filtered out.",
"(3) If the ratio of non-Chinese characters within a review is over 70 %, the review is discarded.",
"(4) To detect the low-6 http://www.yelp.com/dataset_ challenge/ 7 https://www.openrice.com quality reviews (e.g., advertising texts), we build a BERT-based classifier with an accuracy of 97% in a leave-out test-set.",
"The reviews detected as low-quality by the classifier are discarded too.",
"3.2 Aspect Categories Since the reviews already hold users' star ratings, this section mainly introduces our annotation details for ACSA.",
"In SE-ABSA14 restaurant dataset (denoted as RESTAURANT for simplicity), there are 5 coarse-grained aspect categories, including food , service , price , ambience and miscellaneous .",
"After an in-depth analysis of the collected reviews, we find the aspect categories mentioned by users are rather diverse and fine-grained.",
"Take the text ...The restaurant holds a high-end decoration but is quite noisy since a wedding ceremony was being held in the main hall... (... ...) in Table 3 for example, the reviewer actually expresses opposite sentiment polarities on two fine-grained aspect categories related to ambience .",
"The restaurant's decoration is very high-end ( Positive ), while it's very noisy due to an ongoing ceremony ( Negative ).",
"Therefore, we summarize the frequently mentioned aspects and refine the 5 coarse-grained categories into 18 fine-grained categories.",
"We replace miscellaneous with location since we find users usually review the restaurants' location (e.g., whether the restaurant is easy to reach by public transportation.).",
"We denote the aspect category as the form of Coarse-grained Category#Fine-grained Categoty , such as Food#Taste and Ambience#Decoration .",
"The full list of aspect categories and definitions are listed in Table 1.",
"Bearing in mind the pre-defined 18 aspects, assessors are asked to annotate sentiment polarities towards the mentioned aspect categories of each review.",
"Given a review, when an aspect category is mentioned within the review either explicitly and implicitly, the sentiment polarity over the aspect category is labeled as 1 ( Positive ), 0 ( Neutral ) or 1 ( Negative ) as shown in Table 3.",
"We hire 20 vendor assessors, 2 project managers, and 1 expert reviewer to perform annotations.",
"Each assessor needs to attend a training to ensure their intact understanding of the annotation guidelines.",
"Three rounds of annotation are conducted sequentially.",
"First, we randomly split the whole dataset into 10 groups, and every group is assigned to 2 assessors to annotate independently.",
"Second, each group is split into 2 subsets according to the annotation results, denoted as Sub-Agree and Sub-Disagree .",
"Sub-Agree comprises the data examples with agreement annotation, and Sub-Disagree comprises the data examples with disagreement annotation.",
"Sub-Agree will be reviewed by assessors from other groups.",
"The controversial examples during the review are considered as difficult cases.",
"Sub-Disagree will be reviewed by the 2 project managers independently and then discuss to reach an agreement annotation.",
"The examples that could not be addressed after discussions are also considered as difficult cases.",
"Third, for each group, the difficult examples from two subsets are delivered to the expert reviewer to make a final decision.",
"More details of difficult cases and annotation guidelines during annotation are demonstrated in Table 2.",
"Finally, ASAP corpus consists of 46 , 730 pieces of real-world user reviews, and we split it into a training set ( 36 , 850 ), a validation set ( 4 , 940 ) and a test set ( 4 , 940 ) randomly.",
"Table 3 presents an example review of ASAP and corresponding annotations on the 18 aspect categories.",
"Figure 3 presents the distribution of 18 aspect categories in ASAP.",
"Because ASAP concentrates on the domain of restaurant, 94 .",
"7 % reviews mention Food#Taste as expected.",
"Users also pay great attention to aspect categories such as Service#Hospitality , Price#Level and Ambi-ence#Decoration .",
"The distribution proves the advantages of ASAP, as users' fine-grained preferences could reflect the pros and cons of restaurants more precisely.",
"The statistics of ASAP are presented in Table",
"4. We also include a tailored SE-ABSA14 RESTAURANT dataset for reference.",
"Please note that we remove the reviews holding aspect categories with sentiment polarity of conflict from the original RESTAURANT dataset.",
"Compared with RESTAURANT , ASAP excels in the quantities of training instances, which supports the exploration of recent data-intensive deep neural models.",
"ASAP is a review-level dataset, while RESTAURANT is a sentence-level dataset.",
"The average length of reviews in ASAP is much longer, thus the reviews tend to contain richer aspect information.",
"In ASAP, the reviews contain Table 1: The full list of 18 aspect categories and definitions.",
"5 .",
"8 aspect categories in average, which is 4 .",
"7 times of RESTAURANT .",
"Both review-level ACSA and RP are more challenging than their sentence-level counterparts.",
"Take the review in Table 3 for example, the review contains several sentiment polarities towards multiple aspect categories.",
"In addition to aspect category sentiment annotations, ASAP also includes overall user ratings for reviews.",
"With the help of ASAP, ACSA and RP can be further optimized either separately or jointly.",
"We use D to denote the collection of user review corpus in the training data.",
"Given a review R which consists of a series of words: { w 1 , w 2 , ..., w Z } , ACSA aims to predict the sentiment polarity y i { P ositive, Neutral, Negative } of review R with respect to the mentioned aspect category a i , i { 1 , 2 , ..., N } .",
"Z denotes the length of review R .",
"N is the number of pre-defined aspect categories (i.e., 18 in this paper).",
"Suppose there are K mentioned aspect categories in R .",
"We de-fine a mask vector [ p 1 , p 2 , ..., p N ] to indicate the occurrence of aspect categories.",
"When the aspect category a i is mentioned in R , p i = 1 , otherwise p i = 0 .",
"So we have (cid:80) Ni =1 p i = K .",
"In terms of RP, it aims to predict the 5 -star rating score of g , which represents the overall rating of the given review R .",
"Given a user review, ACSA focuses on predicting its underlying sentiment polarities on different aspect categories, while RP focuses on predicting the user's overall feelings from the review content.",
"We reckon these two tasks are highly correlated and better performance could be achieved by considering them jointly.",
"of the pre-training and then fine-tuning paradigm for NLP tasks.",
"BERT-based models have achieved impressive results in ACSA (Xu et al., 2019; Sun et al., 2019; Jiang et al., 2019).",
"Review rating prediction can be deemed as a single-sentence classification (regression) task, which could also be addressed with BERT.",
"Therefore, we propose a joint learning model to address ACSA and RP in a multitask learning manner.",
"Our joint model employs the fine-to-coarse semantic representation capability of the BERT encoder.",
"Figure 4 illustrates the framework of our joint model.",
"ACSA As shown in Figure 4, the token embeddings of the input review are generated through a shared BERT encoder.",
"Briefly, let H R d Z be the matrix consisting of token embedding vectors { h 1 , ..., h Z } that BERT produces, where d is the size of hidden layers and Z is the length of the given review.",
"Since different aspect category information is dispersed across the content of R , we add an attention-pooling layer (Wang et al., 2016) to aggregate the related token embeddings dynamically for every aspect category.",
"The attention-pooling layer helps the model focus on the tokens most related to the target aspect categories.",
"Where W ai R d d , M ai R d Z , i R d , i RZ , W pi R d d , and r i R d .",
"i is a vector consisting of attention weights of all tokens which can selectively attend the regions of the aspect category related tokens, and r i is the attentive representation of review with respect to the i th aspect category a i , i { 1 , 2 , ..., N } .",
"Then we have y i = softmax ( W q i r i + b q i ) (4) Where W qi RC d and b qi RC are trainable parameters of the softmax layer.",
"C is the number of labels (i.e, 3 in our task).",
"Hence, the ACSA loss for a given review R is defined as follows, loss ACSA = 1 KN (cid:88) i =1 p i (cid:88) C y i log y i (5) If the aspect category a i is not mentioned in S , y i is set as a random value.",
"The p i serves as a gate function, which filters out the random y i and ensures only the mentioned aspect categories can participate in the calculation of the loss function.",
"Rating Prediction Since the objective of RP is to predict the review rating based on the review content, we adopt the [CLS] embedding h [ cls ] R d BERT produces as the representation of the input review, where d is the size of hidden layers in the BERT encoder.",
"g = T tanh( W r h [ cls ] + b r ) (6) Hence the RP loss for a given review R is defined as follows, loss RP = | g g | (7) Where W r R d d , b r R d , R d are trainable parameters.",
"The final loss of our joint model becomes as follows.",
"loss = loss ACSA + loss RP (8) 5 Experiments We perform an extensive set of experiments to evaluate the performance of our joint model on ASAP Table 4: The statistics and label/rating distribution of ASAP and RESTAURANT .",
"and RESTAURANT (Pontiki et al., 2014).",
"Ablation studies are also conducted to probe the interactive influence between ACSA and RP.",
"Baseline Models We implement several ACSA baselines for comparison.",
"According to the different structures of their encoders, these models are classified into Non-BERT based models or BERT-based models.",
"Non-BERT based models include TextCNN (Kim, 2014), BiLSTM+Attn (Zhou et al., 2016), ATAE-LSTM (Wang et al., 2016) and CapsNet (Sabour et al., 2017).",
"BERT-based models include vanilla BERT (Devlin et al., 2018), QA-BERT (Sun et al., 2019) and CapsNet-BERT (Jiang et al., 2019).",
"Implementation Details of Experimental Models In terms of non-BERT-based models, we initialize their inputs with pre-trained embeddings.",
"For Chinese ASAP, we utilize Jieba 8 to segment Chinese texts and adopt Tencent Chinese word embeddings (Song et al., 2018) composed of 8 , 000 , 000 words.",
"For English RESTAURANT , we adopt 300 -dimensional word embeddings pre-trained by Glove (Pennington et al., 2014).",
"12 -layer Google BERT Base 9 to encode the inputs.",
"The batch sizes are set as 32 and 16 for non-BERT-based models and BERT-based models respectively.",
"Adam optimizer (Kingma and Ba, 2014) is employed with 1 = 0 .",
"9 and 2 = 0 .",
"999 .",
"The maximum sequence length is set as 512 .",
"The number of epochs is set as 3 .",
"The learning rates are set as 0 .",
"001 and 0 .",
"00005 for non-BERT-based models and BERT-based models respectively.",
"All the models are trained on a single NVIDIA Tesla 32 G V 100 Volta GPU.",
"Evaluation Metrics Following the settings of RESTAURANT , we adopt Macro-F1 and Accuracy 8 https://github.com/fxsjy/jieba 9 https://github.com/google-research/ bert (Acc) as evaluation metrics.",
"Experimental Results & Analysis We report the performance of aforementioned models on ASAP and RESTAURANT in Table",
"5. Generally, BERT-based models outperform Non-BERT based models on both datasets.",
"The two variants of our joint model perform better than vanilla-BERT, QA-BERT, and CapsNet-BERT, which proves the advantages of our joint learning model.",
"Given a user review, vanilla-BERT, QA-BERT, and CapsNet-BERT treat the pre-defined aspect categories independently, while our joint model combines them together with a multi-task learning framework.",
"On one hand, the encoder-sharing setting enables knowledge transferring among different aspect categories.",
"On the other hand, our joint model is more efficient than other competitors, especially when the number of aspect categories is large.",
"The ablation of RP (i.e., joint model(w/o RP)) still outperforms all other baselines.",
"The introduction of RP to ACSA brings marginal improvement.",
"This is reasonable considering that the essential objective of RP is to estimate the overall sentiment polarity instead of fine-grained sentiment polarities.",
"We visualize the attention weights produced by our joint model on the example of Table 3 in Figure",
"5. Since different aspect category information is dispersed across the review of R , we add an attention-pooling layer (Wang et al., 2016) to aggregate the related token embeddings dynamically for every aspect category.",
"The attention-pooling layer helps the model focus on the tokens most related to the target aspect categories.",
"Figure 5 visualizes attention weights of 3 given aspect categories.",
"The intensity of the color represents the magnitude of attention weight, which means the relatedness of tokens to the given aspect category.",
"It's obvious that our joint model focus on the tokens most related to the aspect categories across the review of R .",
"We compare several RP models on ASAP, including TextCNN (Kim, 2014), BiLSTM+Attn (Zhou et al., 2016) and ARP (Wu et al., 2019b).",
"The data pre-processing and implementation details are identical with ACSA experiments.",
"Evaluation Metrics.",
"We adopt Mean Absolute Error (MAE) and Accuracy (by mapping the predicted rating score to the nearest category) as evaluation metrics.",
"Experimental Results & Analysis The experimental results of comparative RP models are illustrated in Table",
"6. Table 6: Experimental results of RP models on ASAP.",
"Our joint model which combines ACSA and RP outperforms other models considerably.",
"On one hand, the performance improvement is expected since our joint model is built upon BERT.",
"On the other hand, the ablation of ACSA (i.e., joint model(w/o ACSA)) brings performance degradation of RP on both metrics.",
"We can conclude that the fine-grained aspect category sentiment prediction of the review indeed helps the model predict its overall rating more accurately.",
"This section conducts preliminary experiments to evaluate classical ACSA and RP models on our proposed ASAP dataset.",
"We believe there still exists much room for improvements to both tasks, and we will leave them for future work.",
"This paper presents ASAP , a large-scale Chinese restaurant review dataset towards aspect category sentiment analysis (ACSA) and rating prediction (RP).",
"ASAP consists of 46 , 730 restaurant user reviews with star ratings from a leading e-commerce platform in China.",
"Each review is manually annotated according to its sentiment polarities on 18 fine-grained aspect categories.",
"Besides evaluations of ACSA and RP models on ASAP separately, we also propose a joint model to address ACSA and RP synthetically, which outperforms other state-of-the-art baselines considerably.",
"we hope the release of ASAP could push forward related researches and applications."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"objective",
"method",
"other",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain"
] |
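The joint ACSA+RP objective described in the preceding record (Eqs. 4-8) pairs one softmax classifier per aspect category with a rating-regression head on BERT's [CLS] embedding. Below is a minimal PyTorch sketch of those heads and the combined loss. The shapes (K aspect categories, C = 3 sentiment labels, hidden size d) follow the text; the module and function names, and the assumption that the aspect representations r_i arrive already attention-pooled, are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointHead(nn.Module):
    """Sketch of the joint ACSA + RP heads on top of a BERT encoder."""

    def __init__(self, d: int, K: int, C: int = 3):
        super().__init__()
        # Per-aspect classifiers: y_i = softmax(W_qi r_i + b_qi)  (Eq. 4)
        self.acsa_heads = nn.ModuleList([nn.Linear(d, C) for _ in range(K)])
        # Rating head: g = gamma^T tanh(W_r h_[cls] + b_r)        (Eq. 6)
        self.w_r = nn.Linear(d, d)
        self.gamma = nn.Linear(d, 1, bias=False)

    def forward(self, aspect_reprs, h_cls):
        # aspect_reprs: (batch, K, d) attention-pooled representations r_i
        # h_cls:        (batch, d)    BERT [CLS] embedding of the review
        logits = torch.stack(
            [head(aspect_reprs[:, i]) for i, head in enumerate(self.acsa_heads)],
            dim=1,
        )  # (batch, K, C)
        g = self.gamma(torch.tanh(self.w_r(h_cls))).squeeze(-1)  # (batch,)
        return logits, g

def joint_loss(logits, g_pred, y_gold, p_mask, g_gold):
    # p_mask is (batch, K), 1.0 for aspect categories mentioned in the
    # review and 0.0 otherwise, mirroring the gate p_i of Eq. 5 that
    # keeps randomly-filled gold labels out of the loss.
    batch, K, C = logits.shape
    ce = F.cross_entropy(
        logits.reshape(-1, C), y_gold.reshape(-1), reduction="none"
    ).reshape(batch, K)
    loss_acsa = (p_mask * ce).sum() / (K * batch)   # Eq. 5
    loss_rp = (g_pred - g_gold).abs().mean()        # MAE, Eq. 7
    return loss_acsa + loss_rp                      # Eq. 8
```

A training step would encode the review with BERT, pool per-aspect representations with the attention layer the record describes, and backpropagate the summed loss, so that gold labels for unmentioned aspects never influence the gradient.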
[
"We present DART , an open domain structured DA taR ecord-toT ext generation dataset with over 82k instances (DARTs).",
"Data-to-text annotations can be a costly process, especially when dealing with tables which are the major source of structured data and contain nontrivial structures.",
"To this end, we propose a procedure of extracting semantic triples from tables that encodes their structures by exploiting the semantic dependencies among table headers and the table title.",
"Our dataset construction framework effectively merged heterogeneous sources from open domain semantic parsing and spoken dialogue systems by utilizing techniques including tree ontology annotation, question-answer pair to declarative sentence conversion and predicate unification, all with minimum post-editing.",
"We present systematic evaluation on DART as well as new state-of-the-art results on WebNLG 2017 to show that DART (1) poses new challenges to existing data-to-text datasets and (2) facilitates out-of-domain generalization.",
"Our data and code can be found at https://github.",
"com/Yale-LILY/dart .",
"Automatically generating textual descriptions from structured data improves the accessibility of knowledge bases to lay users.",
"Such applications include explaining data records to non-experts (Cawsey et al., 1997), writing sports news (Chen and Mooney, 2008), summarizing information in multiple documents (Fan et al., 2019), and generating dialogue responses (Wen et al., 2015).",
"While significant progress has been made in this field, there are still several issues with existing Data-to-Text datasets.",
"First, they adopt a flat ontology structure of the data, such as slot-value pairs for data records (Lebret et al., 2016; Novikova et al., 2017b) or flat schema for tables (Wiseman et al., Now at Facebook AI. 2017; Chen et al., 2020a; Parikh et al., 2020).",
"This flat structure is not powerful enough to encode rich semantic relationships in the ontology of the structured data, especially tables, whose representation can be further improved with these semantic knowledge.",
"Second, some of the datasets only focus on a small number of domains or knowledge graphs, therefore providing limited number of predicates and data ontologies.",
"For example, E2E (Novikova et al., 2017b) on restaurants and WebNLG (Gar-dent et al., 2017) on 15 categories from DBPedia.",
"Furthermore, some of them only have loose alignments between data input and sentence due to the nature of the task (Wiseman et al., 2017) and the automatic generation procedure (Vougiouklis et al., 2018; Elsahar et al., 2018).",
"To address some of these issues and to encourage further research in natural language generation from structured data, we introduce DART , a large and open-domain structured DA taR ecord-toT ext generation corpus.",
"The goal of DART is to har-vest the diverse predicates occurred in Wikipedia tables, which is significantly richer than those defined in the domain specific ontologies E2E and WebNLG were built on (Table 2).",
"We also introduce a novel tree ontology annotation approach on tables, which converts a flat table schema into a tree structured semantic frame.",
"The tree ontology reflects the core and auxiliary relations in the table schema, and naturally occurs across many domains.",
"As a result, DART provides high-quality sentence annotations to tree structured semantic frames extracted from various data sources, including WikiSQL (Zhong et al., 2017) and WikiTableQuestions (Pasupat and Liang, 2015), two open-domain question answering datasets, as well as E2E (Novikova et al., 2017b) and WebNLG (Gardent et al., 2017) (Figure 1).",
"We evaluated several state-of-the-art data-to-text models on DART, and found that while these models achieve impressive performance on domain-specific datasets, their performance suffers on DART due to its open-domain nature and richer semantic structures.",
"Our contributions are as follows.",
"(1) We present a large and open-domain corpus for structured data record to text generation, annotated with tree ontologies converted from the table.",
"This hierarchical input differentiates our corpus from existing data-to-text corpora.",
"(2) We benchmark several state-of-the-art data-to-text models to show that DART introduces new generalization challenges.",
"(3) We demonstrate that using DART for data augmentation improves the performance of existing models on the WebNLG 2017 dataset.",
"We expect the results to generalize to other data-to-text datasets given the open-domain nature of DART.",
"As shown in Figure 1, DART is constructed from three different sources: (1) human annotation on Wikipedia tables from two table semantic parsing and question answering datasets WikiSQL and WikiTableQuestions ( 2.1), (2) automatic conversion of questions in WikiSQL to declarative sentences ( 2.2), and (3) incorporation of existing datasets including WebNLG 2017 and Cleaned E2E ( 2.3).",
"After collecting the (cid:104) triple-set, sentence (cid:105) pairs from various data sources, we manually canonicalized the predicates and show that DART covers a broad range of topics ( 2.4).",
"Finally, we discuss the data split in 2.5.",
"Tables are a major source of structured data that contain a wealth of information complementary to text and knowledge graphs.",
"We aim to collect (cid:104) triple-set, sentence (cid:105) pairs from open-domain Wikipedia tables.",
"However, table schema are flat, making them not directly usable for building subject-predicate-object triples to capture rich relationships in the data.",
"As shown in Figure 2, we propose a two-stage annotation process that involves two groups of annotators: internal annotators and Amazon Mechanical Turk 1 workers.",
"In the first stage, skilled internal annotators specify the parent of every column header to construct a tree-structured ontology for each table.",
"In the second stage, both internal and external annotators provide a sentential description of the 1 https://www.mturk.com/ highlighted cells in a row that are automatically-chosen based on the ontology.",
"Tree Ontology Annotation For each column in a given table, our internal annotators labeled its ontological parent.",
"In Figure 2, for example, the annotator would provide the sequence { NULL , TEAM , STADIUM , STADIUM , TEAM } as the parent of each column column TEAM has no parent, STADIUM has parent TEAM , and so on.",
"In many cases, the relationship between a parent column and its child column can be conceptualized as a \"has-a\" relationship.",
"For tables that are malformed or have duplicate or missing column names (as shown in Figure 5 of the Appendix), annotators either changed or added appropriate column names in order to fit these patterns.",
"For each table we generate an ontology tree whose root is always [TABLECONTEXT].",
"This root node either has (1) one child node [TI-TLE] in the cases where the table title is the subject of entire table, or (2) column header node(s) and a [TITLE] node as children, as shown in Figure",
"2. This is because in some tables, the table title itself is more appropriate to be the root of the ontology tree (example shown in Figure 6 of the Appendix).",
"In these cases, annotators assigned the special token [TITLE] as the parent of the relevant column nodes.",
"For other tables, title usually provides important context for understanding the table's rows (example shown in Figure 7 of the Appendix).",
"In such cases, [TITLE] is made a child of [TABLE-CONTEXT] together with the column headers that are appropriate.",
"We evaluate the quality of the initial tree ontology annotation and made corrections with the following procedure: (1) reject and request corrections from the original annotators if the provided ontology is disconnected or contains a cycle, (2) verify that all column headers appear as a node in the tree.",
"For many tables, the determination of an ontology is a subjective process with many \"cor-rect\" answers for example, swapping the positions of TEAM and CITY in the tree in Figure 2 produces an equally valid ontology for the referenced table.",
"If there are multiple ways to construct an ontology based on annotators' decisions of attribute relationships among column headers, we manually unify the annotations for similar tables (for examples, tables about athletes in different sports).",
"The ontologies exhibit a great deal of structural variety.",
"Relevant statistics are summarized in Table 7 and Figure 3 of the Appendix.",
"Connected Component Extraction After we annotated the ontology, we automatically choose a subset of cells for a selected table row to form the triple set.",
"Randomly selecting cells leads to poor quality annotation as the selected data could lack a subject, lack cohesion, or would require information not encoded in the ontology to form a coherent sentence.",
"For example, in Figure 2, if only two nodes CITY and CAPACITY were highlighted then a coherent sentence cannot be produced as there is no direct logical relationship (functional dependency) between them.",
"To solve these issues, instead of randomly selecting cells in a row, we extract connected components from the ontology.",
"The extracted components have two controllable properties: size and shape.",
"To create variation in size, we randomly sampled between [2 , 5] .",
"The shape is determined by two numbers: the number of sibling node pairs and parent-child node pairs.",
"Increasing the number of sibling node pairs creates a wider tree, while increasing the latter creates a deeper tree.",
"We created a sliding scale between width and depth using an expansion parameter, p .",
"We recursively visit a node if it has children with probability p and otherwise move to a sibling if it exists.",
"If p = 1, the search becomes a DFS and if p = 0, it becomes BFS.",
"We found that randomly selecting p from 0.5 to 0.7 created a reasonable variation in extracted component shapes.",
"This ensures the balance between breadth and depth of ontology coverage of the selected cells, therefore ensuring the quality of the sentence annotation.",
"were asked to write a description of the highlighted cells.",
"We encouraged the annotators to use diverse vocabulary and syntactic structures.",
"To ensure quality, internal annotators reviewed every crowd sourced sentence for correctness.",
"They either rewrote or discarded the sentences that were nonsensical or incorrect.",
"In some cases, they also changed cell highlighting patterns to match the sentence provided.",
"Build Tripleset-Sentence Pairs Finally, we convert the highlighted cells to triplesets.",
"For a row R , we start with the table's column ontology T .",
"We first place the cell values in R in their corresponding slots in T , e.g. in Figure 2 we fill TEAM with \"Amsterdam Admirals\".",
"We then check that the nodes of T corresponding to the highlighted cells in R form a connected subtree.",
"If not, we walk up the tree and highlight each traversed node up until the lowest common ancestor of the highlighted nodes (inclusive) to form a connected subtree.",
"For each node N in the tree except the root node, we can extract the triple ( parent ( N ) , title ( N ) , N ).",
"For example, since STADIUM is highlighted in Figure 2, we extract the triple (Amsterdam Admirals, STADIUM , Olympisch Stadion).",
"A small number of triple-sets contained more than 10 triples.",
"We discarded these because their associated surface realizations were of poor quality.",
"The numbers of tripleset-sentence pairs annotated by different annotators are shown in Table",
"2. 2.2 Automatically Converting Questions to Declarative Sentences High quality natural language questions in open domain semantic parsing datasets such as WikiSQL and QA2D techniques found in automatically constructing NLI datasets (Demszky et al., 2018) present themselves as an attractive opportunity to semi-automatically construct an abundance of declarative sentences and align to table cells.",
"We leveraged rule-based QA2D technique 2 together with manual screening to combine WikiSQL questions and SQL-retrieved-answers into declarative sentences and manually filtered out bad sentences.",
"We only execute SQL queries without aggregate commands 3 to retrieve answers corresponding to questions answerable by single rows.",
"An example of such conversion is as follows: 2 We use the rule-based model from https://github.",
"com/kelvinguu/qanli (Demszky et al., 2018).",
"The neural model code is not released.",
"3 MAX, MIN, COUNT, SUM, AVG, JOIN, INTERSECT, UNION, GROUP BY, ORDER BY.",
"Question: In which year did Greece hold its last Summer Olympics?",
"Answer: 2004 Declarative Sentence: Greece held its last Summer Olympics in 2004.",
"Alignment with table cells is done at two stages.",
"We first align sentences with corresponding rows by changing SQL commands to SELECT * and use string matching to obtain columns and column headers relevant to the answer and WHERE condition.",
"After manually filtering out bad sentences, bad alignments, or tables without ontology annotations, we were able to get 4,204 sentences.",
"Finally, the corresponding table cells are then converted into triples in the same way as we described in Section 2.1.",
"Since they provide a large amount of strictly aligned data-text pairs with high quality sentences, we incorporate the following existing datasets in the same (cid:104) triple-set, sentence (cid:105) pair format with some modifications.",
"WebNLG 2017 An instance of the WebNLG dataset contains a set of triples extracted from DBpedia and the target text written by human.",
"We include the WebNLG 2017 dataset 4 consisting of 27731 triple-set sentence pairs with up to 7 RDF triples in a triple set covering 15 domains.",
"Cleaned E2E The original E2E dataset includes dialogue act meaning representations (MR) and natural language references in the restaurant domain.",
"Later, Duek et al. (2019) provide Cleaned E2E 5 by automatically fixing the dialogue acts to account for omissions and hallucinations in the text.",
"We incorporate Cleaned E2E because of its strict alignment between the meaning representation and the text.",
"To convert the MR to a triple-set, we take the NAME slot (present in almost all the MRs) as the subject.",
"For example, the MR ( NAME [ALIMENTUM ], AREA [ CITY CENTRE ], FAMILYFRIENDLY [ NO ]) is converted to the 4 https://gitlab.com/shimorina/ webnlg-dataset/-/tree/master/webnlg_challenge_2017 5 https://github.com/tuetschek/ e2e-cleaning triple-set {(A LIMENTUM , AREA , CITY CENTRE ), (ALIMENTUM , FAMILYFRIENDLY , NO )}.",
"We drop MRs which do not contain the NAME slot.",
"We canonicalized the predicates in our triple sets such that those of the same meaning are also represented the same.",
"We manually constructed a predicate mapping table to achieve this.",
"As an example, our predicate mapping maps \"Hometown,\" \"Home Town,\" and \"Home Town/City\" to the unified predicate \"HOMETOWN.\"",
"After unifying predicates, we evaluated the diversity of DART by counting the number of unique predicates in its partitions.",
"As shown in Table 2, we see that the Wikipedia partition of DART contains much more unique predicates than the WebNLG and Cleaned E2E partitions combined, despite having smaller number of (cid:104) triple-set, sentence (cid:105) pairs.",
"This contributes significantly to the domain diversity of DART.",
"In addition, we can see that DART exhibits a great deal of topical variety in terms of number of unique triples and vocabulary size.",
"2.5 Dataset Split For WebNLG 2017 and Cleaned E2E, we use their original data splits.",
"For our annotation on WikiTableQuestions and WikiSQL, random splitting will make train, dev, and test splits contain similar tables and similar (cid:104) triple-set, sentence (cid:105) examples.",
"Therefore, to increase the generalization challenge, we compare the table title and the table header to find similar tables, and make sure the model is evaluated on test split tables that are least similar to those used for training.",
"We first sample some tables as a seed test set, and then compute Jaccard similarity 6 with remaining tables based on the titles and the headers.",
"If a table has a Jaccard similarity greater than 0.5 with any of the tables in the test set, we add it into the test set.",
"A similar process is repeated to create the dev set, and the remaining tables form the training set.",
"This results in 62,659/6,980/12,552 sentences in the train/dev/test sets, respectively.",
"We conduct experiments on DART and the WebNLG 2017 dataset, with an ablation study on",
"We investigate several state-of-the-art Data-to-Text generation models.",
"We report results of the following models on DART-testset: (1) Bidirectional-LSTM with attention, for which we use 2-layer bi-LSTM for encoder, with 300 dimensional word embeddings (without using pretrained word vec-tors), 512 hidden units and 0.3 dropout rate for the decoder.",
"(2) Transformer (Vaswani et al., 2017), previously used by Castro Ferreira et al. (2019) on the WebNLG dataset.",
"The input is formed by linearizing the unordered triple set.",
"(3) BART (Lewis et al., 2020), for which we report results of both BART-base and BART-large.",
"(4) T5 (Raffel et al., 2020): we add the same prefix \"translate Graph to English:\" to the input, as it is used in Ribeiro et al. (2020).",
"We report results of T5-small, T5-base and T5-large models.",
"For both BART and T5 models, we use implementations of Ribeiro et al. (2020), with same hyperparameter setting.",
"We use a variety of automatic metrics and human evaluation (Section 4) to evaluate the quality of the generated text.",
"We report BLEU, METEOR, and TER which are used in the official WebNLG challenge.",
"However, these measures have limitations in considering the semantic meanings of words or phrases (Novikova et al., 2017a), therefore we also report MoverScore (Zhao et al., 2019), BERTScore (Zhang et al., 2020), and BLEURT (Sellam et al., 2020) that incorporate semantics rather than surface forms using contextual embeddings.",
"Furthermore, we include PARENT (Dhingra et al., 2019) which explicitly aligns n-grams from the reference and generated text to the data contents.",
"DART Our experimental results on DART are summarized in Table",
"3. The T5-large model has the highest performance among all models with a BLEU score of 50.66.",
"We attribute this to T5's generalization and transfer learning ability due to pretraining on multi-tasks.",
"We can see that in general, pretrained models outperform others by a large margin, and increasing the model size seems to further boost the performance on DART.",
"However, language models such as BART and T5 are pretrained by reconstructing text and, as a result, we found that their output on DART often contains hallucinated words (Parikh et al., 2020; Harkous et al., 2020; Reiter, 2020), as shown in Figure 11.",
"In addition, while the pretrained model shows better text generation quality due to its generalization ability from pretraining, it does not fully capture the hierarchical ontology nature of the triple sets in their linearized input, therefore making DART more challenging.",
"We suspect that models that are better at exploiting the ontology structure preserved in the input tripleset will achieve better performance on DART.",
"WebNLG Furthermore, we investigate if DART can improve pretrained models' performance on other Data-to-Text generation tasks.",
"To this end, we finetune the baseline transformer model, BART-[base, large] and T5-[small, base, large] on the WebNLG 2017 dataset, and augment the training by adding instances in the DART training set.",
"The experimental results can be found in Table 4.",
"We report performances of some competitive models that are not pretrained, as well as the state-of-the-art performances of pretrained models on the WebNLG 2017 dataset by Ribeiro et al. (2020).",
"On the bottom panel, we include results of experiments augmented with DART instances whose triplesets are generated with table ontology annotation, paired with human written sentences.",
"We are able to achieve new state-of-the-art results on all WebNLG 2017 test set splits (seen, unseen and all) by finetuning T5-large on DART.",
"We observe that using DART for data augmentation consistently improves the performance across all models, including the baseline transformer model that is not pretrained.",
"Furthermore, we observe that more improvement is shown on unseen split of the test set, due to DART's open-domain nature.",
"See Figure 12 of the Appendix for example model outputs aligned with their human references.",
"We also conduct an ablation study on the WebNLG dataset to investigate what part of DART contributes most to improving the Data-to-Text tasks in general.",
"We report results of the study in Table 6 of the Appendix.",
"We divide DART into 4 partitions, where declarative sentence (auto-generated) partition and human annotated sentence partition contain instances whose triplesets are extracted from Wikipedia tables based on ontology.",
"E2E partition contains instances converted from the E2E BLEU METEOR TER MoverScore BERTScore(F1) BLEURT PARENT LSTM with Attention 29.66 0.27 0.63 0.31 0.90 -0.13 0.35 End-to-End Transformer 27.24 0.25 0.65 0.25 0.89 -0.29 0.28 BART-base 47.11 0.38 0.46 0.51 0.95 0.37 0.55 BART-large 48.56 0.39 0.45 0.52 0.95 0.41 0.57 T5-small 47.69 0.39 0.46 0.52 0.95 0.40 0.56 T5-base 49.21 0.40 0.44 0.53 0.95 0.43 0.57 T5-large 50.66 0.40 0.43 0.54 0.95 0.44 0.58 Table 3: Model results on the test set of DART : Higher is better.",
"dataset, and WebNLG partition keeps the original data format.",
"In general, we observe that adding DART instances that contain human written sentences brings most improvement, especially on unseen split.",
"While adding E2E partition boosts the scores on seen test split and deteriorates the performance on unseen test split.",
"This trend is consistent across all models.",
"Comparing results of declarative sentence partition and human written sentence partition, we see that for most of the models, DART instances with human written sentences have better quality as it brings more improvement to the task.",
"In Table 5, we perform human evaluation on DART based on two criteria: (1) fluency if a sentence is natural and grammatical, and (2) semantic faithfulness if a sentence is supported by the input triples.",
"We defined three levels of fluency: fluent, mostly fluent, and not fluent, and the same for semantic faithfulness.",
"We ask 5 internal annotators to evaluate on 100 triplesets sampled from declarative sentence partition and another 100 triplesets sampled from human written sentence partition.",
"Each tripleset is paired with 3 sentences, one of them is the reference sentence, and the other two are outputs of BART-base and T5-base models.",
"The results in Table 5 attest to the high quality of our annotations since the human written references achieve highest fluency and faithfulness comparing to outputs of two strong baseline models.",
"The evaluation on faithfulness also demonstrates that there is a considerable gap between the DART reference and the outputs of the state-of-the-art pretrained model, showing that there is a large room for improvement.",
"We also noticed that the auto-generated declarative sentences are not as fluent or faithful as the model outputs because they are generated with a rule-based system.",
"However, we decided to release this partition, along with other partitions of DART because it demonstrates an economic way to obtain large amounts of DART instances and it also shows benefits for generalization due to the diverse topics it contains.",
"Data-to-Text Data-to-Text generation aims to produce natural language output from structured input.",
"Applications include generating sports commentaries (Chen and Mooney, 2008; Wiseman et al., 2017), weather forecasts (Liang et al., 2009; Konstas and Lapata, 2012), biographical texts (Le-bret et al., 2016; Liu et al., 2018), knowledge-base descriptions (Gardent et al., 2017), dialogue response generation (Wen et al., 2015, 2016), and commonsense reasoning (Lin et al., 2020).",
"Yet, most existing datasets are restricted to specific domains and applications.",
"In contrast, a major source of DART is from Wikipedia tables covering various domains and topics.",
"Representation of Data The input of the Data-to-Text datasets take different formats, including slot-value pairs, Abstract Meaning Representation (AMR) (Song et al., 2017; Ribeiro et al., 2019), Minimal Recursion Semantics (MRS) (Ha-jdik et al., 2019), Resource Description Framework (RDF triples) (Gardent et al., 2017), and logic forms (Chen et al., 2020b).",
"There are also studies of converting tabular data to RDF triples in the Semantic Web community (Kellogg et al., 2015).",
"Recently, some open-domain table-to-text datasets have been proposed including WikiTableText (Bao et al., 2018), LogicNLP (Chen et al., 2020a), and ToTTo (Parikh et al., 2020), whose inputs are rows or entire tables.",
"In ToTTo, highlighted cells are also provided as input, and the authors found using only highlighted cells with flat row and column headers led to higher performance than using the entire table.",
"In contrast, DART is constructed by first annotating the tree-structured table ontology that encodes the semantic dependencies among table headers, and we could flexibly incorporate additional contexts such as the table title to the ontology tree.",
"We then use an automatic procedure to extract connected components from the tree to form the input of a DART instance.",
"Our annotation framework not only provides a flexible way of incorporating any contexts to the representation of tables, but also encodes hierarchical relationships among table headers and contexts, ensuring the extracted triples are logically consistent and can be described in text without loss of information.",
"Model Traditional Data-to-Text models break the generation progress into different stages such as signal analysis, data interpretation, document planning, microplanning, and realization (Reiter and Dale, 2000; Reiter, 2007).",
"Recently, neural encoder-decoder models based on attention and copy mechanisms have shown promising results (Gehrmann et al., 2018; Puduppully et al., 2018, 2019; Castro Ferreira et al., 2019).",
"Furthermore, recent progress on pretrained models such as GPT-2 (Radford et al., 2018), BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) has shown effective results for text generation tasks on machine translation, summarization, and conversation response generation.",
"Chen et al. (2020c); Peng et al. (2020); Kale (2020) also finetune pretrained models on Data-to-Text tasks.",
"In this paper, we introduce DART, an open-domain corpus for structured data record to text generation.",
"DART's ontology-preserving representation of data inputs differentiates itself from other open-domain Data-to-Text corpora.",
"We found that DART introduces new challenges to several state-of-the-art Data-to-Text models due to its open-domain nature and its ontology structure of the semantic triple input.",
"Furthermore, we found that using it for data augmentation improves other Data-to-Text tasks.",
"For future work, we will explore more controlled, high-fidelity generation that better incorporates the ontology hierarchy of data.",
"Our dataset is constructed by accumulating and processing resources from various existing datasets that are open to the public.",
"In addition, we collect annotations on structure of tabular data and human written sentences that describe data records.",
"The existing resources that we utilize mainly consist of (1) tabular data from Wikipedia, (2) information of restaurants presented with dialogue-act meaning representation and its textual description (E2E), and (3) information of various entities and their relationship that are in 15 different categories of DBPedia, which is a knowledge base built on contents created in various Wikimedia projects (WebNLG).",
"It is possible that there are biases in these resources, either in the tabular data or the textual description written by humans.",
"For additional annotations we collected, we have two groups of annotators participating: internal annotators who are the authors of this work, and external annotators recruited from the Amazon Mechanical Turk platform.",
"On MTurk, we use a pay rate of $15 per hour approximately based on our estimation of the time it takes to complete our annotation tasks.",
"In total, it took 125 hours to complete all tasks on the Amazon Mechanical Turk platform.",
"There are three annotation tasks: (1) Annotators are asked to specify ontological structure of the table by indicating relationship between table column headers, (2) Annotators are asked to write descriptions that are fluent and semantically faithful to the data records presented to them, and (3) Annotators are asked to evaluate sentences that are either references or model generated outputs.",
"We acknowledge that it is also possible to have biases in the sentences written by the annotators, or in the data records that are presented to them.",
"We conducted experiments on our own dataset and the WebNLG dataset using BART and T5, two large-scale pretrained models.",
"Both models are trained on large amounts of textual data such as news, books, and web text, which may contain any kinds of biases.",
"As a result, it is possible to insert those biases into the models.",
"In total, we conducted 43 experiments: 7 on DART and 36 for our ablation study on the WebNLG dataset.",
"We use a single NVIDIA V100 GPU for all experiments and each experiment took from 5 to 40 hours depending on the model size.",
"The authors would like to thank the anonymous reviewers for their discussion and feedback."
] | [
"method",
"abstain",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other"
] |
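The DART record above sketches two construction steps that lend themselves to a concrete illustration: sampling a connected component from the column-ontology tree with an expansion parameter p (DFS-like as p approaches 1, BFS-like as p approaches 0), and reading off (parent value, column header, cell value) triples. Below is a minimal Python sketch under assumed data structures — a children dict for the tree, a parent dict, and a cell_value dict for one row. All function and variable names are hypothetical, not taken from the DART codebase.

```python
import random

def sample_component(children, root, p, size_range=(2, 5)):
    """Sample a connected subtree of the ontology tree.

    children maps each node to its list of child nodes; p trades depth
    (p -> 1 approaches DFS) against breadth (p -> 0 approaches BFS).
    """
    target = random.randint(*size_range)
    selected, queue = [root], list(children.get(root, []))
    while queue and len(selected) < target:
        node = queue.pop(0)
        selected.append(node)
        kids = children.get(node, [])
        if kids and random.random() < p:
            queue = kids + queue   # expand children first: deeper component
        else:
            queue = queue + kids   # defer children: wider component
    return selected

def extract_triples(component, parent, cell_value):
    """For each non-root node N, emit (value(parent(N)), header N, value(N)),
    e.g. (Amsterdam Admirals, STADIUM, Olympisch Stadion).

    cell_value is assumed to also map [TABLECONTEXT] and [TITLE] to their
    surface strings so that top-level columns get a subject.
    """
    return [
        (cell_value[parent[n]], n, cell_value[n])
        for n in component
        if parent.get(n) is not None
    ]
```

Per the record, p is drawn uniformly from [0.5, 0.7] to create reasonable shape variation, highlighted nodes that do not form a connected subtree are closed up to their lowest common ancestor before extraction, and triple sets with more than 10 triples are discarded.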
[
"Previous sarcasm generation research has focused on how to generate text that people perceive as sarcastic to create more human-like interactions.",
"In this paper, we argue that we should first turn our attention to the question of when sarcasm should be generated, finding that humans consider sarcastic responses inappropriate to many input utterances.",
"Next, we use a theory-driven framework for generating sarcastic responses, which allows us to control the linguistic devices included during generation.",
"For each device, we investigate how much humans associate it with sarcasm, finding that pragmatic insincerity and emotional markers are devices crucial for making sarcasm recognisable.",
"The prevalence of sarcasm on the social web (Kho-dak et al., 2018; Sykora et al., 2020) has motivated computational investigations across the NLP community.",
"Most focus on textual sarcasm detection, the task of classifying whether or not a given text is sarcastic (Riloff et al., 2013; Wallace et al., 2015; Rajadesingan et al., 2015; Bamman and Smith, 2015; Joshi et al., 2016; Amir et al., 2016; Hazarika et al., 2018; Oprea and Magdy, 2019; Abu Farha et al., 2022).",
"A recent research direction considers sarcasm generation.",
"Approaches to sarcasm generation introduced so far (Joshi et al., 2015; Mishra et al., 2019; Chakrabarty et al., 2020) are mainly motivated by the potential to create more approachable, human-like conversational agents, considering that sarcasm is a natural part of human discourse.",
"We suggest reconsidering this motivation, as a community, for two reasons.",
"First, in human discourse, sarcasm is not a communicative goal in itself.",
"Rather, it can be used to achieve a wide variety of goals.",
"Some of these goals, such as to diminish the impact of criticism (Dews and Winner, 1995), to create humour (Kreuz et al., 1991; Colston and O'Brien, 2000b,a), to praise (Bruntsch and Ruch, 2017), or to strengthen relationships (Jorgensen, 1996; Pexman and Zvaigzne, 2004), might be desirable in human-machine interactions as well.",
"However, other goals, such as criticising, mocking, or expressing dissociation, often with surface contempt or derogation (Wilson, 2006), might not be desirable in human-machine interactions.",
"Second, the communicative goals mentioned above were observed in human interactions.",
"Even when a machine seeks potentially desirable goals, it is unclear whether sarcastic utterances have the same effect on humans when coming from machines.",
"RQ1.",
"When should a chatbot be sarcastic?",
"(a) When do humans consider sarcasm appropriate?",
"(b) When do humans prefer sarcasm, over non-sarcasm?",
"RQ2.",
"How should a chatbot formulate sarcasm?",
"(a) What linguistic devices do humans associate with sarcasm?",
"(b) What sarcasm flavour do they prefer?",
"Here, by flavour , we mean a specific conjunction of linguistic devices that humans may associate with sarcasm, such as intensifiers and emotional markers, as introduced in Section 3, and expanded upon in Section 4. To address our research questions, we suggest the following approach.",
"First, given a set of input utterances, generate several sarcastic responses.",
"Each response should be of a specific sarcasm flavour, i.e. should display a specific conjunction of linguistic devices.",
"Next, create a survey that asks human participants: to indicate how appropriate it was to respond sarcastically to the input; to select their preferred response; and to rate the sarcasticness of each response, investigating whether they associate the linguistic devices in the response with sarcasm.",
"To achieve this, we require a sarcastic response generator that provides control over the linguistic devices used.",
"Most previous generators rely on variants of the traditional theory of sarcasm, which claims that the intended meaning concealed by sarcasm is the opposite of the literal meaning.",
"However, this theory provides a grounding that is neither necessary, nor sufficient, for sarcasm to occur, as discussed in Section 3. To overcome this limitation, we recently introduced Chandler, a novel modular sarcastic response generation framework (Oprea et al., 2021).",
"It is grounded on a formal theory that, from a linguistic-theoretical perspective, specifies devices whose presence is both necessary and sufficient to unambiguously differentiate sarcasm from non-sarcasm.",
"These are allusion to a failed expectation, pragmatic insincerity, and emotional markers.",
"Chandler can generate sarcasm of different flavours, and allows control over flavour its output should reflect.",
"Herein, we also compare Chandler's outputs to those of previous generators, to examine participant preferences toward an even greater range of sarcasm flavours.",
"Our results indicate that people find sarcastic responses inappropriate for most input utterances.",
"When sarcasm was considered appropriate, the inputs commonly had a positive sentiment, and often had elements of humour.",
"Further, even when considered appropriate, people still did not usually prefer sarcastic responses over non-sarcastic ones.",
"Sarcasm was typically preferred when it was also considered funny and not too specific.",
"Finally, we identified pragmatic insincerity and emotional markers (cf. Section 3) as crucial linguistic devices to include in generating recognizable sarcasm.",
"Contributions We summarise our contributions as follows.",
"First, our approach allows us to understand people's preferences about when sarcasm should be used, and how it should be formulated.",
"Using this information, we provide guidelines for future work in sarcasm generation.",
"Second, observing people's preferences also allows us to quantitatively evaluate the practical advantages of the formal linguistic theory that grounds Chandler.",
"The earliest work on sarcasm generation is that of Joshi et al. (2015), who introduce SarcasmBot, a sarcastic response generation system.",
"SarcasmBot uses one of eight possible generators, each containing a set of predefined patterns, one of which is instantiated as the response.",
"The generators do not in fact account for the meaning of the input, rather, they only focus on aspects such as the overall sentiment or presence of swear words.",
"Further, in our experiments, we noticed that most of the time a fallback generator was employed, returning the simple concatenation of a random positive phrase to a random negative one, from a set of predefined phrases that have no specific connection to the input.",
"Mishra et al. (2019) suggest a sarcastic paraphrase generator.",
"They assume that the input is always of negative polarity, and suggest an unsupervised pipeline of four modules to convert such an input u ( ) to a sarcastic version.",
"In the Sentiment Neutralisation module, they filter out negative sentiment words from u ( ) to produce u (0) .",
"In the Positive Sentiment Induction module, they modify u (0) to convey positive sentiment, producing u (+) .",
"Next, in the Negative Situation Retrieval module, they mine a phrase v ( ) that expresses a negative situation.",
"v ( ) is selected from a set of predefined phrases, based on the similarity to the original input.",
"Finally, the Sarcasm Synthesis module constructs the sarcastic paraphrase from u (+) and v ( ) .",
"Chakrabarty et al. (2020) suggest a similar pipeline.",
"Their R 3 system first employs a Reversal of Valence module, which replaces input words of negative valence with their lexical antonyms using WordNet (Miller, 1995) to produce u (+) .",
"Next, it builds an utterance v that is incongruous to u (+) , and generates sarcasm from u (+) and v .",
"Previous generators share a limitation that make them unfit for our purposes.",
"Mainly, relying on the traditional theory, they identify sarcasm with linguistic incongruity.",
"Thus, they only provide this single device for investigation, device that is not sufficient for sarcasm to occur, as discussed in Section 3. A further limitation, shared by Mishra et al. (2019) and Chakrabarty et al. (2020), is that their generators only work with input utterances of negative sentiment.",
"However, as discussed earlier, sarcastic communication can have many goals, 7687 including to praise, or to strengthen friendships.",
"In this section, we describe the Implicit Display Theory, a formal linguistic theory that grounds Chandler.",
"Previous Theories In the traditional theories , sarcasm is created by literally saying one thing but figuratively meaning, or conversationally implicating (Grice, 1975), the opposite.",
"However, such incongruity is not necessary for sarcasm.",
"To see this, consider sarcastic understatements such as saying This was not the best movie ever to mean the movie was bad.",
"It is also not sufficient.",
"For instance, it also occurs in the construction of certain stylistic devices, such as metaphors, e.g. Time is money.",
"Further theories have been suggested to address these limitations, including the echoic mention theory (Sperber and Wilson, 1981) and its variants (Kreuz and Glucksberg, 1989; Wilson and Sperber, 1992; Sperber and Wilson, 1998), and the pretense theory (Clark and Gerrig, 1984) and its variants (Clark, 1996).",
"However they all fail to uniquely identify sarcasm, as argued by Utsumi (2000) and Oprea and Magdy (2020).",
"Implicit Display Theory (IDT) Introduced by Utsumi (1996), the IDT focuses specifically on making the distinction between sarcasm and non-sarcasm.",
"We invite the interested reader to consult (Utsumi, 2000) for an overview of how it overcomes the limitations of previous theories.",
"We chose it as a grounding for our generation system.",
"The IDT first defines the concept of an ironic environment.",
"We say a situation in which an utterance occurs is surrounded by an ironic environment if the discourse context includes the following components: (1) The speaker has expectation Q at time t 0 ; (2) Q fails at time t 1 > t 0 ; and (3) The speaker has a negative attitude towards the failure of Q .",
"Note that the idea of linking sarcasm to an expectation is not new to Utsumi (1996), rather it is supported by previous work (Kreuz and Glucksberg, 1989; Kumon-Nakamura et al., 1995).",
"Next, according to the IDT, an utterance is sarcastic if and only if it implicitly displays the ironic environment.",
"Implicit display is realised if the following linguistic devices are present in the utterance: (1) allusion to the speaker's failed expectation Q ; (2) pragmatic insincerity, realised by intentionally violating one of the pragmatic principles, e.g. Grice's maxims (Grice, 1975); and (3) implication (indirect expression) of the speaker's negative attitude towards the failure of Q .",
"Finally, the theory claims that the degree of sarcasm of an utterance is proportional to how many of these linguistic devices are present in the utterance.",
"In this section we look at the methodology employed to address our research questions.",
"Specifically, we first select a set of input utterances.",
"Next, for each input, we generate four sarcastic responses of different flavours using Chandler, and three more responses using other systems.",
"Finally, for each input, in a survey, we ask human participants to rate the responses across several dimensions, to understand their preference towards the appropriateness of sarcasm, and which linguistic devices they associate with sarcasm.",
"As inputs, we select texts from the corpus published by Wilson and Mihalcea (2019).",
"The corpus contains short texts (extracted from tweets) where users describe actions they performed.",
"We compute the sentiment polarity of each text using the classifier from Barbieri et al. (2020), a RoBERTa model (Liu et al., 2019) fine-tuned on the tweet sentiment dataset from Rosenthal et al. (2017).",
"Next, we form five partitions of 50 texts each: very negative and very positive , containing the top 50 texts based on their negative and positive probabilities, respectively; negative , containing random texts for which the probability of being negative was higher that the probabilities of being positive or neutral; and positive and neutral , partitions that we formed analogously to how we formed the negative partition.",
"Our final input dataset contains 250 texts.",
"For completeness, in this section we describe Chandler, the sarcastic response generator that we introduced in Oprea et al. (2021).",
"The IDT directly suggests an algorithm for sarcasm generation that identifies an ironic environment, then creates an utterance that implicitly displays it.",
"We now discuss how we implement each step.",
"Ironic Environment As discussed in Section 4.1, each input text U in describes an action.",
"In this scenario, herein, we assume the expectation 7688 Q that is part of the ironic environment negates that action.",
"For instance, say U in expresses the event P = [ <user> wins the marathon ] .",
"We assume Q = P = [ <user> does not win the marathon ] .",
"As we shall see, the algorithm we suggest will not, in fact, require us to formulate Q, but it relies on the above assumption.",
"Allusion to Q Following Utsumi (2000), we de-fine allusion in terms of coherence relations, similar to the relations of rhetorical structure theory (RST) (Mann and Thompson, 1987).",
"That is, if U is an utterance that expresses proposition , we say U alludes to the expectation Q if and only if there is a chain of coherence relations from to Q 1 .",
"So, we need to first select a proposition to either start or end the coherence chain, then specify the chain between and Q , and formulate U such that it expresses .",
"We suggest defining such as objects of if-then relations, where the subject is P , the proposition expressed by input text U in .",
"That is, relations of the form if P then should hold.",
"To infer given U in , we use COMET (Bosselut et al., 2019), an adaptation framework for constructing commonsense knowledge.",
"Specifically, we use the COMET variant fine-tuned on ATOMIC (Sap et al., 2019), a dataset of typed if-then relations.",
"COMET inputs the subject of the relation, along with the relation type, and outputs the relation object.",
"In our case, the subject is U in , and we set to the relation object.",
"In the examples that follow, assume the input text is U in = <user> won the marathon'.",
"We leverage four relation types: (1) xNeed : the object of a relation of this type specifies an action that the user needed to perform before the event took place, e.g. if U in then = [ xNeed to train hard ] ; (2) xAttr : the object specifies how a user that would perform such an action is seen, e.g. if P then = [ xAttr competitive ] ; (3) xReact : the object specifies how the user could feel as a result of the event, e.g. if P then = [ xReact happy ] ; and (4) xEffect : the object specifies a possible effect that the action has on the user, e.g. if P then = [ xEffect gets congratulated ] .",
"In Table 1 we show, for each relation type, the coherence chains between the relation object and the failed expectation Q .",
"Under these conditions, to generate an utterance U that alludes to Q , we need to choose 1 Note that a restriction in Utsumi (2000)'s definition of allusion is that U does not directly express the state of affairs that Q is expected via phrases such as \"I've expected ...\".",
"Pragmatic insincerity The second requirement for implicit display is that the utterance generated should include pragmatic insincerity.",
"In this paper, we focus on violating Grice's maxim of quality (Grice, 1975), where we aim for the propositional content of the generated utterance to be incongruous to that of U in (input text).",
"To achieve this, we first choose an if-then relation type, then infer the relation object from U in using COMET, and construct an utterance that expresses .",
"For instance, if U in = <user> won the marathon', and we have chosen the xAttr relation type, the constructed utterance could express = [ <user> is not competitive ] .",
"Negative attitude To fulfill the last requirement of implicit display, the utterance generated should imply a negative attitude towards the failure of the expectation Q .",
"As pointed out by Utsumi (1996), this can be achieved by embedding verbal cues usually associated with such attitudes, including hyperbole and interjections.",
"Logical form and explainability At this point we formulate Algorithm 1 for generating a sarcastic response U out , given an input utterance U in that expresses proposition P .",
"We refer to emotion ( ) as the logical form of the sarcastic response we generate.",
"Here, emotion is a function that augments to express a negative attitude.",
"Note that the logical form, together with the coherence chain between and the failed expectation Q , provide a complete explanation for how and why sarcasm occurs.",
"The explanation is (cid:15) = ( emotion ( ) , C ) , where is C the coherence chain from to Q .",
"The coherence chain for each relation type can be selected from Table 1. This makes our sarcasm generation process accountable.",
"Logical Form to Text To convert the logical form to text, we rely on predefined patterns for each if-then relation type.",
"As a running example, 7689 relation type example relation coherence chain xNeed if P then = [ xNeed to train hard ] volitional-cause ( , P ) and contrast ( P, Q ) xAttr if P then = [ xAttr competitive ] condition ( , IP ) purpose ( IP , P ) contrast ( P, Q ) xReact if P then = [ xReact happy ] contrast ( Q, P ) volitional-result ( P, ) xEffect if P then = [ xEffect gets congratulated ] contrast ( Q, P ) non-volitional-result ( P, ) Table 1: Coherence chains between the object of an if-then relation and the failed expectation Q , for each relation type, as discussed in Section 4.2.",
"assume the input utterance U in = <user> won the marathon' and the chosen relation type is xAttr .",
"Say = COMET ( U in , xAttr ) = [ xAttr competitive ] .",
"The logical form is emotion ( [ xAttr competitive ]) .",
"We first construct an intermediate utterance U using the following rule: <user> <verb> competitive .",
"Here, <verb> is a verb specific to each relation type.",
"In our example, U could be <user> is competitive'.",
"Next, for each input U in , we generate three responses.",
"The first response U e out only includes pragmatic insincerity, i.e. it expresses [ xAttr competitive ] .",
"To construct it, we apply a rule-based algorithm to generate the negation of U in a manner similar to (Chakrabarty et al., 2020), discussed in Section 2. U e out could be <user> is not competitive'.",
"The second response U i out does not include pragmatic insincerity, but only markers that express an emotional attitude, i.e. it expresses emotion ([ xAttr competitive ]) .",
"To achieve this, in a pattern-based manner, we augment U with hyperbole and interjections, as indicated by Utsumi (2000).",
"U i out could be <user> is definitely competitive, yay!'.",
"The third response U out includes both devices, i.e. it expresses emotion ( [ xAttr competitive ]) .",
"U out could be <user> is definitely not competitive, yay!'.",
"A full list of patterns is shown in Section A in the appendix.",
"In the running example we focused on the xAttr relation type.",
"Recall there are four relation types that we consider, xNeed , xAttr , xReact , and xEffect .",
"As such, for each input text U in , we generate 12 responses: three response types, U e out , U i out , and U out , for each relation type.",
"We use the pattern Ch-<relation > ( | i | e )?",
"to refer to each response of our system, Chandler .",
"For instance, Ch-xAttr refers to U out built considering the xAttr relation, while Ch-xNeed e refers to U e out built considering the xNeed relation.",
"Note that other strategies for converting the logical form of sarcasm to text are possible.",
"For instance, using policy-based generation with external rewards (Mishra et al., 2019) might have lead to higher perceived sarcasticness of our generated responses.",
"However, we leave this to future work.",
"Our goal is to understand user preferences towards when sarcasm should be used, and how sarcasm should be formulated.",
"We built three surveys, labelled",
"(a)(c), that we published on the Prolific Academic 2 crowdsourc-ing platform, one for each output type, out of U e out , U i out , and U out .",
"As such, in the survey corresponding to U out , we presented participants with the input text U in , along with the responses produced by Chandler-xNeed, Chandler-xAttr, Chandler-xReact, and Chandler-xEffect.",
"In each survey, we also enclosed a response from DialoGPT (Zhang et al., 2020), a recent dialogue system that is not built to be sarcastic; a response produced by SarcasmBot, the sarcastic response generator of Joshi et al. (2015) ; and a response produced by R 3 , the state-of-the-art sarcastic paraphrase generator of Chakrabarty et al. (2020) 3 .",
"We make a few observations.",
"First, DialoGPT is used as a reference system, following the reasoning of Joshi et al. (2015): responses designed to be sarcastic should have a higher perceived sarcasticness than responses from DialoGPT, which are not designed to be sarcastic.",
"Second, note that R 3 is designed to produce rephrases.",
"As such, we applied R 3 to the output of DialoGPT to get a sarcastic rephrase of a response to the input.",
"Table 2 shows an example input utterance, along with responses from all systems.",
"All in all, each survey instance contained a specific input text, and seven responses generated as mentioned above and presented in a random order.",
"In the survey, we asked participants to evaluate each response across four dimensions: (1) Sarcasm: How sarcastic is the response?",
"(2) Humour: How 2 https://prolific.co 3 https://github.com/tuhinjubcse/SarcasmGeneration-ACL2020 7690 system response DialoGPT I'm not sure if you're being sarcastic or not.",
"funny is the response?",
"(3) Coherence: How coherent is the response to the input?",
"It is coherent if it sounds like sensible response that a person might give in a real conversation; and (4) Specificity: How specific is the response to the input?",
"It is not specific if it can be used as a response to many other inputs.",
"Each dimension ranged from 0 to 4, in line with previous work (Chakrabarty et al., 2020).",
"Next, we asked participants to select their preferred response out of the seven, i.e. the one they would personally use.",
"Finally, we asked them to judge, on a scale from 0 to 4, how appropriate it was to respond sarcastically to the shown input text.",
"Each survey instance was presented to three different participants.",
"However, we did not use a voting scheme to aggregate the three survey instances into one.",
"Rather, aggregation was conducted per-system.",
"This is because our metrics (e.g. sarcasticness, preference towards a response, appropriateness) are inherently subjective, depending on the sociocultural background of the participants.",
"See, for instance, the work of Oprea and Magdy (2020).",
"As such, the concept of correct answer does not exist in the conventional sense.",
"Indeed, the inter-participant agreement was low, but not surprisingly so, given that participants could have come from different sociocultural backgrounds.",
"However, this does not entail that population statistics are not informative.",
"As related work in this direction, consider that of Amidei et al. (2018), who make the very pos pos neutral neg very neg 0 1 2 Figure 1: Mean sarcasm appropriateness score for each sentiment category, as discussed in Section 5.1.",
"point an unchecked focus on reduction of disagreement among annotators runs the danger of creating generation goals that reward output that is more distant from, rather than closer to, natural humanlike language. (Amidei et al., 2018) Consider also the work of Davani et al. (2021), who discuss the issue of disagreement in subjective tasks.",
"We do, however, encourage more work in this direction.",
"We now look at the responses that the participants provided in our survey, addressing our RQs.",
"Figure 1 shows the mean appropriateness score for each of the five sentiment categories.",
"A one-way ANOVA test between the means yielded a p -value 0 .",
"001 .",
"We therefore proceeded with Tukey's range test (Tukey, 1949), to find the means that are significantly different from one another.",
"We noticed that sarcasm was considered significantly more appropriate by survey participants in responses to positive inputs, compared to very positive, and very negative inputs, respectively.",
"This supports our statement from Section 2: the assump-tion of previous state-of-the-art generators that sarcasm should only be generated for negative inputs is problematic.",
"However, even for the positive class, the mean appropriateness is less than 2. This makes it difficult to recommend responding sarcastically based on sentiment only.",
"To gain more insight, we proceeded with a qualitative inspection of the inputs that yielded the highest and lowest appropriateness scores, respectively.",
"We noticed a few main themes, that we labelled joke , family , school , leisure and death .",
"We then asked two humans to label all inputs across these dimensions.",
"A third human resolved all disagreements.",
"Finally, we computed the Pearson correlation coefficient of each theme with the sarcasm appropriateness score, across all inputs.",
"We noticed a significant ( p < 0 . 05 ) positive correlation 7691 text approp.",
"between appropriateness and the category joke , and significant negative correlation with belonging to the family theme.",
"We show some examples of the theme family with low appropriateness scores in Table 3. Thus, according to our analysis, sarcasm seems to be most appropriate for positive inputs, and for humorous inputs, which may invite more sarcastic responses.",
"In other situations, however, sarcasm might be interpreted as inappropriate and even offensive (Meaney et al., 2021).",
"We first consider the overall preference towards either sarcasm or non-sarcasm.",
"Recall that participants also specified their preferred response for each input.",
"The distribution of the sarcasm, humour, specificity, and coherence scores of this preferred response, across all survey instances, is illustrated in Figure 2 with a blue, continuous, line.",
"The red, dashed, line illustrates the distribution across the 80 survey instances where the sarcasm appropriateness score of the input was higher than the midpoint, i.e. at least 3. We notice considerably higher preference towards non-sarcastic and non-humorous responses.",
"As indicated by the blue lines, over 50% of the preferred responses were those considered non-sarcastic and non-humorous by participants, the rest of the distribution being highly skewed towards the lower sarcasm and humour regions.",
"Furthermore, note that even when sarcasm was considered highly appropriate, participants still preferred non-sarcastic responses, as indicated by the red, dashed, line in the top-left of Figure 2. Although there is a shift in the distribution towards sarcasm in this case, the skew is still towards the non-sarcastic region.",
"Looking at the bottom row of Figure 2, on the other hand, we notice a negative skew, indicating an overall preference towards higher coherence.",
"This is slightly the case for specificity as well.",
"To investigate further, we fit a logistic regression model to predict whether a response is preferred based on its sarcasm, humour, specificity, coherence scores, and two-way interactions between these variables.",
"All coefficients are listed in Appendix B. We noticed noticed a significant ( p < 0 . 05 ) positive relationship between coherence and preference, as well as the interaction between sarcasm and humour.",
"The term representing the product of sarcasm and specificity had a significant negative effect on preference.",
"In terms of the specific systems, we notice DialoGPT was preferred about 44% of the time, followed by Ch-xAttr i (20%), and SarcasmBot (15%), which corresponds exactly to the coherence ranking in Table 4. Our results indicate that responses with high coherence to the inputs are generally preferred over sarcastic responses.",
"Sarcasm is only preferred when it is also considered humorous.",
"On the other hand, participants seem to have actively avoided sarcastic responses that were very specific.",
"In Table 4 we show mean sarcasm, humour, specificity, and coherence scores provided by participants for each variant of Chandler, across all inputs.",
"In the table, there are four groups (14) and three systems within each group (ac).",
"Rows with index",
"(a) show scores for the complete versions of Chandler, for each if-then relation type.",
"Rows",
"(b) and",
"(c) show partial versions, omitting pragmatic insincerity and emotional markers, respectively.",
"Allusion We have four strategies for alluding to the failed expectation, depending on the relation type considered.",
"We notice the highest sarcasm score is achieved by Ch-xAttr (row 2a), followed by Ch-xNeed (row 1a), Ch-xReact (row 3a) and Ch-xEffect (row 4a).",
"The same ranking holds for 7692 System sarc.",
"variants of Chandler that do not include pragmatic insincerity or emotional markers.",
"Out of the allusion strategies selected, the responses perceived as most sarcastic are those that mention attributes of the user.",
"Similarly, we notice that variants of Chandler that use the xAttr relation are also perceived and the most coherent, specific to the input, and achieve the highest humour score.",
"Pragmatic Insincerity Comparing the complete version, Ch-xAttr (row 2a), with Ch-xAttr i (row 2b), the same model without pragmatic insincerity, we notice a significant drop in average sarcasm score.",
"We observe a similar trend in group 3 for Ch-xReact i , indicating the importance of pragmatic insincerity.",
"However, this did not hold for the other two relation types.",
"Additionally, both specificity and coherence seem to significantly increase when removing pragmatic insincerity, irrespective of the relation type considered.",
"Emotional Markers Comparing complete versions of Chandler with those that omit emotional markers, we notice that the omission of such markers leads to significantly lower perceived sarcasm for all relation types.",
"Humour is also significantly impacted by the omission of emotional markers for all relation types considered except for xEffect (row 4).",
"Oh the other hand, coherence and specificity are not significantly influenced.",
"To sum up, the degree of perceived sarcasm is influenced by all linguistic devices considered.",
"Out of the if-then relation types we consider, mentioning attributes of the user seems to lead to the highest perceived sarcasm, humour, specificity and co-very pos pos neutral neg very neg 0 0.2 0.4 0.6 DialoGPT SarcasmBot DialoGPT+R3 Ch-xNeed Ch-xAttr Ch-xReact Figure 3: Normalized number of times each system was preferred for instances were the participant preferred a response that they also considered sarcastic.",
"herence.",
"Being insincere about the state of affairs leads to significantly higher perceived sarcasm, but significantly lower specificity and coherence.",
"Emotional markers increase sarcasm and humour perception, but do not significantly impact specificity or coherence.",
"Finally, recall that a main claim of IDT was that the degree of sarcasticness of an utterance grows with the number of implicit display conditions met.",
"Our results support this claim.",
"While we established that participants typically preferred non-sarcatic responses, we next set out to find what sarcasm people preferred in our experiments when they did prefer sarcasm.",
"To do this, we consider the set of survey instances that showed the complete versions of Chandler, where the sarcasm score given by the participant to their preferred response was at least 3, leaving us with 107 (around 14%) of the 750 survey instances.",
"We divide these instances into five categories, based on input sentiment.",
"Within each category, for each generation system, we count the number of times that a response produced by that system was preferred.",
"Figure 3 shows the normalised counts across all systems, for each sentiment category.",
"We observe that, for positive inputs, where sarcasm was considered significantly more appropriate than other sentiment categories, people prefer responses produced by Ch-xNeed.",
"Interestingly, however, we observe that people prefer the fairly nonspecific, pattern-based sarcastic remarks produced by SarcasmBot for most types of input text.",
"However, when analysing its outputs, we noticed it produced a total of only 28 unique responses (listed in Appendix C) to our 250 inputs.",
"While in our experiments each response was only shown at most three times, in a real scenario of a user interacting with a conversational agent, the user might not appreciate repeatedly receiving the same response.",
"We recommend that future work on sarcasm generation should account for the four main findings: (1) People think sarcasm is inappropriate as a response to most inputs.",
"However, if it is to be used, it is seen as most appropriate when the input is positive, but not extremely positive.",
"People also found sarcasm to be a suitable response to jokes.",
"(2) Even when deemed appropriate, people usually do not prefer sarcasm.",
"Rather, coherence is the most important factor in explaining their response preferences.",
"When people do prefer sarcasm, they like it mainly when it is also seen funny.",
"Further, they generally dislike sarcasm that is very specific.",
"(3) When generating sarcasm, pragmatic insincerity and emotional markers are important to include as they have a high influence of sarcasm perception.",
"(4) Overall, people commonly prefer the simple general sarcastic responses of SarcasmBot, even compared to more sophisticated generation models, which suggests that presently, a simpler solution to sarcasm generation may actually be advantageous.",
"Nevertheless, more investigation is required to examine if it will be desirable in long conversations, since it has limited diversity in outputs.",
"We have used a linguistically informed framework for sarcasm generation so that we could present human judges with a variety of flavors of sarcastic responses in a range of situations.",
"Our findings suggest that sarcasm should not always be generated, but the decision to generate sarcasm itself should informed by user preferences.",
"People find sarcasm most appropriate as a response to positive utterances and cases in which a joking environment has already been established.",
"Further, judges preferred sarcasm most when they actually found it to be funny, and most often preferred general sarcastic responses.",
"However, people often preferred non-sarcastic responses even more.",
"We recommend that future work in this area carefully considers both the appropriateness and necessity of generating sarcasm at all.",
"In our experiments, we noticed that some of the input tweets contained references to sensitive top-ics, such as religion and gender, or to tragic life events.",
"Producing sarcasm for such inputs might be inappropriate and offensive to some (as our experiments confirmed).",
"We clearly informed our survey participants about this possibility in the Participant Information Sheet, before accessing our survey.",
"The sheet is enclosed in Appendix D. References Ibrahim Abu Farha, Silviu Vlad Oprea, Steven R. Wilson, and Walid Magdy."
] | [
"abstain",
"objective",
"method",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"result",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"An interesting and frequent type of multiword expression (MWE) is the headless MWE, for which there are no true internal syntactic dominance relations; examples include many named entities (Wells Fargo) and dates (July 5, 2020) as well as certain productive constructions (blow for blow, day after day).",
"Despite their special status and prevalence, current dependency-annotation schemes require treating such flat structures as if they had internal syntactic heads, and most current parsers handle them in the same fashion as headed constructions.",
"Meanwhile, outside the context of parsing, taggers are typically used for identifying MWEs, but taggers might benefit from structural information.",
"We empirically compare these two common strategiesparsing and taggingfor predicting flat MWEs.",
"Additionally, we propose an efficient joint decoding algorithm that combines scores from both strategies.",
"Experimental results on the MWE-Aware English Dependency Corpus and on six non-English dependency treebanks with frequent flat structures show that: (1) tagging is more accurate than parsing for identifying flat-structure MWEs, (2) our joint decoder reconciles the two different views and, for non-BERT features, leads to higher accuracies, and (3) most of the gains result from feature sharing between the parsers and taggers.",
"Headless multi-word expressions (MWEs), including many named entities and certain productive constructions, are frequent in natural language and are important to NLP applications.",
"In the context of dependency-based syntactic parsing, however, they pose an interesting representational challenge.",
"Dependency-graph formalisms for syntactic structure represent lexical items as nodes and head-dominates-modifier/argument relations between Officials at Mellon Capital were unavailable for comment O O B I O O O O nsubj case nmod mwe_NNP xcomp case nmod Figure 1: Dependency tree from the MWE-Aware English Dependency Corpus, imposing a head relationship between the words in the actually headless MWE Mellon Capital .",
"lexical items as directed arcs on the corresponding pair of nodes.",
"Most words can be assigned clear linguistically-motivated syntactic heads, but several frequently occurring phenomena do not easily fit into this framework, including punctuation, coordinating conjunctions, and flat, or headless MWEs.",
"While the proper treatment of headless constructions in dependency formalisms remains debated (Kahane et al., 2017; Gerdes et al., 2018), many well-known dependency treebanks handle MWEs by giving their component words a default head, which is not indicative of a true dominance relation, but rather as a tree encoding of a flat structure without a syntactic head (de Marneffe and Nivre, 2019, pg. 213).",
"Fig. 1 shows an example: the headless MWE Mellon Capital has its first word, Mellon , marked as the head of Capital .",
"Despite the special status of flat structures in dependency tree annotations, most state-of-the-art dependency parsers treat all annotated relations equally, and thus do not distinguish between headed and headless constructions.",
"When headless-span identification (e.g., as part of named-entity recognition (NER)) is the specific task at hand, begin-chunk/inside-chunk/outside-chunk (BIO) tagging (Ramshaw and Marcus, 1995) is generally adopted.",
"It is therefore natural to ask whether parsers are as accurate as taggers in identifying these flat branches in dependency trees.",
"Additionally, since parsing and tagging represent two different views of the same underlying structures, can joint decoding that combines scores from the two modules and/or joint training under a multitask learning (MTL) framework derive more accurate models than parsing or tagging alone?",
"To facilitate answering these questions, we introduce a joint decoder that finds the maximum sum of scores from both BIO tagging and parsing decisions.",
"The joint decoder incorporates a special deduction item representing continuous headless spans, while retaining the cubic-time efficiency of projective dependency parsing.",
"The outputs are consistent structures across the tagging view and the parsing view.",
"We perform evaluation of the different strategies on the MWE-Aware English Dependency Corpus and treebanks for five additional languages from the Universal Dependencies 2.2 corpus that have frequent multi-word headless constructions.",
"On average, we find taggers to be more accurate than parsers at this task, providing 0 .",
"59% ( 1 . 42% ) absolute higher F1 scores with(out) pre-trained contextualized word representations.",
"Our joint decoder combining jointly-trained taggers and parsers further improves the tagging strategy by 0 .",
"69% ( 1 . 64% ) absolute.",
"This corroborates early evidence (Finkel and Manning, 2009) that joint modeling with parsing improves over NER.",
"We also show that neural representation sharing through MTL is an effective strategy, as it accounts for a large portion of our observed improvements.",
"Our code is publicly available at https://github.com/tzshi/flat-mwe-parsing .",
"A (multi-word) headless construction, or flat structure, is a span of lexical items that together reference a single concept and where no component is a syntactically more plausible candidate for the span's head than any other component.",
"Examples are boldfaced in the following English sentences.",
"(1) Within the scope of this paper:",
"a. ACL starts on July 5, 2020 .",
"b. My bank is Wells Fargo .",
"c. The candidates matched each other insult for insult .",
"(Jackendoff, 2008) (1)a and (1)b show that dates and many named entities can be headless constructions, suggesting that they are frequent.",
"Indeed, in the MWE-Aware English Dependency Corpus (Kato et al., 2017), nearly half of the sentences contain headless constructions, 75% of which are named entities.",
"For comparison, (2) shows examples of non-flat MWEs, which are also interesting and important, but they are beyond the focus of our paper.",
"(2) Outside the scope of this paper:",
"a. congressman at large (Sag et al., 2002) [head = congressman]",
"b. I have moved on .",
"[verb-particle construction, head = moved]",
"c. I take your argument into account .",
"(Constant et al., 2017) [light-verb construction, head = take] Returning to headless MWEs, the choice of representation for headless spans depends on the task.",
"In named-entity recognition , such spans are often treated as BIO tag sequences: 1 for example, in Fig. 1, Mellon is tagged as B and Capital is tagged as I.",
"In dependency parsing , where labeled dependency arcs are the only way to express a syntactic analysis (short of treating MWEs as atomic lexical items, which would result in a chicken-and-egg problem) is to impose arcs within the MWE's span.",
"Different corpora adopt different annotation conventions.",
"The MWE-Aware English Dependency Corpus uses the arc label mwe_NNP , as shown in Fig.",
"1. The Universal Dependencies (UD; Nivre et al., 2018) annotation guidelines have all following tokens in such constructions attached to the first one via arcs labeled flat , a choice that is admittedly in principle arbitrary.",
"2 The frequency of flat structures across different treebanks varies according to language, genre, and even tokenization guidelines, among other factors.",
"Table 1 lists the UD 2.2 treebanks with the highest and lowest percentage of flat relations.",
"While the Korean treebank ko_gsd (with the highest percentage) splits up most names into multiple tokens and connects them through flat , the Japanese treebank ja_gsd (no flat s at all) treats all names as compound nouns, and thus represents them as having internal structure without any indication that a special case has occurred.",
"3 Fig. 2 shows examples from the UD parallel treebanks, illustrating 1 In this paper, we adopt the original BIO tagset, which cannot properly represent discontinuous MWEs.",
"See Schneider et al. (2014) for modified tagsets providing such support.",
"2 universaldependencies.org/u/dep/flat.html 3 Some flat structures can end up using other dependency labels such as compound , as a result of the fact that many UD treebanks, including ja_gsd , are automatically converted from non-UD style annotations.",
"The UD annotations depend It contains a monument to Martin Luther King , Jr.",
"the diversity of annotation for the same sentence rendered in different languages.",
"Overall, more than 20% of the treebanks in the UD 2.2 collection have flat structures in more than 20% of their training-set sentences.",
"4 Therefore, a parsing approach taking into account the special status of headless structural representations can potentially benefit models for a large number of languages and treebanks.",
"Formally, given an n -word sentence w w 1 , w 2 , . . . , w n , we define its dependency structure to be a graph G p V, E q . Each node in V corresponds to a word in the sentence. Each (labeled) edge p h, m, r q P E denotes a syntactic relation labeled r between the head word w h and modifier word w m , where h, m P t 0 , 1 , . . . , n u and 0 denotes the dummy root of the sentence. Since we work with dependency treebanks, we require that the edges in E form a tree. To represent a multiword headless span w i , . . . , w j , all subsequent words in the span are attached to the beginning word w i , i.e., @ k P t i ` 1 , . . . , j u , p i, k, f q P E , where f is the special syntactic relation label de-on",
"noting headless structures ( flat in UD annotation). Alternatively, one can also use a BIO tag sequence T p t 1 , t 2 , . . . , t n q P t B , I , O u n to indicate the location of any headless spans within w . The headless MWE span w i , . . . , w j has the corresponding tags t i B and @ k P t i ` 1 , . . . , j u , t k I; tokens outside any spans are assigned the tag O. We call G and T consistent if they indicate the same set of headless spans for w .",
"We first present the standard approaches of edge-factored parsing (3.2) and tagging (3.3) for extracting headless spans in dependency trees, and then introduce a joint decoder (3.4) that finds the global maximum among consistent (tree structure, tag sequence) pairs.",
"Given a lengthn sentence w which we henceforth denote with the variable x for consistency with machine-learning conventionswe first extract contextualized representations from the input to associate each word with a vector x 0 (for the dummy word root), x 1 , . . . , x n .",
"We consider two common choices of feature extractors: (1) bidirectional long short-term memory networks (bi-LSTMs; Graves and Schmidhuber, 2005) which have been widely adopted in dependency parsing (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) and sequence tagging (Ma and Hovy, 2016); and (2) the Transformer-based (Vaswani et al., 2017) BERT feature extractor (Devlin et al., 2019), pre-trained on large corpora and known to provide superior accuracies on both tasks (Kitaev et al., 2019; Kondratyuk and Straka, 2019).",
"For BERT models, we fine-tune the representations from the final layer for our parsing and tagging tasks.",
"When the BERT tokenizer renders multiple tokens from a single pre-tokenized word, we follow Kitaev et al. (2019) and use the BERT features from the last token as its representation.",
"Since we consider headless structures that are embedded inside parse trees, it is natural to identify them through a rule-based post-processing step after full parsing.",
"Our parsing component replicates that of the state-of-the-art Che et al. (2018) parser, which has the same parsing model as Dozat and Manning (2017).",
"We treat unlabelled parsing as a head selection problem (Zhang et al., 2017) with deep biaffine attention scoring: h attach i MLP attach-head p x i q m attach j MLP attach-mod p x j q s i,j r h attach i ; 1 s JU attach r m attach j ; 1 s P p h j i | x q softmax i p s : ,j q , where MLP attach-head and MLP attach-mod are multilayer perceptrons (MLPs) that project contextualized representations into a d -dimensional space; r ; 1 s indicates appending an extra entry of 1 to the vector; U att PR p d ` 1 qp d ` 1 q generates a score s i,j for w j attaching to w i (which we can then refer to as the head of w j , h j ); a softmax function de-fines a probability distribution over all syntactic head candidates in the argument vector (we use the range operator : to evoke a vector); and, recall, we represent potential heads as integers, so that we may write h j i P t 0 , . . . , n u . The model for arc labeling employs an analogous deep biaffine scoring function: h rel i MLP rel-head p x i q m rel j MLP rel-mod p x j q v i,j,r r h rel i ; 1 s JU rel r r m rel j ; 1 s P p r j r | x, h j i q softmax r p v i,j, : q , where r j is the arc label between w h j and w j . The objective for training the parser is to minimize the cumulative negative log-likelihood L parse p i ,j ,r qP E r log P p h j i | x q log P p r i r | x, h j i qs . After the model predicts a full parse, we extract headless structures as the tokens covered by the longest-spanning f -arcs ( f flat in UD). 3.3 Tagging For extracting spans in texts, if one chooses to ignore the existence of parse trees, BIO tagging is a natural choice. We treat the decision for the label of each token as an individual multi-class classification problem. We let P p t i t | x q softmax t p MLP tag p x i qq , where MLP tag has 3 output units corresponding to the scores for tags B, I and O respectively. 5 We train the tagger to minimize L tag i log P p t i t i | x q , where t corresponds to the gold BIO sequence. During inference, we predict the BIO tags independently at each token position and interpret the tag sequence as a set of MWE spans. As a postprocessing step, we discard all single-token spans, since the task is to predict multi-word spans. 3.4 A Joint Decoder A parser and a tagger take two different views of the same underlying data. It is thus reasonable to hypothesize that a joint decoding process that combines the scores from the two models might yield more accurate predictions. In this section, we propose such a joint decoder to find the parser+tagger-consistent structure with the highest product of probabilities. Formally, if Y is the output space for all consistent parse tree structures and BIO tag sequences, for y PY with components consisting 5 Sequence tagging is traditionally handled by conditional random fields (Lafferty et al., 2001, CRFs). However, in recent experiments using contextualized representations on tagging (Clark et al., 2018; Devlin et al., 2019), CRF-style loss functions provide little, if any, performance gains compared with simple multi-class classification solutions, at slower training speeds, to boot. Our preliminary experiments with both bi-LSTM and BERT-based encoders corroborate these findings, and thus we report results trained without CRFs. 
Axioms: R-INIT : i i : log P p t i O q L-INIT : i i : 0 R-MWE : i j : p i, j q , where p i, j q log P p t i B q ` jk i ` 1 p log P p t k I q ` log P p h k i qq Deduction Rules: R-COMB : i k : s 1 k j : s 2 i j : s 1 ` s 2 R-LINK : i k : s 1 k ` 1 j : s 2 i j : s 1 ` s 2 ` log P p h j i q L-COMB : j k : s 1 k i : s 2 j i : s 1 ` s 2 L-LINK : j k 1 : s 1 k i : s 2 j i : s 1 ` s 2 ` log P p h j i q Figure 3: Eisner's (1996) algorithm adapted to parsing headless structures (unlabeled case), our modifications highlighted in blue. All deduction items are annotated with their scores. R-MWE combines BIO tagging scores and head selection parsing scores. We need no L-MWE because of the rightward headless-structure-arc convention. of tags t i , head assignments h i , and relation labels r i , our decoder aims to find y satisfying y arg max y P YP p y | x q , where P p y | x q i P p t i | x q P p h i | x q P p r i | x, h i q . Fig. 3 illustrates our joint decoder in the unlabeled case. 6 It builds on Eisner's (1996) decoder for projective dependency parsing. In addition to having single-word spans as axioms in the deduction system, we further allow multi-word spans to enter the decoding procedures through the axiom R-MWE . Any initial single-word spans receive an O-tag score for that word, while the newly introduced MWE spans receive B-tag, I-tag, attachment and relation scores that correspond to the two consistent views of the same structure. The time complexity for this decoding algorithm remains the same O p n 3 q as the original Eisner algorithm. During training, we let the parser and the tagger share the same contextualized representation x and optimize a linearly interpolated joint objective L joint L parse ` p 1 q L tag , 6 In the labeled case, the parser further adds the arc-labeling scores to the R-MWE and LINK rules. where is a hyper-parameter adjusting the relative weight of each module. 7 This is an instance of multi-task learning (MTL; Caruana, 1993, 1997). MTL has proven to be a successful technique (Col-lobert and Weston, 2008) on its own; thus, in our experiments, we compare the joint decoder and using the MTL strategy alone. 4 Experiments Data We perform experiments on the MWE-Aware English Dependency Corpus (Kato et al., 2017) and treebanks selected from Universal Dependencies 2.2 (UD; Nivre et al., 2018) for having frequent occurrences of headless MWE structures. The MWE-Aware English Dependency Corpus provides automatically unified named-entity annotations based on OntoNotes 5.0 (Weischedel et al., 2013) and Stanford-style dependency trees (de Marneffe and Manning, 2008). We extract MWE spans according to mwe_NNP dependency relations. We choose the UD treebanks based on two basic properties that hold for flat structures 7 The joint decoder combines tagging and parsing scores regardless of whether the two modules are jointly trained. However, since feature extraction is the most time-consuming step in our neural models, especially with BERT-based feature extractors, it is most practical to save memory and time by sharing common feature representations across modules. Treebank # tokens # headless % # headless Average Compliance arcs spans span length ratio English 731 , 677 32 , 065 4 . 38% 16 , 997 2 . 89 100 . 00% UD 2 . 2 de_gsd 263 , 804 6 , 786 2 . 57% 5 , 663 2 . 59 93 . 00% it_postwita 99 , 441 2 , 733 2 . 75% 2 , 277 2 . 26 94 . 89% nl_alpino 186 , 046 4 , 734 2 . 54% 3 , 269 2 . 45 100 . 00% nl_lassysmall 75 , 134 4 , 408 5 . 87% 3 , 018 2 . 46 99 . 
82% no_nynorsk 245 , 330 5 , 578 2 . 27% 3 , 670 2 . 54 99 . 78% pt_bosque 206 , 739 5 , 375 2 . 60% 4 , 310 2 . 25 97 . 38% Table 2: Dataset statistics. Language codes: de =German; it =Italian; nl =Dutch; no =Norwegian; pt =Portuguese. conforming to the UD annotation guidelines: (1) all words that are attached via flat relations must be leaf nodes and (2) all words within a flat span should be attached to a common head word, and each arc label should be either flat or punct .",
"8 For each treebank, we compute its compliance ratio , defined as the percentage of its trees containing flat arc labels that satisfy both properties above; and we filter out those with compliance ratios below 90%.",
"9 We rank the remaining treebanks by their ratios of flat relations among all dependency arcs, and pick those with ratios higher than 2%.",
"Six treebanks representing 5 languages, German (McDon-ald et al., 2013), Italian (Sanguinetti et al., 2018), Dutch (Bouma and van Noord, 2017), Norwegian (Solberg et al., 2014) and Portuguese (Rademaker et al., 2017), are selected for our experiments.",
"10 Data statistics are given in Table",
"2. To construct gold-standard BIO labels, we extract MWE spans according to the longest-spanning arcs that correspond to headless structures.",
"bi-LSTMs where each layer has 400 dimensions",
"8 punct inside a headless span is often used for hyphens and other internal punctuation in named entities.",
"See the English sentence in Fig. 2 for an example.",
"9 The two properties defined in the UD guidelines for headless structures provide us with a common basis for uniform treatment across languages and treebanks.",
"Unfortunately, the two properties can be violated quite often, due to issues in annotation and automatic treebank conversion into UD style.",
"In 6 out of the top 10 treebanks containing the most flat relations, (at least one of) these properties are violated in more than 35% of the sentences with flat relations and have to be excluded from our experiments.",
"We hope that ongoing community effort in data curation will facilitate evaluation on more diverse languages.",
"10 It is a coincidence that all the selected languages are Indo-European (IE).",
"Although there are some non-IE treebanks with high flat ratio, such as Korean (see Table 1), the annotated structures frequently break one or both of the basic properties.",
"See Fig. 2 for violation examples.",
"in both directions and the inputs are concatenations of 100 -dimensional randomly-initialized word embeddings with the final hidden vectors of 256 -dimensional single-layer character-based bi-LSTMs; for BERT, we use pre-trained cased multi-lingual BERT models 11 and fine-tune the weights.",
"We adopt the parameter settings of Dozat and Manning (2017) and use 500 and 100 dimensions for U att and U rel r , respectively.",
"The MLP in the taggers have 500 hidden dimensions.",
"We use a dropout (Srivastava et al., 2014) rate of 0 .",
"33 , a single hidden layer, and a ReLU activation function (Nair and Hinton, 2010) for all MLPs.",
"The models are trained with the Adam optimizer (Kingma and Ba, 2015) using a batch size of 16 sentences.",
"The learning rates are set to 1 e 3 for bi-LSTMs and 1 e 5 for BERT initially and then multiplied by a factor of 0 .",
"1 if the performance on the development set stops improving within 3200 training iterations.",
"For the parsing models, we use the projective Eisner (1996) decoder algorithm.",
"For the joint training and joint decoding models, we tune P t 0 .",
"02 , 0 .",
"05 , 0 .",
"1 , 0 .",
"3 , 0 .",
"5 , 0 .",
"9 u for each treebank independently and fix the settings based on the best dev-set scores.",
"We run each model with 5 different random seeds and report the mean and standard deviation for each setting.",
"Results We report F1 scores based on multi-word headless-structure extraction.",
"Table 3 compares different strategies for identifying headless MWEs in parse trees.",
"Tagging is consistently better than parsing except for two treebanks with BERT feature extractor.",
"Tagging beats parsing in all but two combinations of treebank and feature extractor.",
"As hypothesized, our joint decoder improves over both strategies by 0 .",
"69% ( 1 . 64% ) absolute through combined decisions from parsing and tagging with(out) 11 https://github.com/huggingface/transformers w/ bi-LSTM Compl.",
"BERT.",
"We also compare the joint decoding setting with MTL training strategy alone.",
"While joint decoding yields superior F1 scores, MTL is responsible for a large portion of the gains: it accounts for over half of the average gains with bi-LSTMs, and when we use pre-trained BERT feature extractors, the accuracies of jointly-trained taggers are essentially as good as joint decoding models.",
"Interestingly, the choice of feature extractors also has an effect on the performance gap between parsers and taggers.",
"With bi-LSTMs, tagging is 1 .",
"42% absolute F1 higher than parsing, and the gap is mitigated through MTL.",
"While pre-trained BERT reduces the performance difference dramatically down to 0 .",
"59% absolute, MTL no longer helps parsers overcome this gap.",
"Additionally, we observe that MTL helps both parsing and tagging models, demonstrating that the two views of the same underlying structures are complementary to each other and that learning both can be beneficial to model training.",
"By resolving such representational discrepancies, joint decoding exhibits further accuracy improvement.",
"In terms of dependency parsing accuracies, we confirm that our parsing-only models achieve state-of-the-art performance on the UD treebanks, but there are no significant differences in parsing results among parsing-only, MTL and jointly-decoded models.",
"See Appendix for detailed results.",
"Syntactic analysis in conjunction with MWE identification is an important line of research (Wehrli, 2000).",
"The span-based representations that form the basis of phrase-structure trees (as opposed to dependency trees) are arguably directly compatible with headless spans.",
"This motivates approaches using joint constituency-tree representations based on context-free grammars (Arun and Keller, 2005; Constant et al., 2013) and tree substitution grammars (Green et al., 2011, 2013).",
"Finkel and Manning (2009) add new phrasal nodes to denote named entities, enabling statistical parsers trained on this modified representation to produce both parse trees and named entity spans simultaneously.",
"Le Roux et al. (2014) use dual decomposition to develop a joint system that combines phrase-structure parsers and taggers for compound recognition.",
"These approaches do not directly transfer to dependency-based representations since dependency trees do not explicitly represent phrases.",
"In the context of dependency parsing, Eryigit et al. (2011) report that MWE annotations have a large impact on parsing.",
"They find that the dependency parsers are more accurate when MWE spans are not unified into single lexical items.",
"Similar to the phrase-structure case, Candito and Constant (2014) consider MWE identification as a side product of dependency parsing into joint representations.",
"This parse-then-extract strategy is widely adopted (Vincze et al., 2013; Nasr et al., 2015; Simk et al., 2017).",
"Waszczuk et al. (2019) introduce additional parameterized scoring functions for the arc labelers and use global decoding to produce consistent structures during arc-labeling steps once unlabeled dependency parse trees are predicted.",
"Our work additionally proposes a joint decoder that combines the scores from both parsers and taggers.",
"Alternative approaches to graph-based joint parsing and MWE identification include transition-based (Constant and Nivre, 2016) and easy-first (Constant et al., 2016) dependency parsing.",
"These approaches typically rely on greedy decoding, whereas our joint decoder finds the globally optimal solution through dynamic programming.",
"Our work only focuses on a subset of MWEs that do not have internal structures.",
"There is substantial research interest in the broad area of MWEs (Sag et al., 2002; Constant et al., 2017) including recent releases of datasets (Schneider and Smith, 2015), editions of shared tasks (Savary et al., 2017; Ramisch et al., 2018) and workshops (Savary et al., 2018, 2019).",
"We leave it to future work to extend the comparison and combination of taggers and dependency parsers to other MWE constructions.",
"Our paper provides an empirical comparison of different strategies for extracting headless MWEs from dependency parse trees: parsing, tagging, and joint modeling.",
"Experiments on the MWE-Aware English Dependency Corpus and UD 2.2 across five languages show that tagging, a widely-used methodology for extracting spans from texts, is more accurate than parsing for this task.",
"When using bi-LSTM (but not BERT) representations, our proposed joint decoder reaches higher F1 scores than either of the two other strategies, by combining scores of the two different and complementary representations of the same structures.",
"We also show that most of the gains stem from a multi-task learning strategy that shares common neural representations between the parsers and the taggers.",
"An interesting additional use-case for our joint decoder is when a downstream task, e.g., relation extraction, requires output structures from both a parser and a tagger.",
"Our joint decoder can find the highest-scoring consistent structures among all candidates, and thus has the potential to provide simpler model designs in downstream applications.",
"Our study has been limited to a few treebanks in UD partially due to large variations and inconsistencies across different treebanks.",
"Future community efforts on a unified representation of flat structures for all languages would facilitate further research on linguistically-motivated treatments of headless structures in headful dependency treebanks.",
"Another limitation of our current work is that our joint decoder only produces projective dependency parse trees.",
"To handle non-projectivity, one possible solution is pseudo-projective parsing (Nivre and Nilsson, 2005).",
"We leave it to future work to design a non-projective decoder for joint parsing and headless structure extraction.",
"We thank the three anonymous reviewers for their comments, and Igor Malioutov, Ana Smith and the Cornell NLP group for discussion and comments.",
"TS was supported by a Bloomberg Data Science Ph.D.",
"Fellowship."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"method",
"other",
"objective",
"method",
"abstain",
"objective",
"result",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Neural-based context-aware models for slot tagging have achieved state-of-the-art performance.",
"However, the presence of OOV(out-of-vocab) words significantly degrades the performance of neural-based models, especially in a few-shot scenario.",
"In this paper, we propose a novel knowledge-enhanced slot tagging model to integrate contextual representation of input text and the large-scale lexical background knowledge.",
"Besides, we use multilevel graph attention to explicitly model lexical relations.",
"The experiments show that our proposed knowledge integration mechanism achieves consistent improvements across settings with different sizes of training data on two public benchmark datasets.",
"Slot tagging is a critical component of spoken language understanding(SLU) in dialogue systems.",
"It aims at parsing semantic concepts from user utterances.",
"For instance, given the utterance I'd also like to have lunch during my flight from the ATIS dataset, a slot tagging model might identify lunch as a meal description type.",
"Given sufficient training data, recent neural-based models (Mesnil et al., 2014; Liu and Lane, 2015, 2016; Goo et al., 2018; Haihong et al., 2019; He et al., 2020) have achieved remarkably good results.",
"However, these works often suffer from poor slot tagging accuracy when rare words or OOV( out-of-vocab) words exist.",
"(Ray et al., 2018) has verified the presence of OOV words further degrades the performance of neural-based models, especially in a few-shot scenario where training data can not provide adequate contextual semantics.",
"Previous context-aware models merely focus on how to capture deep contextual semantics to aid Weiran Xu is the corresponding author.",
"in recognizing slot entities, while neglecting ontology behind the words or large-scale background knowledge.",
"Explicit lexical relations are vital to recognizing unseen words when there is not adequate training data, that is, few-shot scenarios.",
"Fig 1 gives a motivating example of slot tagging to explain the phenomenon.",
"This example suggests slot tagging requires not only understanding the complex linguistic context constraints but also reasoning explicit lexical relations via large-scale background knowledge graphs.",
"Previous state-of-the-art context-aware models (Goo et al., 2018; Haihong et al., 2019) only learn contextual information based on a multi-layer BiLSTM encoder and self-attention layer.",
"(Dugas and Nichols, 2016; Williams, 2019; Shah et al., 2019) use handcrafted lexicons (also known as gazettes or dictionaries), which are typically collections of phrases semantically related, to improve slot tagging.",
"One major limitation is that lexicons collected by domain experts are relatively small on the scale and fail to model complicated relations between words, such as relation hierarchy.",
"In this paper, we propose a novel knowledge-enhanced method for slot tagging by integrating contextual representation of input text and the large-scale lexical background knowledge, enabling the model to reason explicit lexical relations.",
"We aim to leverage both linguistic regularities covered by deep LMs and high-quality knowledge derived from curated KBs.",
"Consequently, our model could infer rare and unseen words in the test dataset by incorporating contextual semantics learned from the training dataset and lexical relations from ontology.",
"As depicted in Fig 2, given an input sequence, we first retrieve potentially relevant KB entities and encode them into distributed representations that describe global graph-structured information.",
"Then we employ a BERT (Devlin et al., 2019) encoder layer to capture context-aware representations of the sequence and attend to the KB embeddings using multi-level graph attention.",
"Finally, we integrate BERT embeddings and the desired KB embeddings to predict the slot type.",
"Our main contributions are three-fold: (1) We investigate and demonstrate the feasibility of applying lexical ontology to facilitate recognizing OOV words in the few-shot scenario.",
"To the best of our knowledge, this is the first to consider the large-scale background knowledge for enhancing context-aware slot tagging models.",
"(2) We propose a knowledge integration mechanism and use multi-level graph attention to model explicit lexical relations.",
"(3) Plenty of experiments on two benchmark datasets show that our proposed method achieves consistently better performance than various state-of-the-art context-aware methods.",
"In this work, we consider the slot tagging task in the few-shot scenario, especially for OOV tokens.",
"Given a sequence with n tokens X = { x i } ni =1 , our goal is to predict a corresponding tagging sequence Y = { y i } ni =1 .",
"This section first explains our BERT-based model and then introduces the proposed knowledge integration mechanism for inducing background commonsense.",
"The overall model architecture is illustrated in Fig 2.",
"The model architecture of BERT is a multi-layer bidirectional Transformer encoder.",
"The input representation is a concatenation of WordPiece em-Knowledge Integration Layer x 1 x 2 x n h 1 h 2 h n y 1 y 2 y n h i c 1 c 2 c m sentinel C 1 (x i ) C 2 (x i ) f i ... BiLSTM Matching Layer CRF Layer Figure 2: The overall architecture of the proposed slot tagging model.",
"beddings (Wu et al., 2016), positional embeddings, and the segment embeddings.",
"Inspired by previous RNN-based works (Mes-nil et al., 2014; Liu and Lane, 2016), we extend BERT to a slot tagging model.",
"We first feed the input sequence X = { x i } ni =1 to a pre-trained BERT encoding layer and then get final hidden states H = ( h 1 , ..., h n ) .",
"To make this procedure compatible with the original BERT tokenization, we feed each input word into a WordPiece tokenizer and use the hidden state corresponding to the first sub-word as input to the softmax classifier.",
"where h i R d 1 is the hidden state corresponding to the first sub-word of the i -th input word x i and y i is the slot label.",
"The knowledge integration mechanism aims at enhancing the deep contextual representation of input text via leveraging the large-scale lexical background knowledge, Wordnet (Miller, 1995), to recognize unseen tokens in the training set.",
"Essentially, it applies multi-level graph attention to KB embeddings with the BERT representations from the previous layer to enhance the contextual BERT embeddings with human-curated background knowledge.",
"We first introduce the KB embedding and retrieval process.",
"In this paper, we use the lexical KB, WordNet, stored as (subject, relation, object) triples, where each triple indicates a specific relation between word synsets, e.g., (state, hypernym-of, california) .",
"KB Embeddings We represent KB concepts as continuous vectors in this paper.",
"The goal is that the KB tuples ( s, r, o ) can be measured in the dense vector space based on the embeddings.",
"We adopt the BILINEAR model (Yang et al., 2014) which measures the relevance via a bilinear function: f ( s , r , o ) = s TM r o , where s , o R d 2 are the vector embeddings for s, o respectively and and M r is a relation-specific embedding matrix.",
"Then we train the embeddings using the max-margin ranking objective: (cid:88) q =( s,r,o ) T (cid:88) q (cid:48) =( s,r,o (cid:48) ) T (cid:48) max (cid:8) 0 , 1 S q + S q (cid:48) (cid:9) (2) where T denotes the set of triples in the KB and T (cid:48) denotes the negative triples that are not observed in the KB.",
"Finally we can acquire vector representations for concepts of the KB.",
"Because we mainly focus on the slot tagging task, and the datasets are relatively small for joint learning KB embeddings.",
"Furthermore, the KB contains many triplets not present in the ATIS and Snips dataset.",
"Therefore we pre-train the KB vectors and keep them fixed while training the whole model to reduce the complexity.",
"KB Concepts Retrieval We need to retrieve all the concepts or synsets relevant to the input word x i from the KB.",
"Different from (Yang and Mitchell, 2017; Yang et al., 2019), for a word x i , we first return its synsets as the first-level candidate set C 1 ( x i ) of KB concepts.",
"Then we construct the second-level candidate set C 2 ( x i ) by retrieving all the direct hyponyms of each synset in C 1 ( x i ) , as shown in the right part of Fig 2.",
"Multi-Level Graph Attention After obtaining the two-level concept candidate sets, we apply the BERT embedding h i of input token x i to attending over the multi-level memory.",
"The first-level attention, , is calculated by a bilinear operation between h i and each synset c j in the first level set C 1 ( x i ) : ij exp ( c Tj W 1 h i ) (3) Then we add an additional sentinel vector c (Yang and Mitchell, 2017) and accumulate all the embeddings as follows: s 1 i = (cid:88) j ij c j + i c (4) ATIS Snips Vocabulary Size 722 11,241 Percentage of OOV words 0.77% 5.95% Number of Slots 120 72 Training Set Size 4,478 13,084 Development Set Size 500 700 Testing Set Size 893 700 Table 1: Statistics of ATIS and Snips datasets.",
"where i is similar to ij and (cid:80) j ij + i = 1 .",
"Here s 1 i is regarded as a one-hop knowledge state vector for it only represents its directly linked synsets.",
"Therefore, we perform the second-level graph attention to encode the hyponyms of its direct synsets to enrich the information of original synsets.",
"Intuitively the second-level attention over the hyponyms can be viewed as a relational reasoning process.",
"Because once a synset belongs to an entity type, its hyponyms always conform to the same type.",
"Likewise, the second-level attention over C 2 ( x i ) is calculated: ijk exp ( c Tjk W 2 h i ) (5) where c j is the j -th synset linked to token x i and c jk the k -th hyponym of c j .",
"Then we concat multi-level knowledge-aware vector s 1 i , s 2 i , and original BERT representation h i , and output f i = [ s 1 i , s 2 i , h i ] .",
"We also add a BiLSTM matching layer which takes as input the knowledge-enriched representations f i .",
"Then we forward the hidden states to a CRF layer and predict the final results.",
"The training objective is the sum of log-likelihood of all the words.",
"Datasets To evaluate our approach, we conduct experiments on two public benchmark datasets, ATIS (Tur et al., 2010) and Snips (Coucke et al., 2018).",
"ATIS contains 4,478 utterances in the training set and 893 utterances in the test set, while Snips contains 13,084 and 700 utterances, respectively.",
"The percentage of OOV words between the training and test datasets is 0.77%(ATIS) and 5.95%(Snips).",
"Samples in Snips are from different topics, such as getting weather and booking a restaurant, resulting in a larger vocabulary.",
"By contrast, samples in ATIS are all about flight information with similar vocabularies across them.",
"Therefore, Snips is much more complicated, mainly due to data diversity and the large vocabulary.",
"The full statistics are shown in the Table 1.",
"To simulate the few-shot scenarios, we down-sample the original training sets of ATIS and Snips to different extents while keeping valid and test sets fixed.",
"We aim to evaluate the effectiveness of integrating external KB under the settings of varied sizes of training data available.",
"Evaluation We evaluate the performance of slot tagging using the F1 score metric.",
"In the experiments, we use the English uncased BERT-base model, which has 12 layers, 768 hidden states, and 12 heads.",
"The hidden size for the BiLSTM layer is set to 128.",
"Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate of 1e-5.",
"The dropout probability is 0.1, and the batch size is 64.",
"We finetune all hyperparameters on the valid set.",
"Attention-Based (Liu and Lane, 2016) uses an RNN layer and a self-attention layer to encode the input text.",
"Slot-Gated (Goo et al., 2018), which has two variants, Full Atten and Intent Atten , applies the information of intent detection task to enhance slot tagging.",
"SF-ID Network (Haihong et al., 2019) designs a multiple iteration mechanism to construct bi-directional interrelated connections between slot tagging and intent detection.",
"Most of the previous methods consider improving the performance of slot tagging by joint learning with intent detection.",
"However, the effectiveness of background knowledge for slot tagging is still unexplored.",
"Consequently, our proposed approach intends to integrate the large-scale lexical background knowledge, WordNet, to enhance the deep contextual representation of input text.",
"We hope to further improve the performance of slot tagging, especially in the few-shot scenario where there is no plenty of training data available.",
"1 3.3 Overall Results We display the experiment results in Table 2, where we choose two model architectures RNN and BERT as the encoding layer.",
"Table 2 shows that our proposed knowledge integration mechanism significantly outperforms the baselines for both datasets, demonstrating that explicitly integrating the large-scale background knowledge and contextual representation can benefit slot tagging effectively.",
"Moreover, the improvement of 0.72% over strong baseline BERT on Snips is considerably higher than 0.27% on ATIS.",
"Considering the distinct complexity of the two datasets, the probable reason is that a simpler slot tagging task, such as ATIS, does not require much background knowledge to achieve good results.",
"Because the vocabulary of ATIS is extremely smaller than that of Snips, therefore the context-aware models are capable of providing enough cues for recognizing rare or OOV words.",
"Hence, our method makes a notable difference in a scenario where samples are linguistically diverse, and large vocab exists.",
"The results also demonstrate that incorporating external knowledge will not bring in much noise since we use a knowledge sentinel for the better tradeoff between the impact of background knowledge and information from the context.",
"1 We do not choose (Williams, 2019) as a baseline since only performs experiments on private industrial datasets and does not open source.",
"We can hardly figure out the details of manually collecting lexicons from the dataset.",
"RNN-based models are 95.17(+0.46) on ATIS and 89.30(+1.51) on Snips, where the scores in the brackets are the absolute improvements arisen by KB.",
"Compared to the BERT-based models, 95.98(+0.27) on ATIS and 95.17(+0.72) on Snips, the RNN-based model achieves more significant improvements in BERT-based models.",
"We believe BERT can effectively transfer prior linguistic context constraints, so that background knowledge benefits RNN-based models more.",
"BERT does improve the model's ability to solve the OOV problem since it has learned linguistic knowledge from the large corpus.",
"However, our method focuses more on the effect of using human-curated structured background knowledge and further enhances BERT in a distinct way.",
"Fig 3 shows the relative improvement percentages on ATIS and Snips using different sizes of training data.",
"Results substantiate knowledge integration better facilitates few-shot slot tagging.",
"This is because traditional context-aware models can not learn enough contextual semantics well while only given several samples.",
"Explicit lexical relations become essentially necessary when there is not adequate training data, especially for rare words or OOV words.",
"Background KB enables the model to reason explicit lexical relations and helps recognize rare and unseen words.",
"Meanwhile, incorporating background knowledge can also enhance the original representation of BERT, which can provide direct lexical relations.",
"To study the effect of each component of our method, we conduct ablation analysis under the 10% training data setting (Table 3).",
"We can see that knowledge integration is crucial to the improvements.",
"Besides, the first-level graph attention acquires better performance gain than the second-level attention.",
"We assume that directly linked synsets are more significant than the hyponyms.",
"The matching layer and CRF also play a role.",
"The reason why the RNN matching layer matters is partly to build explicit interactions between knowledge vectors and context vectors.",
"We present a novel knowledge integration mechanism of incorporating background KB and deep contextual representations to facilitate the few-shot slot tagging task.",
"Experiments confirm the effectiveness of modeling explicit lexical relations, which has not yet been explored by previous works.",
"Moreover, we find that our method delivers more benefits to data scarcity scenarios.",
"We hope to provide new guidance for the future slot tagging work.",
"The authors would like to thank the reviewers for their valuable comments.",
"This work was partially supported by National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd, MoE-CMCC Artif-ical Intelligence Project No.",
"MCM20190701."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"other",
"other",
"other"
] |
[
"Many natural language questions require qualitative, quantitative or logical comparisons between two entities or events.",
"This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions by integrating logic rules and neural models.",
"Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model.",
"Improving the global consistency of predictions, our approach achieves large improvements over previous methods in a variety of question answering (QA) tasks including multiple-choice qualitative reasoning, cause-effect reasoning, and extractive machine reading comprehension.",
"In particular, our method significantly improves the performance of RoBERTa-based models by 1-5% across datasets.",
"We advance state of the art by around 5-8% on WIQA and QuaRel and reduce consistency violations by 58% on HotpotQA.",
"We further demonstrate that our approach can learn effectively from limited data.",
"1 1 Introduction Comparison-type questions (Tandon et al., 2019; Tafjord et al., 2019; Yang et al., 2018) ask about relationships between properties of entities or events such as cause-effect, qualitative or quantitative reasoning.",
"To create comparison questions that require inferential knowledge and reasoning ability, annotators need to understand context presented in multiple paragraphs or carefully ground a question to the given situation.",
"This makes it challenging to annotate a large number of comparison questions.",
"Most current datasets on comparison questions are much smaller than standard machine reading comprehension (MRC) datasets (Rajpurkar 1 Our code and data is available at https://github. com/AkariAsai/logic_guided_qa . Q: The ceramic vase was less flexible than the plastic ball so it was A: more breakable Q: The ceramic vase was more flexible than the plastic ball so it was A: less breakable Q: If it is silent, does the outer ear collect less sound waves? A: more [positive causal relationship] Q: If the outer ear collect less sound waves, is less sound being detected? A : more [positive causal relationship] Q: If it is silent, is less sound being detected? A: more [positive causal relationship] RoBERTa more breakable more breakable RoBERTa more more less Conflict Conflict Figure 1: Inconsistent predictions by RoBERTa. Top row shows an example of symmetric inconsistency and the second row shows an example of transitive inconsistency. The examples are partially modified. et al., 2016; Joshi et al., 2017).",
"This poses new challenges to standard models, which are known to exploit statistical patterns or annotation artifacts in these datasets (Sugawara et al., 2018; Min et al., 2019a).",
"Importantly, state-of-the-art models show inconsistent comparison predictions as shown in Figure 1.",
"Improving the consistency of predictions has been previously studied in natural language inference (NLI) tasks (Minervini and Riedel, 2018; Li et al., 2019), but has not been addressed in QA.",
"In this paper, we address the task of producing globally consistent and accurate predictions for comparison questions leveraging logical and symbolic knowledge for data augmentation and training regularization.",
"Our data augmentation uses a set of logical and linguistic knowledge to develop additional consistent labeled training data.",
"Subsequently, our method uses symbolic logic to incorporate consistency regularization for additional supervision signal beyond inductive bias given by data augmentation.",
"Our method generalizes previous consistency-promoting methods for NLI tasks (Minervini and Riedel, 2018; Li et al., 2019) to adapt to substantially different question formats.",
"Our experiments show significant improvement over the state of the art on a variety of QA tasks: a classification-based causal reasoning QA, a multiple choice QA for qualitative reasoning and an extractive MRC task with comparisons between entities.",
"Notably, our data augmentation and consistency constrained training regularization improves performance of RoBERTa-based models (Liu et al., 2019) by 1.0%, 5.0% and 2.5% on WIQA, QuaRel and HotpotQA.",
"Our approach advances the state-of-the-art results on WIQA and QuaRel with 4.7 and 8.4% absolute accuracy improvement, respectively, reducing inconsistent predictions.",
"We further demonstrate that our approach can learn effectively from limited labeled data: given only 20% of the original labeled data, our method achieves performance on par with a competitive baseline learned with the full labeled data.",
"Data augmentation has been explored in a variety of tasks and domains (Krizhevsky et al., 2009; Cubuk et al., 2019; Park et al., 2019).",
"In NLP, using back-translation (Yu et al., 2018) or dictionary based word replacement (Zhang et al., 2015) has been studied.",
"Most relevant to our work, Kang et al. (2018) study NLI-specific logic and knowledge-based data augmentation.",
"Concurrent to our work, Gokhale et al. (2020) study visual QA models' ability to answer logically composed questions, and show the effectiveness of logic-guided data augmentation.",
"Our data augmentation does not rely on task-specific assumptions, and can be adapted to different formats of QA task.",
"We further leverage consistency-promoting regularization, which gives improvements in accuracy and consistency.",
"Improving prediction consistency via training regularization has been studied in NLI tasks.",
"Minervini and Riedel (2018) present model-dependent first-order logic guided adversarial example generation and regularization.",
"Li et al. (2019) introduce consistency-based regularization incorporating the first-order logic rules.",
"Previous approach is model-dependent or relies on NLI-specific rules, while our method is model-agnostic and is more generally applicable by combining it with data augmentation.",
"Regularizing loss to penalize violations of structural constraints in models' output has been also studied in previous work on constraint satisfaction in structured learning (Lee et al., 2019; Ganchev et al., 2010).",
"Our work regularizes models to produce globally consistent predictions among augmented data following logical constraints, while those studies incorporates structured prediction models following linguistics rules.",
"We present the components of our QA method: first-order logic guided data augmentation (Sec-tion 3.1 and Section 3.2), and consistency-based regularization (Section 3.3).",
"For globally consistent predictions in QA, we require responses to follow two important general logical rules: symmetric consistency and transitive consistency , which are illustrated in Figure 1 and are formally described below.",
"Let q, p, a be a question, a paragraph and an answer predicted by a model.",
"A is a set of answer candidates.",
"Each element of A can be a span in p , a class category, or an arbitrary answer choice.",
"X = { q, p, a } represents a logic atom.",
"Symmetric consistency In a comparison question, small surface variations such as replacing words with their antonyms can reverse the answer, while keeping the overall semantics of the question as before.",
"We define symmetry of questions in the context of QA as follows: ( q, p, a ) ( q sym , p, a sym ) , where q and q sym are antonyms of each other, and a sym is the opposite of the ground-truth answer a in A .",
"For example, the two questions in the first row of Figure 1 are symmetric pairs.",
"We define the symmetric consistency of predictions in QA as the following logic rule: ( q, p, a ) ( q sym , p, a sym ) , (1) which indicates a system should predict a sym given ( q sym , p ) , if it predicts a for ( q, p ) .",
"Transitive consistency.",
"Transitive inference between three predicates A, B, C is represented as: A B B C then A C (Gazes et al., 2012).",
"In the context of QA, the transitive examples are mainly for causal reasoning questions that inquire about the effect e given the cause c .",
"The second row of Figure 1 shows an example where transitive consistency is violated.",
"For two questions q 1 and q 2 in which the effect of q 1 ( = e 1 ) is equal to the cause of q 2 ( = c 2 ), we define the transitive consistency of predictions as follows: ( q 1 , p, a 1 ) ( q 2 , p, a 2 ) ( q trans , p, a trans ) .",
"Given a set of training examples X in the form of ( q, p, a ) , we automatically generate additional examples X aug = { q aug , p, a aug } using symmetry and transitivity logical rules.",
"The goal is to augment the training data so that symmetric and transitive examples are observed during training.",
"We provide some augmented examples in Table 1.",
"Augmenting symmetric examples To create a symmetric question, we convert a question into an opposite one using the following operations:",
"(a) replace words with their antonyms,",
"(b) add, or",
"(c) remove words.",
"For",
"(a), we select top frequent adjectives or verbs with polarity (e.g., smaller, increases ) from training corpora, and expert annotators write antonyms for each of the frequent words (we denote this small dictionary as D ).",
"More details can be seen in Appendix A. For",
"(b) and",
"(c), we add negation words or remove negation words (e.g., not ).",
"For all of the questions in training data, if a question includes a word in D for the operation",
"(a), or matches a template (e.g., which * is which * is not ) for operations",
"(b) and",
"(c), we apply the operation to generate q sym .",
"2 We obtain a sym by re-labeling the answer a to its opposite answer choice in A (see Appendix B).",
"Augmenting transitive examples We first find a pair of two cause-effect questions X 1 = ( q 1 , p, a 1 ) and X 2 = ( q 2 , p, a 2 ) , whose q 1 and q 2 consist of 2 We observe that",
"(b)(c) are less effective than",
"(a) in WIQA or QuaRel, while especially",
"(b) contributes to the performance improvements on HotpotQA as much as",
"(a) does.",
"( c 1 , e 1 ) and ( c 2 , e 2 ) , where e 1 = c 2 holds.",
"When a 1 is a positive causal relationship , we create a new example X trans = ( q 3 , p, a 2 ) for q 3 = ( c 1 , e 2 ) .",
"Sampling augmented data Adding all consistent examples may change the data distribution from the original one, which may lead to a deterioration in performance (Xie et al., 2019).",
"One can select the data based on a model's prediction inconsistencies (Minervini and Riedel, 2018) or randomly sample at each epoch (Kang et al., 2018).",
"In this work, we randomly sample augmented data at the beginning of training, and use the same examples for all epochs during training.",
"Despite its simplicity, this yields competitive or even better performance than other sampling strategies.",
"3 3.3 Logic-guided Consistency Regularization We regularize the learning objective (task loss, L task ) with a regularization term that promotes consistency of predictions (consistency loss, L cons ).",
"The first term L task penalizes making incorrect predictions.",
"The second term L cons 4 penalizes making predictions that violate symmetric and transitive logical rules as follows: L cons = sym L sym + trans L trans , (4) where sym and trans are weighting scalars to balance the two consistency-promoting objectives.",
"3 We do not add X aug if the same pair has already exist.",
"4 We mask the L cons for the examples without symmetric or transitive consistent examples.",
"Previous studies focusing on NLI consistency (Li et al., 2019) calculate the prediction inconsistency between a pair of examples by swapping the premise and the hypothesis, which cannot be directly applied to QA tasks.",
"Instead, our method leverages consistency with data augmentation to create paired examples based on general logic rules.",
"This enables the application of consistency regularization to a variety of QA tasks.",
"Inconsistency losses The loss computes the dissimilarity between the predicted probability for the original labeled answer and the one for the augmented data defined as follows: L sym = | log p ( a | q, p ) log p ( a aug | q aug , p ) | .",
"Likewise, for transitive loss, we use absolute loss with the product T-norm which projects a logical conjunction operation ( q 1 , p, a 1 ) ( q 2 , c, a 2 ) to a product of probabilities of two operations, p ( a 1 | q 1 , p ) p ( a 2 | q 2 , p ) , following Li et al. (2019).",
"We calculate a transitive consistency loss as: L trans = | log p ( a 1 | q 1 , p ) + log p ( a 2 | q 2 , p ) log p ( a trans | q trans , p ) | .",
"Annealing The model's predictions may not be accurate enough at the beginning of training for consistency regularization to be effective.",
"We perform annealing (Kirkpatrick et al., 1983; Li et al., 2019; Du et al., 2019).",
"We first set { sym,trans } = 0 in Eq.",
"(4) and train a model for epochs, and then train it with the full objective.",
"Datasets and experimental settings We experiment on three QA datasets: WIQA (Tandon et al., 2019), QuaRel (Tafjord et al., 2019) and HotpotQA (oracle, comparison questions 5 ) (Yang et al., 2018).",
"As shown in Table 1, these three datasets are substantially different from each other in terms of required reasoning ability and task format.",
"In WIQA, there are 3,238 symmetric examples and 4,287 transitive examples, while 50,732 symmetric pairs and 1,609 transitive triples are missed from the original training data.",
"HotpotQA and QuaRel do not have any training pairs requiring consistency.",
"Our method randomly samples 50, 80, 90% of the augmented data for WIQA, QuaRel and HotpotQA, resulting in 24,715/836/3,538 newly created training examples for those datasets, respectively.",
"We use standard F1 and EM scores for performance evaluation on HotpotQA and use accuracy for WIQA and QuaRel.",
"We report a violation of consistency following Minervini and Riedel (2018) to evaluate the effectiveness of our approach for improving prediction consistencies.",
"We compute the violation of consistency metric v as the percentage of examples that do not agree with symmetric and transitive logical rules.",
"More model and experimental details are in Appendix.",
"Main Results Table 2 demonstrates that our methods ( DA and DA + Reg ) constantly give 1 to 5 points improvements over the state-of-the-art RoBERTa QA's performance on all three of the datasets, advancing the state-of-the-art scores on WIQA and QuaRel by 4.7% and 8.4%, respectively.",
"On all three datasets, our method signifi-WIQA Input RoBERTa DA DA+Reg p Sound enters the ears of a person.",
"cantly reduces the inconsistencies in predictions, demonstrating the effects of both data augmentation and regularization components.",
"Notably on WIQA, RoBERTa shows violation of consistency in 13.9% of the symmetric examples and 10.0% of the transitive examples.",
"Our approach reduces the violations of symmetric and transitive consistencies to 8.3% and 2.5%, respectively.",
"Results with limited training data Table 2 also shows that our approach is especially effective under the scarce training data setting: when only 20% of labeled data is available, our DA and Reg together gives more than 12% and 14% absolute accuracy improvements over the RoBERTa baselines on WIQA and QuaRel, respectively.",
"Ablation study We analyze the effectiveness of each component on Table 3.",
"DA and Reg each improves the baselines, and the combination performs the best on WIQA and QuaRel.",
"DA (standard) follows a previous standard data augmentation technique that paraphrases words (verbs and adjectives) using linguistic knowledge, namely Word-Net (Miller, 1995), and does not incorporate logical rules.",
"Importantly, DA (standard) does not give notable improvement over the baseline model both in accuracy and consistency, which suggests that logic-guided augmentation gives additional inductive bias for consistent QA beyond amplifying the number of train data.",
"As WIQA consists of some transitive or symmetric examples, we also report the performance with Reg only on WIQA.",
"The performance improvements is smaller, demonstrating the importance of combining with DA .",
"Qualitative Analysis Table 4 shows qualitative examples, comparing our method with RoBERTa baseline.",
"Our qualitative analysis shows that DA+Reg reduces the confusion between opposite choices, and assigns larger probabilities to the ground-truth labels for the questions where DA shows relatively small probability differences.",
"On HotpotQA, the baseline model shows large consistency violations as shown in Table 2.",
"The HotpotQA example in Table 4 shows that RoBERTa selects the same answer to both q and q sym , while DA answers correctly to both questions, demonstrating its robustness to surface variations.",
"We hypothesize that the baseline model exploits statistical pattern, or dataset bias presented in questions and that our method reduces the model's tendency to exploit those spurious statistical patterns (He et al., 2019; Elkahky et al., 2018), which leads to large improvements in consistency.",
"We introduce a logic guided data augmentation and consistency-based regularization framework for accurate and globally consistent QA, especially under limited training data setting.",
"Our approach significantly improves the state-of-the-art models across three substantially different QA datasets.",
"Notably, our approach advances the state-of-the-art on QuaRel and WIQA, two standard benchmarks requiring rich logical and language understanding.",
"We further show that our approach can effectively learn from extremely limited training data.",
"This research was supported by ONR N00014-18-1-2826, DARPA N66001-19-2-403, NSF (IIS1616112, IIS1252835), Allen Distinguished Investigator Award, Sloan Fellowship, and The Naka-jima Foundation Fellowship.",
"We thank Antoine Bosselut, Tim Dettmers, Rik Koncel-Kedziorski, Sewon Min, Keisuke Sakaguchi, David Wadden, Yizhong Wang, the members of UW NLP group and AI2, and the anonymous reviewers for their insightful feedback."
] | [
"abstain",
"result",
"objective",
"result",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"result",
"result",
"result",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"other"
] |
[
"Abstract Graph convolutional networks (GCNs) have been applied recently to text classification and produced an excellent performance.",
"However, existing GCN-based methods do not assume an explicit latent semantic structure of documents, making learned representations less effective and difficult to interpret.",
"They are also transductive in nature, thus cannot handle out-of-graph documents.",
"To address these issues, we propose a novel model named inductive Topic Variational Graph Auto-Encoder (T-VGAE), which incorporates a topic model into variational graph-auto-encoder (VGAE) to capture the hidden semantic information between documents and words.",
"T-VGAE inherits the interpretability of the topic model and the efficient information propagation mechanism of VGAE.",
"It learns probabilistic representations of words and documents by jointly encoding and reconstructing the global word-level graph and bipartite graphs of documents, where each document is considered individually and decoupled from the global correlation graph so as to enable inductive learning.",
"Our experiments on several benchmark datasets show that our method outperforms the existing competitive models on supervised and semi-supervised text classification, as well as unsupervised text representation learning.",
"In addition, it has higher interpretability and is able to deal with unseen documents.",
"Recently, graph convolutional networks (GCNs)(Kipf and Welling, 2017; Velickovic et al., 2018) have been successfully applied to text classification tasks (Peng et al., 2018a; Yao",
"et al., 2019; Liu et al., 2020; Wang et al., 2020).",
"In addition to the local information captured by CNN or RNN, GCNs learn word and document representations by taking into account the global correlation information embedded in the corpus-level graph, where words and documents are nodes connected by indexing or citation relations.",
"However, the hidden semantic structures , such as latent topics in documents (Blei et al., 2003; Yan et al., 2013; Peng et al., 2018b), is still ignored by most of these methods (Yao et al., 2019; Huang et al., 2019; Liu et al., 2020; Zhang et al., 2020), which can improve the text representation and provide extra interpretability (in which the probabilistic generative process and topics make more sense to humans compared to neural networks, i.e. topics can be visually represented by top-10 or 20 most probable word clusters).",
"Although few studies such as (Wang et al., 2020) have proposed incorporating a topic structure into GCNs, the topics are extracted in advance from the set of documents, independently from the graph and information propagation among documents and words.",
"We believe that the topics should be determined in accordance with the connections in the graph.",
"For example, the fact that two words are connected provides extra information that these words are on a similar topic(s).",
"Moreover, existing GCN-based methods are limited by their transductive learning nature, i.e. a document can be classified only if it is already seen in the training phase (Wang et al., 2020; Yao et al., 2019; Liu et al., 2020).",
"The lack of inductive learning ability for unseen documents is a critical issue in practical text classification applications, where we have to deal with new documents.",
"It is Table 1: Comparison with related work.",
"intuitive to decouple documents with the global graph and treat each document as an independent graph (Huang et al., 2019; Zhang et al., 2020; Ding et al., 2020; Zhang and Zhang, 2020; Xie et al., 2021).",
"However, no attempt has been made to address both aforementioned issues.",
"To address these issues, we incorporate the topic model into variational graph auto-encoder (VGAE), and propose a novel framework named inductive Topic Variational Graph Auto-Encoder (T-VGAE).",
"T-VGAE first learns to represent the words in a latent topic space by embedding and reconstructing the word correlation graph with the GCN probabilistic encoder and probabilistic decoder.",
"Take the learned word representations as input, a GCN-based message passing probabilistic encoder is adopted to generate document representations via information propagation between words and documents in the bipartite graph.",
"We compare our model with existing related work in Table 1. Different from previous approaches, our method uni-fies topic mining and graph embedding learning with VGAE, thus can fully embed the relations between documents and words into dynamic topics and provide interpretable topic structures into representations.",
"Besides, our model builds a document-independent word correlation graph and a word-document bipartite graph for each document instead of a corpus-level graph to enable inductive learning.",
"1. We propose a novel model T-VGAE based on topic models and VGAE, which incorporates latent topic structures for inductively document and word representation learning.",
"This makes the model more effective and interpretable.",
"2. we propose to utilize the auto-encoding variational Bayes (AEVB) method to make efficient black-box inference of our model.",
"3. Experimental results on benchmark datasets demonstrate that our method outperforms the existing competitive GCN-based methods on supervised and semi-supervised text classification tasks.",
"It also outperforms topic models on unsupervised text representation learning.",
"Recently, GCNs have been applied to various NLP tasks (Zhang et al., 2018; Vashishth et al., 2019).",
"For example, TextGCN (Yao et al., 2019) was proposed for text classification, which enriches the corpus-level graph with the global semantic information to learn word and document embeddings.",
"Inspired by it, Liu et al. (Liu et al., 2020) further considered syntactic and sequential contextual information and proposed TensorGCN.",
"However, none of them utilized the latent semantic structures in the documents to enhance text classification.",
"To address the issue, (Wang et al., 2020) proposed dynamic HTG (DHTG), in an attempt to integrate the topic model into graph construction.",
"DHTG learned latent topics from the document-word correlation information (similar to traditional topic models), which will be used for GCN based document embedding.",
"However, the topics in DHTG were learned independently from the word relation graph and the information propagation process in the graph, in which word relations are ignored.",
"Moreover, the existing GCN-based methods also require a pre-defined graph with all the documents and cannot handle out-of-graph documents, thus limiting their practical applicability.",
"To deal with the inductive learning problem, (Huang et al., 2019; Zhang et al., 2020; Ding et al., 2020; Zhang and Zhang, 2020) proposed to consider each document as an independent graph for text classification.",
"However, the latent semantic structure and interpretability are still ignored in these methods.",
"Different from previous approaches, we aim to deal with both issues of dynamic topic structure and inductive learning.",
"We propose to combine the topic model and graph based information propagation in a unified framework with VGAE to learn interpretable representations for words and documents.",
"There are also studies trying to enhance topic models with efficient message passing in the graph data structure of GCNs.",
"GraphBTM (Zhu et al., 2018) proposed to enrich the biterm topic model (BTM) with the word co-occurrence graph encoded with GCNs.",
"To deal with data streams, (Van Linh et al., 2020) proposed graph convolutional topic model (GCTM), which introduces a knowledge graph modeled with GCNs to the topic model.",
"(Yang et al., 2020) presented Graph Attention TOpic Network (GATON) for correlated topic modeling.",
"It tackles the overfitting issue in topic modeling with a generative stochastic block model (SBM) and GCNs.",
"In contrast with these studies, we focus on integrating the topic model into GCN-based VGAE for supervised learning tasks and derive word-topic and document-topic distributions simultaneously.",
"Variational Graph Auto-encoders (VGAEs) have been widely used in graph representation learning and graph generation.",
"The earliest study (Kipf and Welling, 2016) proposed VGAE method, which extended variational auto-encoder (VAE) on graph structure data for learning graph embedding.",
"Based on VGAE, (Pan et al., 2018) introduced an adversarial training to regularize the latent variables and further proposed adversarially regularized variational graph autoencoder (ARVGA).",
"(Hasanzadeh et al., 2019) incorporated semi-implicit hierarchical variational distribution into VGAE (SIG-VAE) to improve the representation power of node embeddings.",
"(Grover et al., 2019) proposed Graphite model that integrated an iterative graph refinement strategy into VGAE, inspired by low-rank approximations.",
"However, to the best of our knowledge, our model is the first effort to apply VGAE to unify the topic learning and graph embedding for text classification, thus can provide better interpretability and overall performance.",
"Formally, we denote a corpus as C , which contains D documents and the ground truth labels Y 2 c = { 1 , ..., M } of documents, where M is the total number of classes in the corpus.",
"Each document t 2 C is represented by a sequence of words t = { w 1 , ..., w n t } ( w i 2 v ) , where n t is the number of words in document t and v is the vocabulary of size V .",
"From the whole corpus, we build a word correlation graph G = ( v, e ) containing word nodes v and edges e , to capture the word co-occurrence information.",
"Similar to previous work (Yao et al., 2019), we utilize the positive point mutual information (PPMI) to calculate the correlation between two word nodes.",
"Formally, for two words ( w i , w j ) , we have PPMI ( w i , w j ) = max( log p ( w i , w j ) p ( w i ) p ( w j ) , 0) (1) where p ( w i , w j ) is the probability that ( w i , w j ) co-occur in the sliding window and p ( w i ) , p ( w j ) are the probabilities of words w i and w j in the sliding window.",
"They can be empirically estimated as P ( w i , w j ) = n ( w i ,w j ) n and P ( w i ) = n ( w i ) n , where n ( w i , w j ) is the number of co-occurrences of ( w i , w j ) in the sliding windows, n ( w i ) is the number of occurrences of w i in the sliding windows and n the total number of sliding windows.",
"For two word nodes ( w i , w j ) , the weight of the edge between them can be defined as: A vi,j = ( PPMI ( w i , w j ) , i 6 = j 1 , i = j (2) where A v 2 RV V is the adjacency matrix which represents the word correlation graph structure G .",
"Different from the existing studies (Yao et al., 2019; Liu et al., 2020; Wang et al., 2020) that consider all documents and words in a heterogeneous graph, we propose to build a separate graph for each document to enable inductive learning.",
"Typically, documents can be represented by the document-word matrix A d 2 RD V , in which the row A di = { x i 1 , ..., x iv } 2 R 1 V represents the document i , and x ij is the TF-IDF weight of the word j in document i .",
"The decoupling of documents from a global pre-defined graph enables our method to handle new documents.",
"Based on A v and A d , we propose the T-VGAE model, as shown in Figure 1. It is a deep generative model with structured latent variables based on GCNs.",
"We consider that the word co-occurrence graph A and the bipartite graph A dt of each document t are generated from the random process with two latent",
"variables z v 2 RV K and z dt 2 R 1 K , where K denotes the number of latent topics.",
"The generating process for A v , A d and Y are as follows (see Figure",
"2(a)): v A v z Y d A d z (cid:84) DV",
"1. For each word i in vocabulary v , draw the latent variable z vi from the prior p ( z vi ) 2. For each observed edge A vi,j between words i and j , draw A vi,j from conditional distribution p ( A vi,j | z vi , z vj ) 3. For each document t in corpus C :",
"(a) Draw the latent variable z dt from the prior p ( z dt )",
"(b) Draw A dt from the conditional distribution p ( A dt | z dt , z v )",
"(c) Draw Y t from the conditional distribution p ( Y t | z dt ) where is the set of parameters for all prior distributions.",
"Here, we consider the centered isotropic multivariate Gaussian priors p ( z v ) = Q Vi =1 p ( z vi ) = Q Vi =1 N ( z vi | 0 , I ) and p ( z d ) = Q Dt =1 p ( z dt ) = Q Dt =1 N ( z dt | 0 , I ) .",
"Notice that the priors p ( z v ) and p ( z d ) are parameter free in this case.",
"According to the above generative process, we can maximize the marginal likelihood of observed graph A v , A d and Y to learn parameters and latent variables as follows: p ( A v , A d , Y | Z v , Z d , X v ) = DY t =1 p ( Y t | z dt ) p ( A dt | z dt , z v ) p ( z dt ) VY i =1 VY j =1 p ( A vi,j | z vi ( z vj ) T ) p ( z v ) (3) Because the inference of true posterior of latent variable z v and z d is intractable, we further introduce the variational posterior distribution q \u0000 ( z v , z d | A d , A v , X v ) with parameters \u0000 to approximate the true posterior p ( z v , z d ) = p ( z v ) p ( z d ) .",
"We make the structured mean-field (SMF) assumption q \u0000 ( z v , z d | A d , A v , X v ) = q \u0000 ( z v | A v , X v ) q \u0000 ( z d | A d , z v ) , where X v 2 RV M are the feature vectors of words and M is the dimension of the feature vectors (see Figure",
"2(b)).",
"We can yield the following tractable stochastic evidence lower bound (ELBO): L ( , \u0000 ; A v , A d , X v ) = E q \u0000 ( z v | A v ,X v ) [log p ( A v | z v )] + E q \u0000 ( z d | A d ,z v ) [log p ( A d | z d , z v )] + E q \u0000 ( z d | A d ,z v ) [log p ( Y | z d )] \u0000 KL [ q \u0000 ( z v | A v , X v ) || p ( z v )] \u0000 KL [ q \u0000 ( z d | A d , z v ) || p ( z d )] (4) where the first three terms are the reconstruction terms, and the latter two terms are the Kullback-Leibler (KL) divergences of variational posterior distributions and true posterior distributions.",
"Using auto-encoding variational Bayes (AVB) approach (Kingma and Welling, 2013), we are able to parametrize the variational posteriors q \u0000 and true posteriors p with the GCN-based probabilistic encoder and decoder, to conduct neural variational inference (NVI).",
"For the latent variable z v , we make the mean-field approximation that: q \u0000 ( z v | A v , X v ) = Q Vi =1 q \u0000 ( z vi | A v , X v ) .",
"For simplify the model inference, we consider the multivariate normal variational posterior with a diagonal covariance matrix as previous neural topic models (Miao et al., 2016; Bai et al., 2018)that: q \u0000 ( z vi | A v , X v ) = N ( z vi | vi , diag (( \u0000 vi ) 2 )) , where vi , ( \u0000 vi ) 2 are the mean and diagonal covariance of the multivariate Gaussian distribution.",
"We use the graph convolutional neural network to parametrize the above posterior and inference z v with the input graph A v and feature vectors X v : ( H v ) l +1 = ( A v ( H v ) l ( W v ) l ) v = ( A v ( H v ) l +1 ( W v ) l +1 ) log \u0000 v = ( A v ( H v ) l +1 ( W v \u0000 ) l +1 ) (5) where v , \u0000 v are matrices of vi , \u0000 vi , l is the number of GCN layers, we use one layer in our experiments, { W v , W v \u0000 } 2 \u0000 are weight matrices, is the ReLU, A v = ( D v ) \u0000 12 A v ( D v ) \u0000 12 is the symmetrically normalized adjacent matrix of the word graph, and D v denotes the corresponding degree matrix.",
"The input of GCN is the feature vectors X v which is initialized as the identity matrix I , i.e., ( H v ) 0 = X v = I , same as in (Yao et al., 2019).",
"Then, z v can be naturally sampled as follows according to the reparameterization trick (Kingma and Welling, 2013): z v = v + \u0000 v \u0000 , where \u0000 is the element-wise product, and N (0 , I ) is the noise variable.",
"Through the message propagation of the GCN layer, words that co-occur frequently tend to achieve similar representations in the latent topic space.",
"Similar to z v , we also have: q \u0000 ( z d | A d , z v ) = DY t =1 q \u0000 ( z dt | A dt , z v ) q \u0000 ( z dt | A dt , z v ) = N ( z dt | dt , diag (( \u0000 dt ) 2 )) (6) where dt , ( \u0000 dt ) 2 are the mean and diagonal covariance of the multivariate Gaussian distribution.",
"Although there are two types of nodes word and document in the bipartite graph A d , we mainly focus on learning representations of document nodes based on the representations of word nodes learned from A v in this step.",
"Therefore, we propose the unidirectional message passing (UDMP) process on A d , which propagates the information from word nodes to documents: H dt = ( P Vi =1 A dti z vi W d ) where is the Relu activation function, W d is the weight matrix.",
"Then, we parametrize the posterior and inference z d based on UDMP: d = UDMP ( A d , z v , W d ) log \u0000 d = UDMP ( A d , z v , W d \u0000 ) (7) where d , \u0000 d are matrices of dt , ( \u0000 dt ) 2 , UDMP is the message passing as in Equation 4, W d , W d \u0000 are weight matrices.",
"Similarly, we sample z d as follows z d = d + \u0000 d \u0000 \" , where \" N (0 , I ) is the noise variable.",
"Through the propagation mechanism of UDMP, documents which share similar words tend to yield similar representations in the latent topic space.",
"Although T-VGAE can learn topics z v and document-topic representations z d as in traditional topic models, we do not focus on proposing a novel topic model, but aim to combine the topic model with VGAE, to improve word and document representations with latent topic semantic and provide probabilistic interpretability.",
"Moreover, rather than learning topics and document-topic representations from the document-word feature A d as LDA topic models (Blei et al., 2003), we propose to learn word-topic representations z v from word co-occurrence matrix A v , and then infer document-topic representations z d based on the document-word feature A d and word-topic representations z v , which is similar to the Biterm topic model (Yan et al., 2013).",
"With the learned z v and z d , ideally, the observed graph A v and A d can be reconstructed through a decoding process.",
"For A v , we assume P ( A v | z v ) conforms to a multivariate Gaussian distribution, whose mean parameters are generated from the inner product of the latent variable z v : P ( A v | z v ) = VY i =1 p ( A vi | z v ) p ( A vi | z v ) = VY i =1 N ( A vi | ( z vi ( z v ) T ) , I ) (8) where is the nonlinear activation function.",
"Similarly, the inner product between z v and z d is used to generate A d , which is sampled from the multivariate Gaussian distribution: P ( A d | z d , z v ) = DY i =1 p ( A di | z di , z v ) P ( A di | z di , z v ) = DY i =1 N ( A di | ( z di ( z v ) T ) , I ) (9) For categorical labels Y , we assume p ( Y | z d ) follows a multinomial distribution P ( Y | z d ) = Mul ( Y | f y ( z d )) , whose label probability vectors are generated from z d , where f y is the multi-layer neural network.",
"For each document t , the prediction is given by y t = argmax y 2 c P ( y | f y ( z dt )) .",
"We can rewrite Equation 4 to yield the final variational objective function:",
"L ( , \u0000 ) VX i =1 VX j =1 log p ( A vi,j | z vi , z vj ) + DX t =1 log p ( A dt | z dt , z v ) + log p ( Y t | z dt ) \u0000 KL [ q \u0000 ( z v ) || p ( z v )] \u0000 KL [ q \u0000 ( z d ) || p ( z d )] (10)",
"with following reconstruction terms and KL divergences:",
"log p ( A vi | z v ) || A vi \u0000 ( z vi ( z v ) T ) || 2 log p ( A dt | z dt , z v ) || A dt \u0000 ( z dt ( z v ) T ) || 2 log p ( Y t | z dt ) Y t log y t + (1 \u0000 Y t ) log(1 \u0000 y t ) KL [ q \u0000 ( z vi ) || p ( z vi )] 1 2 VX j =1 (( vij ) 2 +( \u0000 vij ) 2 \u0000 (1 + log( \u0000 vij ) 2 )) KL [ q \u0000 ( z dt ) || p ( z dt )] 1 2 VX j =1 (( d tj ) 2 +( \u0000 d tj ) 2 \u0000 (1 + log( \u0000 d tj ) 2 ))",
"Through maximizing the objective with stochastic gradient descent, we jointly learn the latent word and document representations, which can ef-ficiently reconstruct observed graphs and predict ground truth labels.",
"In this section, to evaluate the effectiveness of our proposed T-VGAE, experiments are conducted on both supervised and semi-supervised text classification tasks, as well as unsupervised topic modeling tasks.",
"We conduct experiments on five commonly used text classification datasets: 20NewsGroups, Ohsumed, R52 and R8, and MR. We use the same data preprocessing as in (Yao et al., 2019).",
"The overview of the five datasets is depicted in Table 2. 4.1.2 Baselines We compare our method with the following two categories of baselines: text classification : 1)TF-IDF+LR: the classical logistic regression method based on TF-IDF features.",
"2) CNN (Kim, 2014): the convolutional neural network based method with pre-trained word embeddings.",
"3) LSTM (Liu et al., 2016): the LSTM based method with pre-trained word embeddings.",
"4) SWEM (Shen et al., 2018): the word embedding model with pooling strategies.",
"5) fastText (Joulin et al., 2016): the averages word embeddings for text classification.",
"6) Graph-CNN (Peng et al., 2018a): a graph CNN model based on word embedding similarity graphs 7) LEAM (Wang et al., 2018): the label-embedding attentive models with document embeddings based on word and label descriptions.",
"8) TextGCN (Yao et al., 2019): a GCN model with a corpus-level graph to learn word and document embeddings.",
"9) DHTG (Wang et al., 2020): a GCN model with a dynamic hierarchical topic graph based on the topic model.",
"1 Its code is not released yet, therefore we only report the test micro precision here.",
"(Miao et al., 2016): a deep neural variational document topic model.",
"3) AVITM (Srivastava and Sutton, 2017): an autoencoding variational Bayes (AEVB) topic model based on LDA.",
"4) GraphBTM (Zhu et al., 2018): an enriched biterm topic model (BTM) with the word co-occurrence graph encoded by GCN.",
"Following (Yao et al., 2019), we set the hidden size K of latent variables and other neural network layers as 200 and set the window size in PPMI as 20.",
"The dropout is only utilized in the classifier, and is set to 0 .",
"85 .",
"We train our model for a maximum of 1000 epochs with Adam (Kingma and Ba, 2015) under learning rate 0 .",
"05 .",
"10% of the data set is randomly sampled and spared as the validation set for model selection.",
"The parameter settings of all baselines are the same as their original papers or implementations.",
"all the baselines on each dataset, which proves the effectiveness of our proposed methods.",
"Compared with TextGCN, our method yields better performance in both datasets.",
"It demonstrates the importance of integrating the latent semantic structures in text classification.",
"It is also observed from the superior performance of DHTG when compared with TextGCN.",
"However, DHTG only learns from the document-word correlation while our method fully exploits both word-word and document-word correlation information, resulting in a significant improvement over DHTG.",
"This proves the effectiveness of unified topic modeling and graph representation learning in text classification.",
"Moreover, there are no test documents involved during the training of our method, which shows the inductive learning ability of our method, different from TextGCN and DHTG which requires a global graph including all documents and words.",
"encoder, to demonstrate the impact of a different order of word-word correlation information in A v .",
"On datasets R52 and R8, our method achieves the best performance when the layer number is 1 .",
"This is different from TextGCN and DHTG, which generally have the best performance with 2 layer GCN.",
"A possible reason is that our model has already considered one-hop document-word relation information when encoding document-word graph A d .",
"If the layer number is set to 1 when encoding A v , it actually integrates two-hop neighborhood information, thus achieves a similar effect to TextGCN and DHTG.",
"In Table 4, we further present the test accuracy of our method using different layers of GCN encoder, to demonstrate the impact of different orders of word-word correlation information in A v .",
"On datasets R52 and R8, our method achieves the best performance when the layer number is 1 .",
"This is different from TextGCN and DHTG, which generally have the best performance with 2 layer GCN.",
"A possible reason is that our model has already considered one-hop document-word relation information when encoding document-word graph A d .",
"If the layer number is set to 1 when encoding A v , it actually integrates two-hop neighborhood information, thus achieves a similar effect to TextGCN and DHTG.",
"Figure 3 shows the changes of the test accuracy along with different numbers of topics on five datasets.",
"We can see that the test accuracy on five datasets generally improves with the increase of the number of topics and reaches the peak when the topic number is around 200 .",
"The number of topics shows more impact on the Oshumed dataset than on the other four datasets.",
"This does not seem to be related to the number of classes in the dataset.",
"We suspect it has to do with the nature of the text (medical domain vs. other domains).",
"In Figure 4, we further present the semi-supervised classification test accuracy on datasets 20NG and R8 where different proportions ( 1% , 5% , 10% and 20% ) of the original training set are used.",
"We can see that, in cases where labeled samples are limited, our model still consistently outperforms all the baselines.",
"Compared with other methods, TextGCN and our model can preserve good performance with few labeled samples ( 1% , 5% ).",
"This illustrates the effect of label propagation in GCN for semi-supervised learning.",
"When compared with TextGCN, our model yields better performance because of its inductive learning capability and the incorporation of the latent topic semantics.",
"We further evaluate the performance of models on unsupervised topic modeling tasks.",
"We generally assume that the more topics are coherent, the more they are interpretable.",
"Following (Srivastava and Sutton, 2017), We use the average pairwise PMI of the top 10 words in each topic and the perplexity with the ELBO as quality measures of topics.",
"We show in Table 6 the measures under different topic numbers in the 20NG dataset.",
"We remove the supervised loss of our method and the result of GraphBTM is not presented for unable to learn document topic representation for each document.",
"In the table, we can see that our model outperforms the others in terms of topic coherence, which could be attributed to the combination of word co-occurrence graph and message passing in GCN.",
"The message passing leads to similar representations of words that co-occur frequently in the latent topic space, thus improves the semantic coherence of learned topics, as shown in Table 5 that related words tend to belong to the same topic.",
"Our method also benefits from document-word correlation, and yield better performance when compared with GraphBTM which encode bi-term graph via GCN.",
"We utilize t-SNE to visualize the latent test document representations of the 20NG dataset learned by our model, DHTG and TextGCN in Figure 5, in which each dot represents a document and each color represents a category.",
"Our method yields the best clustering results compared with the others, which means the topics are more consistent with pre-defined classes.",
"It shows the superior interpretability of our method for modeling the latent topics along with both word co-occurrence graph and document-word graph when compared with DHTG.",
"In this paper, we proposed a novel deep latent variable model T-VGAE via combining the topic model with VGAE.",
"It can learn more interpretable representations and leverage the latent topic semantic to improve the classification performance.",
"T-VGAE inherits advantages from the topic model and VGAE: probabilistic interpretability and efficient label propagation mechanism.",
"Experimental results demonstrate the effectiveness of our method along with inductive learning.",
"As future work, it would be interesting to explore better-suited prior distribution in the generative process.",
"It is also possible to extend our model to other tasks, such as information recommendation and link prediction.",
"This research is supported by the CSC Scholarship offered by China Scholarship Council.",
"We would like to thank the anonymous reviewers for their constructive comments.",
"We thank MindSpore for the partial support of this work, which is a new deep learning computing framework 1 ."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Functional Distributional Semantics is a recently proposed framework for learning distributional semantics that provides linguistic interpretability.",
"It models the meaning of a word as a binary classifier rather than a numerical vector.",
"In this work, we propose a method to train a Functional Distributional Semantics model with grounded visual data.",
"We train it on the Visual Genome dataset, which is closer to the kind of data encountered in human language acquisition than a large text corpus.",
"On four external evaluation datasets, our model outperforms previous work on learning semantics from Visual Genome.",
"1 1 Introduction The target of distributional semantics models is to understand and represent the meanings of words from their distributions in large corpus.",
"Many approaches learn a numerical vector for each word, which encodes its distributional information.",
"They can be roughly divided into two categories: frequency-based methods such as co-occurrence matrix (Sahlgren, 2006), and prediction-based methods such as Word2vec (Mikolov et al., 2013).",
"More recently, progress has been made in learning word representations in a specific context, which are also called contextualized embeddings.",
"Examples include ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019).",
"Functional Distributional Semantics is a framework that not only provides contextualized semantic representations, but also provides more interpretability.",
"It was first proposed by Emerson and Copestake (2016), and it explicitly separates the modeling of words and the modeling of objects and events.",
"This is a fundamental distinction in predicate logic.",
"While logic is not necessary for all NLP 1 Our code and models are publicly available at: https: //github.com/williamLyh/PixieVGModel tasks, it is an essential tool for modeling many semantic phenomena (for example, see: Cann, 1993; Allan, 2001; Kamp and Reyle, 2013).",
"For semantic research questions, having a logical interpretation is a clear advantage over vector-based models.",
"We will explain the framework in Section 2.2.",
"Another issue with distributional semantic models, as discussed by Emerson (2020c), is the symbol grounding problem if meanings of words are defined in terms of other words, the definitions are circular.",
"During human language acquisition, words are learned while interacting with the physical world, rather than from text or speech alone.",
"An important goal for a semantic theory is to explain how language relates to the world, and how this relationship is learned.",
"We focus on the Visual Genome dataset, not only because it provides relatively fine-grained annotations, but also it is similar to realistic circumstance encountered during language acquisition, as we will explain in Section 2.3.",
"Our main theoretical contribution is to adapt the Functional Distributional Semantics framework to better suit visual data.",
"This is a step approaching the completion of long-term goal: leveraging previous work (Emerson, 2020a), we could joint train the Functional Distributional Semantics model with both textual and visual data.",
"In order to make it compatible with modern techniques for machine vision, while retaining its logical interpretability, we replace the RBM of previous work with a Gaussian MRF, as explained in Section 3.",
"Our main empirical contribution is to demonstrate the effectiveness of the resulting model.",
"In Section 4.1, we intrinsically evaluate the major components of our model, to see how well they fit the training data.",
"In Section 4.2, we evaluate our model on four external evaluation datasets, comparing against previous approaches to learning from Visual Genome, as well as strong text-based baselines.",
"Not only do we confirm Herbelot (2020)'s finding that learning from grounded data is more 3976 data-efficient than learning from text alone, but our model outperforms the previous approaches, demonstrating the value of our functional approach.",
"There is extensive research on learning language semantics from grounded visual data.",
"Visual-Semantic Embedding and Visual Concept Learning in Visual Question Answering are two representative frameworks.",
"Some works under these frameworks share the idea with our Functional Distributional Semantics model that textual labels are modeled as classifiers over the semantic space.",
"Visual-Semantic Embedding (Frome et al., 2013) learns joint representations of vision and language in a common visual-semantic space.",
"Kiros et al. (2014) proposed to unify the textual and visual embeddings via multimodal neural-based language models.",
"Ren et al. (2016) models images as points in the Visual-Semantic space, while text are Gaussian distributions over them.",
"Visual Concept Learning contributes to various visual linguistic applications, such as image captioning (Karpathy and Fei-Fei, 2015) and Visual Question Answering (Antol et al., 2015).",
"Some works in applying neural symbolic approach to VQA share similar ideas of learning visual concepts with our model.",
"For example, Mao et al. (2018) learn neural operators to capture attributes (concepts) of objects and map them into attribute-specific space.",
"Then questions are parsed into executable programs.",
"Han et al. (2019) further learn the relations between objects as metaconcepts.",
"Our work differs from them in two main aspects.",
"Firstly, our framework supports truth-conditional semantics, as explained in Section 2.2, and therefore provides more logical interpretability.",
"Unlike the above works which always assume images are given, we use a generative model which allows us to perform inference on textual labels alone, as illustrated in Fig. 3 and explained in Section 3.4.",
"Secondly, we learn semantics from the Visual Genome dataset, which is considered more similar to the data encountered during language acquisition, as explained in Section 2.3.",
"Functional Distributional Semantics was first proposed by Emerson and Copestake (2016).",
"The framework takes model-theoretic semantics as a starting point, defining meaning in terms of truth .",
"Given an individual (also called an entity ), and given a predicate (the meaning of a content word), we can ask whether the predicate is true of that individual.",
"Note that an individual could be a person, an object, or an event , following neo-Davidsonian event semantics (Davidson, 1967; Parsons, 1990).",
"Functional Distributional Semantics therefore separates the modeling of words and individuals.",
"An individual is represented in a high-dimensional feature space.",
"The term pixie refers to the representation of an individual (Emerson and Copestake, 2017).",
"A predicate is formalized as a binary classifier over pixies.",
"It assigns the value true if an individual with those features could be described by the predicate, and it assigns false otherwise.",
"Such a classifier is called a semantic function .",
"The model is separated into a world model and a lexicon model .",
"The lexicon model consists of semantic functions.",
"Following situation semantics (Barwise and Perry, 1983), the world model defines a distribution over situations .",
"Each situation consists of a set of individuals, connected by semantic roles.",
"In our work, we only consider two types of semantic roles: ARG1 and ARG2.",
"For example, the sentence a computer is on a desk' describes a situation with three individuals: the computer, the desk, and the event of the computer being on the desk.",
"The computer is the ARG1 of the event, and the desk is the ARG2, as shown in Figs.",
"1 and",
"2. Unlike other distributional models, Functional Distributional Semantics is interpretable in formal semantic terms, and supports first-order logic (Emerson, 2020b).",
"Emerson (2020a) proposed an autoencoder-like structure which can be trained efficiently from semantic dependency graphs.",
"Because individuals are explicitly modeled, grounding the pixies is more theoretically sound than grounding word vectors.",
"The framework has clear potential for learning grounded semantics, which we explore in this paper.",
"The Visual Genome dataset contains over 108,000 images and five different formats of annotations, including regions, attributes, relations, object instances and question answering.",
"In this work, we only consider the relations, which are formulated as predicate triples.",
"Each triple contains two objects in the image and one relation between them.",
"The objects are identified with bounding boxes, as illus-3977 Figure 1: An example image in Visual Genome, annotated with the relation [Computer', ON', Desk'] trated in Fig.",
"1. The object predicates are nouns or noun phrases, and the relation predicates are verbs, prepositions or prepositional phrases.",
"Many works use Visual Genome as a grounded data source.",
"For example, Fukui et al. (2016) use it to ground its visual question answering system.",
"Furthermore, the fine-grained annotations make Visual Genome a compelling dataset for studying lexical semantics.",
"As discussed by Herbelot (2020), Visual Genome is similar in size to what a young child is exposed to, and the annotations are similar to simple utterances encountered during early language acquisition.",
"Kuzmenko and Herbelot (2019) and Herbelot (2020) learn semantics from the annotations, while discarding the images themselves.",
"They trained word embeddings with a count-based method and a Skip-gram-based method, respectively.",
"This methodology, of extracting word relations from an annotated image dataset, was also analyzed and justified by Schlangen (2019).",
"In fact, Vero and Copestake (2021) analyze the different modalities in Visual Genome in terms of information gain, and conclude that, for enriching a textual model, the relational information provides more potential than the visual information.",
"To our knowledge, there has been no previous attempt to use grounded visual data to train a Functional Distributional Semantics model, nor to utilize the visual information of Visual Genome to learn natural language semantics.",
"We will explain the probabilistic structure of our model in Section 3.1, and how we train the components in Sections 3.2 and 3.3.",
"In Section 3.4, we present an inference model to infer latent pixies from words and the context.",
"We define a graphical model which jointly generates pixies and predicates, as shown in Fig.",
"2. It has two parts.",
"The world model is shown in the top blue box, which models the distribution of situations, or in other words, the joint distribution of pixies.",
"It is an undirected graphical model, with probabilistic dependence according to the ARG1 and ARG2 roles, as further explained in Section 3.2.",
"The lexicon model is shown in the bottom red box, which models each predicate as a semantic function.",
"It is a directed graphical model.",
"For each pixie, it produces a probability of truth for each predicate (which are not observed), as well as generating a single predicate (which is observed), as further explained in Section 3.3.",
"Our framework can perform contextualized inference of predicate triples, where the world model provides contextual dependency while the lexicon model focuses on individual predicate, as further expalined in Section 3.4.",
"Given a labeled image triple, the model can be trained by maximizing the likelihood of generating the data, including both observed predicates and observed pixies.",
"The likelihood can be split into two parts, as shown in Eq.",
"1, where s is a situation (a pixie for each individual), and g is a semantic dependency graph (a predicate for each individual).",
"The first term is the likelihood of generating the observed situation, modeled by the world model.",
"The second term is the likelihood of generating the dependency graph given an observed situation, modeled by the lexicon model.",
"Therefore, we can optimize parameters of the two parts separately.",
"log P ( s, g ) = log P ( s ) + log P ( g | s ) (1) 3.2 World Model The world model learns the joint distribution of pixies, as shown in the top half of Fig.",
"2. The individuals are grounded by images, so we can obtain the pixie vectors by extracting visual features for individuals from their corresponding images.",
"For object pixies, they are grounded by their corresponding bounding boxes.",
"For event pixies, Visual Genome does not have labeled bounding boxes for them and their meaning tends to be more abstract, so we use the whole image to ground them.",
"As a feature extractor, we use ResNet101, a Convolutional Neural Network (CNN) pre-trained on ImageNet.",
"To further reduce redundant dimensions, we perform PCA on the last layer of the CNN.",
"We take the output of PCA as the pixie space X .",
"A situation s is a collection of pixies within a semantic graph.",
"In this work, we only consider graphs with three nodes, connected by the roles ARG1 and ARG2, to match the structure of Visual Genome relations.",
"In previous work, the world model was implemented as a Restricted Boltzmann Machine (RBM).",
"However, an RBM uses binary-valued vectors, which is not compatible with the real-valued vectors produced by a CNN.",
"Furthermore, an RBM does not give normalized probabilities, which means that computationally expensive techniques are required, such as MCMC, used by (Emerson and Copestake, 2016), or Belief Propagation, used by (Emerson, 2020a).",
"We model situations with a Gaussian Markov Random Field (MRF).",
"For an n -dimensional pixie space, this gives a 3 n -dimensional Gaussian distribution, with parameters and for the mean and covariance.",
"As shown in the first term of Eq.",
"1, we would like to maximize P ( s ) .",
"For a Gaussian distribution, the maximum likelihood estimate (MLE) has a closed-form solution, which is simply the sample mean and sample covariance.",
"However, because we assume the left and right pixies in Fig. 2 are conditionally independent given the event pixie, we force the top right and bottom left pixie blocks of the precision matrix 1 to be zero.",
"We raise this assumption for the consideration of applying the Functional Distributional Semantics model to larger graphs with more individuals in the future.",
"The assumption does not affect performance on word similarity datasets, but it slightly damages performance on contextual inference datasets.",
"Detailed results and discussion are given in Appendix A.4.",
"The lexicon model learns a list of semantic functions, each corresponds to a word in predicate vocabulary V .",
"The semantic function t r ( x ) for a given predicate r is a logistic regression classifier over the pixie space, with a weight vector v r .",
"From the perspective of deep learning, this is a single neural net layer with a sigmoid activation function.",
"As shown in Eq.",
"3, the output is a probabilistic truth value ranging between (0 , 1) .",
"As shown in the second row of Fig. 2, all semantic functions are applied to each pixie.",
"Based on the probabilities of truth, a single predicate is generated.",
"The probability of generating a specific predicate r for a given pixie x is computed as shown in Eq.",
"4. The more likely a predicate is to be true, the more likely it is to be generated.",
"The lexicon model is optimized to maximize log P ( g | s ) , the log-likelihood of generating the predicates given the pixies.",
"This can be done by gradient descent.",
"When learning from Visual Genome, pixies are grounded by images.",
"However, when applying the model to text, the pixies are latent.",
"We provide an inference model to infer latent pixie distributions given observed predicates.",
"This inference model is used in Section 4.2 on textual evaluation datasets.",
"Exact inference of the posterior P ( s | g ) is intractable, because this requires integrating over the high-dimensional latent space of s .",
"This is a common problem when working with probabilistic models.",
"Therefore we use a variational inference algorithm to approximate the posterior distribution 3979 P ( s | g ) with a Gaussian distribution Q ( s ) .",
"For sim-plicity,we assume that each dimension of Q ( s ) is independent, so its covariance matrix is diagonal.",
"In Fig. 3, the graphical model illustrates this assumption, as there is no connection among the pixie nodes in the middle row.",
"Following the procedure of variational inference, the approximate distribution Q ( s ) is optimized to maximize the Evidence Lower Bound (ELBO), given in Eq.",
"5. This can be done by gradient descent.",
"L = EQ (cid:104) log P ( g | s ) (cid:105) D KL (cid:0) Q ( s ) || P ( s ) (cid:1) (5) The first term measures how well Q ( s ) matches the observed predicates, according to the lexicon model P ( g | s ) .",
"The second term measures how well Q ( s ) matches the world model P ( s ) .",
"We would like to emphasize the likelihood of generating the observed predicates, so we down-weight the second term with a hyper-parameter , similarly to a VAE (Higgins et al., 2017).",
"Detailed analysis on the effects of is discussed in Appendix A.7.",
"Exactly computing the first term is intractable.",
"Emerson (2020a) used a probit approximation, but we instead follow Daunizeau (2017), who derived the more accurate approximations given in Eqs.",
"6 and 7, where x has mean and variance .",
"The second approximation is particularly important, as we aim to maximize the log-likelihood.",
"The second term of Eq.",
"5 is the Kullback-Leibler (KL) divergence between two Gaussians, which has the closed-form formula given in Eq.",
"8, where k is the total dimensionality.",
"DKL ( Q || P ) = 1 2 (cid:104) log | P | | Q | k + tr ( 1 P Q ) +( Q P ) T 1 P ( Q P ) (cid:105) (8) As illustrated in Fig. 3, variational inference allows us to calculate quantities such as the probability that an animal which has a tail is a horse.",
"To obtain the inferred distribution for a single pixie, we need to marginalize the situation distribution Q ( s ) .",
"From the independence assumption, this simply means taking the parameters for the desired pixie.",
"Then we can apply the semantic function for r on P Q R Y Z X \"animal\" \"has\" \"tail\" [ \"animal\": 0.58 , horse\": 0.43, \"bear\": 0.35, \"dog\": 0.29, \"cat\": 0.22, ] \"animal\" \"has\" \"paw\" [ \"animal\": 0.56 , bear\": 0.42, \"horse\": 0.36, \"cat\": 0.35, \"dog\": 0.31, ] Figure 3: Graphical inference model: The pixies X , Y and Z in the middle row are jointly inferred from the observed predicates P , Q and R in the bottom row, using variational inference.",
"Although Q ( s ) assumes independence, its parameters are jointly inferred based on all predicates.",
"This is because the KL-divergence in Eq.",
"8 depends on P , which is nonzero between each pair of pixies linked by a semantic role.",
"For example, in Fig. 3, the truth of horse' for X depends on the observed predicate tail' or paw'.",
"This is not a direct dependence between words, but rather relies on three intermediate representations (the three pixies), all of which are expressed in terms of visual features.",
"The first term of the ELBO connects the semantic function for tail' or paw' to the variational parameters for Z .",
"The second term of the ELBO connects the variational parameters for Z and Y (based on the world model covariance for ARG2) as well as Y and X (based on the world model covariance for ARG1).",
"Finally the semantic function for horse' is applied to the variational distribution for X .",
"and an animal with paws is more likely to be a bear.",
"We notice that the truth values are generally low for all semantic functions.",
"Even the highest truth is only around 0.58.",
"This illustrates that the model is not very certain, which might be expected since the model is performing inference on visual features, but the training image data is noisy.",
"For some evaluation datasets, we need to perform inference given a single predicate.",
"This can be done by marginalizing the joint distribution.",
"Which pixie variable to choose, out of the three, should depend on the Part-Of-Speech (POS) of the word.",
"For nouns, the pixie node X or Z should be used, as a noun should play the role of ARG1 or ARG2.",
"For verbs and prepositions, the node Y should be used, as they usually describe the relation.",
"To train our model, we follow the same preprocessing and filtering of Visual Genome as Herbelot (2020).",
"Details of pre-processing and hyperparameters are given in the appendix.",
"In this section, we examine whether a Gaussian MRF is a suitable choice for the world model, and whether the pixies in the pixie space are linearly separable such that the logistic semantic functions can successfully classify them.",
"The world model learns a Gaussian distribution for the observed situations.",
"In this section, we justify this choice by evaluating the fitting errors.",
"Fig. 4 shows density histograms for two example pixie dimensions and their corresponding best-fit (MLE) Gaussian curves.",
"The left histogram is an example for a majority of the pixie dimensions, which is tightly matched by the best-fit Gaussian.",
"In other cases, as shown on the right, there are im-balanced tails and asymmetry.",
"Despite their skewness and kurtosis, which make them look more like a Gamma distribution, they are still generally bell-shaped and the departure is not so heavy.",
"To quantify the errors, we measure the Wasser-stein distance, the area of the histogram missing from the best-fit Gaussian.",
"Across all 100 pixie dimensions, the mean percentage missing is 7% with a variance of 1% .",
"A more flexible model might give better modeling performance, which could be a future improvement direction.",
"Nonetheless, we consider this level of error to be acceptably low.",
"Value Figure 4: Density histograms for two selected pixie dimensions, across the 2.8M training instances.",
"Best-fit Gaussian curves of the histograms are shown in red.",
"In this experiment, we investigate if our approach to model the semantic functions as logistic regression classifiers is suitable.",
"In particular, a logistic regression classifier is a linear classifier, which means if the data is not linearly separable, it would have inferior performance.",
"We computed the Area Under Curve for the Receiver Operating Characteristic (AUC-ROC), for all predicates in the vocabulary.",
"For each predicate we randomly select equal amount of negative example pixies with its positive examples.",
"The average score is 0.79 for object predicates, and 0.58 for event predicates.",
"We also present the ROC for a few example predicates in Fig.",
"5. We can see that object classifiers have generally better performance.",
"The classifier for racket' shows slightly worse performance than the others, whose reason might be its lower frequency.",
"Compared to object predicates, the semantic func-3981 tions for event predicates generally perform worse.",
"There are two potential reasons which could be improved in future work.",
"Firstly, we used visual features generated from the whole image to represent the event pixie, which is often not specific enough to identify the event.",
"Secondly, a logistic regression classifier might not be sophisticated enough for this classification problem.",
"In this section, we use external semantic evaluation datasets, to give a direct comparison against previous work, and to test whether our model can generalize beyond the training data.",
"We evaluate on two lexical similarity datasets in Section 4.2.2, and two contextual datasets in Section 4.2.3.",
"We compare against two types of baseline: models trained on a large corpus and models trained on Visual Genome.",
"For these datasets, our model must assign similarity scores for predicate or triple pairs, which we compute as follows.",
"The pixie values are inferred from the first predicate or triple in the pair.",
"Then all semantic functions from the predicate vocabulary are applied to that pixie.",
"Then the ranking of the second predicate in the pair over all potential predicates in the evaluation dataset is taken as the similarity score.",
"Therefore, smaller ranking means higher similarity between predicates.",
"Finally, because there are discrepancies between vocabularies used in Visual Genome and the evaluation datasets, we follow Herbelot (2020) in filtering the evaluation datasets according to the Visual Genome vocabulary, and use the filtered datasets to evaluate all models.",
"For the two lexical datasets, we exactly follow Herbelot's filtering conditions to give a direct comparison.",
"For the contextual datasets, this filtering is too strict, resulting in zero vocabulary coverage.",
"For these datasets, we apply looser filtering, with details given in the appendix.",
"This also requires retraining our model and the Visual Genome baselines on a more loosely filtered training set.",
"Visual Genome Baselines : We re-implement two previously proposed models learning distributional semantics from Visual Genome, described in Section 2.3.",
"A simple count-based model was proposed by Kuzmenko and Herbelot (2019), which we refer to as VG-count.",
"Herbelot (2020) improved on this and proposed EVA, a Skip-gram model trained on the same kind of co-occurrence data.",
"We also implement an image-retrieval baseline which we refer to as VG-retrieval.",
"This baseline simply retrieves all image boxes whose annotations match the indexing predicate.",
"Visual features are extracted in the same method as our model, as described in 3.2, and then averaged across all retrieved images to obtain the representation for a given predicate.",
"This baseline illustrates the performance can be achieved when only using the visual information of Visual Genome.",
"Large Corpus Baselines : We trained two Skip-gram Word2vec models (Mikolov et al., 2013) using 1 billion and 6 billion tokens from Wikipedia, using Gensim ( Rehurek and Sojka, 2010).",
"We will refer to them as Word2vec-1B and Word2vec-6B.",
"The window sizes are set to be 10 in two directions, so they contextualize with far more words than our model.",
"We also use Glove (Pennington et al., 2014) trained on 6 billion Wikipedia tokens as another strong baseline, which we refer to as Glove-6B.",
"For all three baselines, the dimensionality is set to 300.",
"Compared to the large corpus baselines, our model has fewer parameters per word (100 vs. 300), and is trained on far fewer data points (2.8M relation triples vs. 1B or 6B tokens).",
"We use two lexical similarity/relatedness datasets, MEN (Bruni et al., 2014) and Simlex-999 (Hill et al., 2015), both of which give scores for pairs of words.",
"MEN contains 3000 word pairs, and SimLex-999 contains 999 pairs.",
"After filtering for the Visual Genome vocabulary, we have 584 pairs for MEN and 169 pairs for SimLex-999.",
"MEN evaluates relatedness, while SimLex-999 evaluates similarity.",
"For example, coffee' and cup' are related, but not similar.",
"Capturing similarity rather than relatedness is hard for most text-based distributional semantics models because they build concept representations based on their co-occurrence in corpora, which generally reflects relatedness but not similarity.",
"However, similarity might be more directly reflected in terms of visual features which can be captured by our model.",
"The results are shown in Tab.",
"1. Our model outperforms the two baselines trained on Visual Genome, and matched the performance of Word2vec-1B (the difference is statistically insignificant, p> 0 . 5 ).",
"median similarity score to the out-of-vocabulary pairs), it still achieves 0.304.",
"Using the loosely filtered training set, our model can achieve the even higher score of 0.670 (on the same strictly filtered subset of MEN).",
"This illustrates that one limit of our model's performance is the size of the Visual Genome dataset.",
"In contrast, the performance of Word2vec does not improve much as the training data increases from 1B to 6B, which suggests there is a limit on how much can be learnt from local textual co-occurrence information alone.",
"On SimLex-999, our model achieves 0.431, which outperforms all baselines.",
"Compared to Glove-6B, the strongest baseline, it is weakly significant ( p< 0 . 15 ).",
"This might justify our point that there is advantage of learning similarity from visual features.",
"Additionally, our model can use parameters and data more effectively and efficiently than Word2vec and Glove, achieving better performance with less training data and fewer parameters.",
"Compared with VG-count and EVA, our model can understand more semantics because it learns from the visual information.",
"While compared with VG-retrieval, our model can leverage textual co-occurrence.",
"As far as we know, we have achieved a new state of the art on learning lexical semantics from Visual Genome.",
"Combining results across all four datasets (including the contextual datasets below), the difference between our model and EVA is highly significant ( p< 0 . 001 ).",
"We consider two contextual evaluation datasets.",
"GS2011 (Grefenstette and Sadrzadeh, 2011) gives similarities of verbs in a given context.",
"Each data point is a pair of subject-verb-object triples, where only the verbs are different.",
"For example, [ta-ble',show', result'] and [table', express', re-sult'] are judged highly similar.",
"The dataset has 199 distinct triple pairs and 2500 judgment records from different annotators.",
"The evaluation metric is Spearman correlation across all judgments.",
"As Van de Cruys et al. (2013) point out, the second verb in each pair is often nonsensical when combined with the corresponding subject and object.",
"Therefore, we only compare the triple pairs in a single direction, inferring pixies from the first triple and applying the second verb's semantic function.",
"RELPRON (Rimell et al., 2016) evaluates compositional semantics.",
"It contains a list of terms, each associated with around 10 properties.",
"Each property is a noun modified by a subject or object relative clause.",
"For example, the term theater' has the subject property [building', show', film'] and object property [audience', exit', building'].",
"The task is to find the correct properties for each term, evaluated as Mean Average Precision (MAP).",
"The development set contains 65 terms and 518 properties; the test set, 73 terms and 569 properties.",
"Under the loosely filtered condition, our subset of GS2011 contains 252 similarity judgments; RELPRON, 57 terms and 150 properties.",
"Rimell et al. (2016) find that vector addition performs surprisingly well at combining contextual information.",
"Therefore, for all baselines, we represent a triple by taking the addition of the three word representations.",
"As aforementioned, we retrain our model and the VG baselines with loosely filtered data.",
"The results are shown in Tab.",
"1. The corpus models outperform the VG models.",
"However, this is perhaps expected given that the vocabulary in GS2011 and RELPRON is more formal, and even when they are covered in Visual Genome, their frequencies are low: for RELPRON, 54% of the covered vocabulary has frequency below 100, com-3983 0.0 0.2 0.4 0.6 0.8 1.0 False positive rate 0.0 0.2 0.4 0.6 0.8 1.0 T r u e p o s i t i v e r a t e cat blanket racket computer box flag in behind Figure 6: ROC curves of the semantic functions for selected predicates, for the truth-regularized model.",
"pared to only 6% for MEN.",
"Furthermore, GS2011 evaluates similarity of verbs, but we saw in Section 4.1.2 that our model is less accurate for verbs.",
"However, our model outperforms all VG baselines on both datasets.",
"This suggests that our model is less affected by data sparsity.",
"For the baselines, if a training triple contains multiple rare predicates, the sparsity problem is compounded.",
"However, our model relies on the images, whose visual features are shared across the whole training set.",
"To make the probabilistic truth values more interpretable, Emerson (2020a) proposes a regularization term which penalizes the model if all truth values stay close to 0.",
"This would modify the loss function in Eq.",
"4, to give Eq.",
"10, with a hyperparameter that we set to 0.5.",
"We find that adding the log-truth term improves performance on intrinsic evaluation, but decreases performance on extrinsic evaluation.",
"Applying the analysis in Section 4.1.2, the average AUC-ROC is 0.86 for object predicates and 0.60 for event predicates.",
"This is illustrated in Fig. 6 for the same example predicates as Fig.",
"5. In contrast, when evaluating on MEN and SimLex-999, this model achieves only 0.602 and 0.381 respectively.",
"On GS2011 and RELPRON, the model achieves lower performance of 0.112 and 0.056.",
"The log-truth term makes predicates true over larger regions of pixie space.",
"As shown by the intrinsic evaluation, this is helpful when considering each classifier individually.",
"However, the regions of different predicates also overlap more, which seems to hurt their overall performance on the external datasets.",
"To quantify this, we calculate the total truth of all predicates, for 1000 randomly selected images.",
"For the original version of our model, on average 0.83 predicates are true for an image.",
"This is slightly below 1, illustrating the problem Emerson aimed to avoid.",
"However, with the log-truth term, it becomes 25.5, which may have over-corrected the problem.",
"In this paper, we proposed a method to train a Functional Distributional Semantics model with visual data.",
"Our model outperformed the previous works and achieved a new state of the art on learning natural language semantics from Visual Genome.",
"Further to this, our model achieved better performance than Word2vec and Glove on Simlex-999 and matched Word2vec-1B on MEN.",
"This shows that our model can use parameters and data more efficiently than Word2vec and Glove.",
"Additionally, we also showed that our model can successfully be used to make contextual inferences.",
"As future work, we could leverage previous work to jointly train the Functional Distributional Semantics model with both visual and textual data, such that we could improve the vocabulary coverage and have better understanding of abstract words."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"objective",
"result",
"method",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"method",
"method",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"result",
"result"
] |
[
"Car-focused navigation services are based on turns and distances of named streets, whereas navigation instructions naturally used by humans are centered around physical objects called landmarks.",
"We present a neural model that takes OpenStreetMap representations as input and learns to generate navigation instructions that contain visible and salient landmarks from human natural language instructions.",
"Routes on the map are encoded in a locationand rotation-invariant graph representation that is decoded into natural language instructions.",
"Our work is based on a novel dataset of 7,672 crowd-sourced instances that have been verified by human navigation in Street View.",
"Our evaluation shows that the navigation instructions generated by our system have similar properties as human-generated instructions, and lead to successful human navigation in Street View.",
"Current navigation services provided by the automotive industry or by Google Maps generate route instructions based on turns and distances of named streets.",
"In contrast, humans naturally use an efficient mode of navigation based on visible and salient physical objects called landmarks.",
"As shown by Tom and Denis (2004), route instructions based on landmarks are easier processed and memorized by humans.",
"May et al. (2003) recommend that in pedestrian navigation systems, landmarks should be used as the primary means of providing directions.",
"Another navigation scenario where landmarks are useful is if GPS tracking is poor or not available, and if information is inexact regarding distances (e.g., in human estimates) or street names (e.g., for users riding a bicycle).",
"We present a neural model that takes a real-world map representation from OpenStreetMap 1 as input and generates navigation instructions that contain salient landmarks, learned directly from human natural language instructions.",
"In our framework, routes on the map are learned by discretizing the street layout, connecting street segments with adjacent points of interest, thus encoding visibility of landmarks, and encoding the route and surrounding landmarks in a locationand rotation-invariant graph.",
"Based on crowd-sourced natural language instructions for such map representations, a graph-to-text mapping is learned that decodes graph representations into natural language route instructions that contain salient landmarks.",
"Our work is accompanied by a dataset of 7,672 instances of routes in OpenStreetMap and corresponding crowd-sourced natural language instructions.",
"The navigation instructions were generated by workers on the basis of maps including all points of interest, but no street names.",
"They were verified by different workers who followed the navigation instructions on Google Street View 2 .",
"Experimental results on randomly sampled test routes show that our graph-to-text model produces landmarks with the same frequency found in human reference instructions.",
"Furthermore, the time-normalized success rate of human workers finding the correct goal location on Street View is 0.664.",
"Since these routes can have a partial overlap with routes in the training set, we further performed an evaluation on completely unseen routes.",
"The rate of produced landmarks drops slightly compared to human references, and the time-normalized success rate also drops slightly to 0.629.",
"While there is still room for improvement, our results showcase a promising direction of research, with a wide potential of applications in various existing map 1 www.openstreetmap.org 2 www.google.com/streetview Figure 1: The data collection is split into two tasks.",
"applications and navigation systems.",
"The main contributions of this paper are: We collect and publish a large scale dataset of natural language landmark navigation instructions that are validated by human navigation runs in Street View.",
"We present a method to represent geospatial routes as a graph and propose an appropriate graph-to-text architecture that learns to generate navigation instructions from real-world data.",
"Mirowski et al. (2018) published a subset of Street View covering parts of New York City and Pittsburgh.",
"Street View is a navigable environment that is build from connected real-world 360 panoramas.",
"This data is used by Hermann et al. (2020) to train a visual agent to follow turn-by-turn instructions generated by Google Maps API.",
"Chen et al. (2019) published a Street View dataset 3 with more recent and higher resolution panorama images that covers the lower half of Manhattan.",
"They further introduce the Touchdown task that has the goal to navigate 3 www.streetlearn.cc Street View in order to find a hidden teddy bear.",
"The data for that task is obtained from annotation workers that follow a predefined route in Street View and write down navigation instructions along the way.",
"A central difference between Touchdown and our dataset is the annotation modality: Touchdown annotators use panorama images along the route, while our instruction writers only see the rendered route on a map.",
"See Section 4.3 for a more detailed discussion.",
"Our work puts the task of natural language navigation upside down by learning to generate humanlike navigation instructions from real-world map data instead of training an agent to follow human generated instructions.",
"Prior work in this area has used rule-based systems to identify landmarks (Rousell and Zipf, 2017) or to generate landmark-based navigation instructions (Drager and Koller, 2012; Cercas Curry et al., 2015).",
"Despite having all points of interest on the map available, our approach learns to verbalize only those points of interest that have been deemed salient by inclusion in a human navigation instruction.",
"Previous approaches that learn navigation instructions from data have been confined to simplified grid-based representations of maps for restricted indoor environments (Daniele et al., 2017).",
"de Vries et al. (2018) tackles the problem in a more sophisticated outdoor environment but the model fails to verbalize useful instructions when conditioned on more than one possible landmark.",
"Other work generates navigation instructions from indoor panoramas along a path but provides no explicit evaluation like human navigation success.",
"They rather use the instructions to augment the training routes for a vision and language navigation agent (Fried et al., 2018).",
"The task addressed in our work is that of automatically generating Natural Language Landmark Navigation Instructions (NLLNI) from real-world open-source geographical data from OpenStreetMap.",
"The instructions are generated a priori (Janarthanam et al., 2012) for the whole route.",
"Training data for NLLNI was generated by human crowdsourcing workers who were given a route on an OpenStreetMap rendering of lower Manhattan, with the goal of producing a succinct natural language instruction that does not use street names or exact distances, but rather is based on landmarks.",
"Landmarks had to be visible on the map and included, e.g., churches, cinemas, banks, shops, and public amenities such as parks or parking lots.",
"Each generated navigation instruction was validated by another human crowdsourcing worker who had to reach the goal location by following the instruction on Google Street View.",
"NLLNI outputs are distinctively different from navigation instructions produced by OpenRoute-Service, Google Maps, or car navigation systems.",
"While these systems rely on stable GPS signals such that the current location along a grid of streets can be tracked exactly, we aim at use cases where GPS tracking is not available, and knowledge of distances or street names is inexact, for example, pedestrians, cyclists, or users of public transportation.",
"The mode of NLLNI is modeled after human navigation instructions that are naturally based on a small number of distinctive and visible landmarks in order to be memorizable while still being informative enough to reach the goal.",
"A further advantage of NLLNI is that they are based on map inputs which are more widely available and less time dependent than Street View images.",
"Because there is no large scale dataset for NLLNI that is generated from map information only, we collect data via crowdsourcing.",
"The annotator is shown a route on the map and writes navigation instructions based on that information (Figure 1, top).",
"We take the approach of Chen et al. (2019) and determine correctness of navigation instructions by showing them to other annotators that try to reach the goal location in Street View (Figure 1, bottom).",
"We use the static Street View dataset provided by Chen et al. (2019).",
"This allows us to make the experiments in this work replicable.",
"Because the panorama pictures were taken at the end of 2017, we export an OpenStreetMap extract of Manhattan from that time.",
"OpenStreetMap (OSM) is an open source collection of geodata that can be used to render maps of the world.",
"It features detailed street layouts and annotations for points of interest (POI) like amenities, infrastructure or land use 4 .",
"We discretize the street layout by creating a node every ten meters along the roads.",
"The resulting structure is further referenced to as the OSM graph with nodes consisting of street segments.",
"Based on that graph, we sample routes of length between 35 and 45 nodes.",
"A route is the shortest path between its start and end node.",
"It includes a minimum of three intersections (i.e., a node with more than two edges) and ends in proximity to a POI.",
"We further assure that it is possible to follow the route in Street View by verifying that a corresponding subgraph exists in the Street View graph.",
"We use Amazon Mechanical Turk (AMT) 5 to acquire annotators.",
"Before working on the actual tasks, workers were required to pass a tutorial and qualification test.",
"The tutorial introduces the tasks, teaches basic mechanics of Street View and explains meaning of map icons.",
"A feature of AMT and additional IP address 6 lookup ensures that annotators are located in the United States.",
"This increases the probability of working with native English speakers and people familiar with US street environments.",
"We paid $0.35 per navigation instructions task and $0.20 for the navigation run 4 openstreetmap.org/wiki/Map_Features 5 www.mturk.com 6 IP addresses were not saved and are not part of the dataset.",
"task.",
"Furthermore, we paid a bonus of $0.15 for successfully reaching the goal location and $0.25 for validated navigation instructions.",
"The amounts were chosen on the basis of $10/hour.",
"The annotation procedure involved two phases.",
"First, an annotator wrote navigation instructions for a given route.",
"Afterwards, a different annotator used the instructions to navigate to the goal location.",
"If one of two annotators did so successfully, the navigation instructions were considered valid.",
"Navigation Instructions Task As shown in Figure 1 (top), the annotator sees a route on a map which is rendered without street names.",
"Workers were told to write navigation instructions as if a tourist is asking for directions in a neighborhood you are familiar with and to mention landmarks to support orientation.",
"The navigation instructions were written in a text box below the map which is limited to 330 characters.",
"Navigation Run Task Figure 1 (bottom) shows the Street View interface with navigation instructions faded-in at the bottom.",
"It is possible to look around 360 and movement is controlled by the white arrows.",
"In addition there is a button on the bottom left to backtrack which proved to be very helpful.",
"The initial position is the start of the route facing in the correct direction.",
"The annotators fin-ish the navigation run with the bottom right button either when they think the goal location is reached or if they are lost.",
"The task is successful if the annotator stops the run within a 25 meter radius around the goal location.",
"The data collection resulted in 7,672 navigation instructions that were manually validated in Street View .",
"For additional 1,059 instructions, the validation failed, which amounts to a validation rate of 88%.",
"Of the validated instructions, 1,033 required a second try in the navigation run task.",
"On average, instructions are 257 characters long, with a minimum length of 110, and a maximum of 330 characters.",
"We release the segmented OSM graph, the routes in that graph paired with the collected navigation instructions, and the data split used in <street> <8> <street><9> <street><neighbor> <street><neighbor> <street><10> <street><11> <street><last> <street><neighbor> <street><neighbor> 359 0 0 6 91 271 359 0 88 <street> <5> <street> <7> <street> <6> <street> <neighbor> <poi> <poi> <k_name_1> Gramercy <k_name_2> Park 1 3 <tag_key> leisure <tag_value> park 269 90 133 90 90 90 35270 <k_name_1>Big <poi><poi> <tag_key>amenity <k_name_2>Daddy's <tag_value>restaurant 332 214 <street><1> <street><2> <street><3> <street><4> <street> <neighbor> <poi><poi> <tag_key>amenity 0 1 358 <tag_value>place_of_worship 89 0 <k_name_1>The <k_name_2>Brotherhood <k_name_3>Synagogue 270 321 219 <poi><poi> <tag_key>amenity <k_name_1>Bar fl y <tag_value>bar 63 90 <poi><poi> <tag_key>amenity <tag_value>bicycle_rental 90 90 90 node token node type directed edge 90 directed edge with angle Figure 2: Graph representation of the route in Figure 3.",
"our experiments 7 .",
"Table 1 gives a comparison of different datasets with natural language landmark navigation instructions.",
"Our dataset is the only one that uses only map information to generate navigation instructions.",
"The advantage of relying solely on map data is the global availability and longevity of the encoded features.",
"In contrast, navigation instructions written from Street View include temporary features like construction utilities, street advertisements, or passing vehicles.",
"Table 2 shows a qualitative linguistic analysis of the navigation instructions of different datasets.",
"In general, navigation instructions are driven by giving directions in imperative formulation while referencing to entities along the route.",
"In contrast to the Touchdown task where including store names was prohibited, the entities in our instructions are often referenced to by their name.",
"Although the instruction writers in our setting did not see the route in first person perspective, objects are vastly referenced to in egocentric manner (egocentric with respect to the navigating agent).",
"This is because the annotator knows 7 www.cl.uni-heidelberg.de/ statnlpgroup/map2seq/ the starting direction and can infer the facing direction for the rest of the route.",
"Because the initial facing direction in Touchdown is random, the first part of their instructions is about rotating the agent.",
"This explains the higher number of occurrences of the state verification phenomenon.",
"In our dataset, state verification is usually used to ensure the correct stopping position.",
"The different setting of data collection is also reflected by the temporal condition phenomenon.",
"Annotators of Touchdown write down instructions while navigating Street View and thus experience the temporal component first hand, while our annotators have a time independent look at the route.",
"The underlying OSM geodata of the rendered map is an XML tree of nodes located in the latitude-longitude coordinate system.",
"The nodes are composed into ways and polygons 8 .",
"These elements in connection with their annotations are used to render the visual map.",
"In the next subsection we propose our approach to represent a route and its surrounding map features as a graph that includes all necessary information for generating landmark navigation instructions.",
"The second subsection describes the neural graph-to-text architecture that is trained to learn inductive representations of the individual route graphs and to decode navigation instructions from them.",
"The basis of the graph for a single route is the OSM subgraph (Section 4.1) that includes the ac-8",
"ac-8 www.openstreetmap.org/wiki/Elements",
"tual route nodes.",
"Further, neighboring street segment nodes are added.",
"This is depicted in Figure 3 as green and blue circles, respectively.",
"In order to decide on the visibility of the POIs, we employ a technique similar to that of Rousell and Zipf (2017).",
"For each street segment, the POIs in a radius of 30 meters are identified.",
"If a line drawn between the street segment and the POI is not interrupted by a building polygon, the POI is considered visible from that particular street segment.",
"If the POI itself is (inside) a polygon, then the line is drawn to the closest point on the POI polygon.",
"The orange circles in Figure 3 show the results of the visibility check and how they naturally fit into the graph structure.",
"Each point of interest in OSM has one or more tags in the form of key and value pairs.",
"They store properties like type or name.",
"Note that we only determine the geometric visibility of the POIs and do not incorporate any hand-crafted salience scores as to what would be a good landmark.",
"Instead, saliency of a landmark is implicitly learned from natural language verbalization of the POI in the human-generated instruction.",
"An example graph representation of the route in Figure 3 is given in Figure 2.",
"Formally, a route representation is a directed graph G = ( V , E ) , where V denotes the set of nodes and E the set of edges.",
"A node v consists of a node type v t and a node token v w .",
"There are V t node types and V w node tokens.",
"Street segments are of type < street > .",
"A point of interest has the node type < poi > .",
"An OSM tag key has the node type < tag key > and an OSM tag value has the node type < tag value > .",
"The node token further specifies nodes in the graph.",
"Street segments that belong to the route have a node token < P > according to their sequential position P. The last route segment has the special token < last > .",
"Other street segment nodes have the < neighbor > token.",
"The actual key and value literals of an OSM tag are the node tokens of the respective node.",
"The OSM name tag is split into multiple nodes with type < k name N > where N is the word position and the node token is the word at that position.",
"All adjacent street segment nodes are connected with an edge in both directions.",
"If a POI is visible from a particular street segment, there is an edge from the corresponding POI node to that street segment node.",
"Each POI node is connected with their tag key nodes.",
"A tag value node is connected to its corresponding tag key node.",
"The name tag nodes of the same POI are connected with each other.",
"Some edges have a geometric interpretation.",
"This is true for edges connecting a street segment with either a POI or with another street segment.",
"These edges ( u, v ) EA , EA E have a label attached.",
"The label ang ( u, v ) is the binned angle between the nodes relative to route direction.",
"The continuous angle [0 , 360 ) is assigned to one of 12 bins. Each bin covers 30 with the first bin starting at 345 . The geometric distance between nodes is not modeled explicitly because street segments are equidistant and POI visibility is determined with a maximum distance. The proposed representation of a route and its surroundings as a directed graph with partially geometric edges is locationand rotation-invariant, which greatly benefits generalization. 5.2 Graph-to-Text Architecture By representing a route as a graph, we can frame the generation of NLLNI from maps as a graph-to-text problem. The encoder learns a neural representation of the input graph and the sequence decoder generates the corresponding text. The architecture follows the Transformer (Vaswani et al., 2017) but uses graph attentional layers (Velickovic et al., 2018) in the encoder. Graph attention injects the graph structure by masking (multi-head) self-attention to only attend to nodes that are first-order neighbors in the input graph. The geometric relations between some nodes are treated as edge labels which are modeled by distinct feature transformation matrices during node aggregation (Schlichtkrull et al., 2018). The input to a layer of the encoder is a set of node representations, x = { x 1 , x 2 , . . . , x N } , x i R d m , where N is the number of nodes and d m is the model size. Each layer l : R d m R d m takes x and produces new node representations x (cid:48) . The input to the first layer is constructed from the concatenation of type and token embedding: x i = ReLU ( WF [ E Tv ti || E Wv wi ]) where WF R 2 d m d m is a weight matrix, ET R d m and EW R d m are embedding matrices for node types and node tokens, respectively.",
"The output of a single graph attention head is the weighted sum of neighboring node representations: x i = (cid:88) j | ( v j ,v i ) E ij ( W Ur ( i,j ) x j ) (1) The weight coefficient is computed as ij = softmax j ( e ij ) = exp( e ij ) (cid:80) k | ( vk,vi ) E exp( e ik ) where e ij BLEU Len.",
"measures the compatibility of two node representations: e ij = LeakyReLU ( a T [ WV x i || W Ur ( i,j ) x j ]) (2) where a R 2 d h , WV R d m d h , d h = d m /h is the attention head dimension and h is the number of heads.",
"In the case of a geometric relation between nodes, the weight matrix W Ur ( i,j ) R d m d h is selected according to the angle label between the nodes: r ( i, j ) = ang ( u i , u j ) , otherwise r ( i, j ) = unlabeled .",
"The output of each head is concatenated and after a skip connection forwarded to the next encoder layer.",
"The encoder layer is applied L times and the final node representations x are used in the decoder context attention mechanism.",
"Thus, no modification of the Transformer decoder is necessary and L decoder layers are used.",
"Further, the decoder can copy node tokens from the input into the output sequence (See et al., 2017).",
"The described architecture is able to model all aspects of the input graph.",
"Graph attention models directed edges.",
"Edge labels model the geometric relation between nodes.",
"Heterogeneous nodes are represented by their type embedding and token embedding.",
"The sequentiality of the route is encoded by tokens ( < 1 > , < 2 > , ...) of the respective nodes.",
"This is analogous to absolute position embeddings which provide word order information for text encoding (Vaswani et al., 2017; Devlin et al., 2019).",
"We consider two baselines.",
"A rule based system that uses a single heuristic to construct instructions by stringing together all POIs and intersections along the route, and following each intersection by the turning direction.",
"Similar, POIs are followed by 'left' or 'right' depending on which side BLEU Len.",
"of the street they appear.",
"The end of the route is signaled by the 'stop' token.",
"The second baseline is a seq2seq (sequence-to-sequence) model that is trained on pairs of rule based navigation instructions and crowdsourced instructions.",
"The seq2seq model follows the Transformer architecture (Vaswani et al., 2017) with copy mechanism and is trained with the same hyperparameters as the graph-to-text model.",
"Examples are given in Figure 4.",
"We construct a graph for each route as described above.",
"On average there are 144 nodes in a graph and 3.4 edges per node.",
"There are 8 different node types and a vocabulary of 3,791 node tokens.",
"The hyperparameters for the graph-to-text architecture are set as follows: The embedding and hidden size is set to 256.",
"We use 6 encoder and decoder layers with 8 attention heads.",
"Cross entropy loss is optimized by Adam (Kingma and Ba, 2015) with a learning rate of 0.5 and batch size of 12.",
"The embedding matrix for node tokens and output tokens is shared.",
"Additionally we experiment with pretraining the graph-to-text model with above mentioned rule based instructions as target.",
"This teaches the model sequentiality of route nodes and basic interpretation of the angle labels.",
"We generate 20k instances for pretraining and further fine tune on the human generated instances.",
"Both models and the seq2seq baseline are trained on 5,667 instances of our dataset.",
"The best weights for each model are selected by token accuracy based early stopping on the 605 development instances.",
"Length is the average length in number of tokens.",
"Landmarks is the number of landmark occur-reference: At the light with Fridays on the corner, turn right.",
"Continue down the long street to the next light with Nine West on the right corner, then turn left.",
"Go to the next light with Brooks Brothers on the right corner, then turn right and stop.",
"rule based: Starbucks Coffee left subway entrance right Best Buy Mobile left Yankees right bus stop left bus stop left light right The Michelangelo left TGI Fridays left Pizza Hut left Bobby Van 's left park right Men 's Wearhouse left fountain left fountain left subway entrance left light left Nine West right Rockefeller Center left subway entrance right Brooks Brothers right light right stop seq2seq: Go straight to the light and make a left.",
"Go straight to the next light and make a left.",
"Go straight to the light and make a right.",
"Stop one step after turning with Brooks Brothers to your right.",
"graph2text: Walk to the light with TGI Fridays on the corner and turn right.",
"Walk down the long block to the next light with Nine West on the left corner, then turn left.",
"Walk to the next light with Brooks Brothers on the far right corner, then turn right.",
"g2t+pretrain: Turn right at the first set of lights with TGI Fridays on the left corner.",
"Pass a park on the right and turn left at the lights.",
"Pass the fountain on the right and turn right at the lights.",
"Take two steps and stop.",
"Brooks Brothers is on the right corner.",
"rences per instance.",
"Occurrences are identified by token overlap between navigation text and tag values of POIs along the route.",
"E.g., landmarks in the instructions in Figure 1 are: Dunkin Donuts, Bubble Tea & Crepes, Chipotle, Broadway Hotel .",
"SDTW is success weighted by normalized Dynamic Time Warping (Ilharco et al., 2019).",
"Distance between two nodes is defined as meters along the shortest path between the two nodes and threshold distance is 25 meters.",
"SR is the first try success rate in the navigation run task.",
"Success is achieved if the human navigator stops within a radius of 25 meters around the goal.",
"SNT is success weighted by navigation time: 1 N (cid:80) Ni =1 S i t i t i , where S i is a binary success indicator that is 1 if the annotator stops within a 25 meter radius around the goal.",
"t i is the time until the navigation run is finished.",
"We empirically estimate the expected navigation time t i as 1.3 seconds 9 per node in the route.",
"This estimation ranges from 45.5 seconds for routes with 35 nodes to 58.5 seconds for routes with 45 nodes.",
"SNT is inspired by SPL (Anderson et al., 2018a) but considers trajectory time instead of trajectory length.",
"Results of our experimental evaluation are shown in Table 3 and 4.",
"We evaluate on unseen data, i.e., routes without any overlap with routes in the training set, and on partially seen data, i.e., routes 9 Average over all successful navigation runs in the dataset.",
"randomly sampled from the training area with partial overlaps.",
"10 For the baseline models we perform the human evaluation on a 200 instances subset of the full 700 instances test set.",
"On the partially seen test set with 200 instances, our proposed graph-to-text models outperform the baseline models in terms of the success based metrics.",
"In the unseen setup, the rule based baseline achieves a better success rate, but falls short when success is weighted by navigation time.",
"This result shows that the instructions generated by the rule based system are exact by including all possible landmarks, but obviously do not resemble natural language and high evaluation time suggests that they are hard to read.",
"Despite moderate BLEU scores and reasonable amount of produced landmarks, the seq2seq baseline fails to generate useful navigation instructions.",
"The pretrained graph-to-text model performs better than its plain counterpart in the unseen setup.",
"It produces more correct landmarks and higher success rates.",
"In the extended evaluation the pretrained graph-to-text model is compared with the reference on 700 instances in each test set.",
"Under the central evaluation metric of success normalized by time (SNT), our model reaches .664 and .629 on partially seen and unseen test data, respectively.",
"An example output for each system together with the input map is shown in Figure 4.",
"The rule based instruction is complete, but ignores saliency 10 The data split is shown in the Appendix.",
"of landmarks and is hard to read.",
"The seq2seq baseline generates a navigation instruction that sounds human-like and also includes salient landmarks found on the map.",
"However, the directions are incorrect in this example.",
"The graph-to-text based models get the directions right and produce fluent natural language sentences.",
"They include landmarks at the correct sequential position.",
"A further qualitative evaluation of instructions generated by the graph-to-text models is given in the Appendix.",
"We presented a dataset and suitable graph-to-text architecture to generate landmark navigation instructions in natural language from OpenStreetMap geographical data.",
"Our neural model includes novel aspects such as a graphical representation of a route using angle labels.",
"Our dataset consists of a few thousand navigation instructions that are verified for successful human navigation.",
"The dataset is large enough to train a neural model to produce navigation instructions that are very similar in several aspects to human-generated instructions on partially seen test data.",
"However, performance naturally drops on unseen data including new types of landmarks in new combinations.",
"We would like to thank Christian Buck and Massimiliano Ciaramita for initial fruitful discussions about this work.",
"The research reported in this paper was supported by a Google Focused Research Award on Learning to Negotiate Answers in MultiPass Semantic Parsing."
] | [
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"When learning POS taggers and syntactic chunkers for low-resource languages, different resources may be available, and often all we have is a small tag dictionary, motivating type-constrained unsupervised induction.",
"Even small dictionaries can improve the performance of unsupervised induction algorithms.",
"This paper shows that performance can be further improved by including data that is readily available or can be easily obtained for most languages, i.e., eye-tracking, speech, or keystroke logs (or any combination thereof).",
"We project information from all these data sources into shared spaces, in which the union of words is represented.",
"For English unsupervised POS induction, the additional information, which is not required at test time, leads to an average error reduction on Ontonotes domains of 1.5% over systems augmented with state-of-the-art word embeddings.",
"On Penn Treebank the best model achieves 5.4% error reduction over a word embeddings baseline.",
"We also achieve significant improvements for syntactic chunk induction.",
"Our analysis shows that improvements are even bigger when the available tag dictionaries are smaller.",
"It is a core assumption in linguistics that humans have knowledge of grammar and that they use this knowledge to generate and process language.",
"Reading, writing, and talking leave traces of this knowledge and in psycholinguistics this data is used to analyze our grammatical competencies.",
"Psycholinguists are typically interested in falsifying a specific hypothesis about our grammatical competencies and therefore collect data with this hypothesis in mind.",
"In NLP, we typically require big, representative corpora.",
"NLP usually has in-Lea Frermann carried out this work while at the University of Edinburgh.",
"duced the models from expensive corpus annotations by professional linguists, but recently, a few researchers have shown that data traces from human processing can be used directly to improve NLP models (Klerke et al., 2016; Barrett et al., 2016; Plank, 2016).",
"In this paper, we investigate whether unsupervised POS induction and unsupervised syntactic chunking can be improved using human text processing traces.",
"We also explore what traces are beneficial, and how they are best combined.",
"Our work supplements psycholinguistic research by evaluating human data on larger scale than usual, but more robust unsupervised POS induction also contributes to NLP for low-resource languages for which professional annotators are hard to find, and where instead, data from native speakers can be used to augment unsupervised learning.",
"We explore three different modalities of data reflecting human processing plus standard, pre-trained distributional word embeddings for comparison, but also because some modalities might fare better when combined with distributional vectors.",
"Data reflecting human processing come from reading (two different eye-tracking corpora), speaking (prosody), and typing (keystroke log-ging).",
"We test three different methods of combining the different word representations:",
"a) canonical correlation analysis (CCA) (Faruqui and Dyer, 2014b) and",
"b) singular value decompo-sision and inverted softmax feature projection (SVD+IS) (Smith et al., 2017) and",
"c) simple concatenation of feature vectors.",
"Contributions We present experiments in unsupervised POS and syntactic chunk induction using multi-modal word representations, obtained from records of reading, speaking, and writing.",
"Individually, all modalities are known to contain syntactic processing signals, but to the best of our 2028 knowledge, we are the first to combine them in one model.",
"Our work extends on previous work in several respects:",
"(a) We compare using data traces from gaze, speech, and keystrokes.",
"(b) We consider three ways of combining such information that do not require access to data from all modalities for all words.",
"(c) While some previous work assumed access to gaze data at test time, our models do not assume access to any modalities at test time.",
"(d) We evaluate how much the additional information helps, depending on the size of the available tag dictionary.",
"(e) While related work on keystrokes and prosody focused on a single feature, all our word representations are multidimensional and continuous.",
"Eye-tracking data reflect the eye movements during reading and provide millisecond-accurate records of the readers fixations.",
"It is well established that the duration of the fixations reflect the processing load of the reader (Rayner, 1998).",
"Words from closed word classes are usually fix-ated less often and for shorter time than words from open word classes (Rayner and Duffy, 1988).",
"Psycholinguistics, however, is generally not interested in covering all linguistic categories, and psycholinguists typically do not study corpora, but focus instead on small suites of controlled examples in order to explore human cognition.",
"This is in contrast with NLP.",
"Some studies have, however, tried to bridge between psycholinguistics and NLP.",
"Demberg and Keller (2008) found that eye movements reflected syntactic complexity .",
"Barrett and Sgaard (2015a) and Barrett and Sgaard (2015b) have tried torespectivelypredict a full set of syntactic classes and syntactic functions across domains in supervised setups.",
"Barrett et al. (2016), which is the work most similar to ours, used eye-tracking features from the Dundee Corpus (Kennedy et al., 2003), which has been augmented with POS tags by Barrett et al. (2015).",
"They tried for POS induction both on token-level and type-level features.",
"They found that eyetracking features significantly improved tagging accuracy and that type-level eye-tracking features helped more than token-level.",
"We use the same architecture as Barrett et al. (2016).",
"Keystroke logs also reflect the processing durations, but of writing.",
"Pauses, burst and revisions in keystroke logs are used to investigate the cognitive process of writing (Matsuhashi, 1981; Baaijen et al., 2012).",
"Immonen and Makisalo (2010) found that for English-Finnish translation and monolingual Finnish text production, predicate phrases are often preceded by short pauses, whereas adpositional phrases are more likely to be preceded by long pauses.",
"Pauses preceding noun phrases grow with the length of the phrase.",
"They suggest that the difference is explained by the processing of the predicate begins before the production of the clause starts, whereas noun phrases and adpositional phrases are processed during writing.",
"Pre-word pauses from keystroke logs have been explored with respect to multi-word expressions (Goodkind and Rosenberg, 2015) and have also been used to aid shallow parsing (Plank, 2016) in a multi-task bi-LSTM setup.",
"Prosodic features provide knowledge about how words are pronounced (tone, duration, voice etc.).",
"Acoustic cues have already been used to improve unsupervised chunking (Pate and Goldwater, 2011) and parsing (Pate and Goldwater, 2013).",
"Pate and Goldwater (2011) cluster the acoustic signal and use cluster label as a discrete feature whereas Pate and Goldwater (2013) use a quantized word duration feature.",
"Plank (2016) and Goodkind and Rosenberg (2015) also used a single keystroke feature (keystroke pre-word pause) and the former study also discretized the feature.",
"Our work, in contrast, uses acoustic and keystroke features as multidimensional, continuous word representations.",
"In our experiments, we begin with five sets of word representations: prosody, keystroke, gaze as recorded in the GECO corpus, gaze as recorded in the Dundee corpus, as well as standard, text-based word embeddings from eigenwords.",
"See below for details and references.",
"All modalities except the pre-trained word embeddings reflect human processing of language.",
"For all modalities, we use type-level-averaged features of lower-cased word types.",
"The choice of using type-averaged features is motivated by Barrett et al. (2016), who tried both token-level and type-averaged eye-tracking features for POS induction and found that type-level gaze features worked better than token-level.",
"Type-averaged features also have the advantage of not relying on access to the auxillary data at test 2029 Eigenwords Dundee Geco Keystroke Prosody n word types 46973 9109 5817 2198 598 Main P r o s od y K e ys t r o k e G e c o D undee E i gen v e c t o r O t he r 1.2 4.1 5.9 12.5 100.0 3.9 13.5 18.2 100.0 46.0 8.9 28.4 100.0 48.1 57.2 15.7 100.0 44.5 55.9 63.0 100.0 81.1 71.7 83.0 97.8 20 40 60 80 100 Figure 1 : The percentage of overlapping word types for pairs of modalities.",
"Overlapping words are used for projecting word representations into a shared space.",
"Read column-wise.",
"E.g. when combining eigenwords and prosody, only 1.2% of the 46973 eigenvector word types are overlapping (bottom left), and 97.8% of the 598 prosody word types are overlapping (top right).",
"time.",
"Type-level averages are simply looked up in an embedding file for all previously seen words.",
"On the other hand, type-level features obviously do not represent ambiguities, e.g., beat as a verb and a noun separately.",
"All our features, except log-transformed word frequencies were normalized.",
"We run unsupervised induction experiments for all ( 2 5 1 = 31 ) combinations of our five data sources on the development sets to determine which data types contribute to the task.",
"We consider three different ways of combining modalities, two of which learn a projection into a shared space using word overlap as supervision, and one simply concatenates the embedding spaces.",
"The combination methods are further described in 4.",
"We list the number of word types per modality and percentage of pair-wise overlapping words in Figure 1.",
"We only use existing data from native speaking participants, for reproducibility and in order not to get learner effects ie.",
"biases introduced by non-native speakers.",
"3.2-3.5 describe each modality in detail, and how we compute the word representations.",
"3.1 describes a set of basic features used in all of our experiments.",
"Like Li et al. (2012), we append a small set of basic features to all our feature sets: features relating to orthography such as capitalization, digits and suffix.",
"Furthermore we append log word frequency and word length.",
"Word frequencies per million are Modality n found pairs Weigh.",
"Table 1 : Results on word association norms from wordvectors.org Correlation weighted by number of found pairs per word embedding type.",
"obtained from British National corpus (BNC) frequency lists (Kilgarriff, 1995).",
"Word length and word frequency explain around 70% of the variance in the eye movement (Carpenter and Just, 1983) and are therefore also important for estimating the impact of gaze features beyond such information.",
"Plank (2016) used keystroke features for shallow chunking and did not find any benefit of normalizing word length by pre-word pause before typing each word, but Goodkind and Rosenberg (2015) did find a strong logarithmic relationship between word length and pre-word pause as well as between word frequency and pre-word pause.",
"We use two different eye-tracking corpora.",
"The GECO corpus (Cop et al., 2017) and the Dundee Corpus (Kennedy et al., 2003) are the two largest eye movement corpora with respect to word count.",
"We use the native English part of the GECO corpus and the English part of the Dundee Corpus.",
"The GECO corpus is publicly available 1 and the Dundee Corpus is available for research purposes.",
"Participants and data The Dundee Corpus is described in Kennedy and Pynte (2005).",
"The Dundee Corpus consists of the eye movements of 10 readers as they read the same 20 newspaper articles.",
"For GECO, all 14 participants in the native English part read a full Agatha Christie novel.",
"Both corpora contain > 50 .",
"000 words per reader.",
"All participants for both corpora are adult, native speakers of English and skilled readers.",
"Self-paced reading Both eye-tracking corpora reflect natural reading by making the reading self-paced and using naturally-occurring, contextual-ized text.",
"1 http://expsy.ugent.be/downloads/geco/ 2030 Features Eye movementslike most features reflecting human processingare very susceptible to experiment-specific effects e.g. instructions and order effects such as fatigue.",
"Furthermore, the GECO corpus has a slightly different eye movement feature set than what we have for the Dundee corpus.",
"Therefore we treat the two eye movement corpora as two individual modalities in order to assess their individual contributions.",
"GECO has 34 features reflecting word-based processing.",
"Dundee has 30 word-based features that were extracted from the raw data and previously used for POS induction by Barrett et al. (2016).",
"For GECO, we use the features that are already extracted by the authors of the corpus.",
"Both corpora include five word-based features e.g., first fixation duration (which is a measure said to reflect early syntactic and semantic integration), total fixation time and fixation probability.",
"The Dundee Corpus has more features concerning the context words whereas GECO has pupil size and many features distinguishing the different passes over a word.",
"The prosody features are described in detail in Frermann and Frank (2017) and are freely available.",
"2 They are derived from the Brent (Brent and Siskind, 2001) and Providence (Demuth et al., 2006) portions of the CHILDES corpus (MacWhinney, 2000), comprising longitudinal datasets of raw speech directed to 22 children, and its transcription.",
"Word-level speech-text alignments were obtained automatically using forced alignment.",
"For each token-level audio snippet, a set of 88 prosody features was extracted based on a previously established feature set (Eyben et al., 2016), including standard features derived from F0F3 formants, spectral shape and rhythm features, intensity and MFCC features among others.",
"Type-level prosody features were obtained as averaged token-level features for each word type.",
"We extracted keystroke features from the publicly available data from Killourhy and Maxion (2012).",
"This data contains key hold times and pauses of all key presses of 20 subjects as they completed transcription and free composition tasks.",
"We only used data from the free composition part.",
"A pause is de-fined by the authors as the duration from keydown 2 https://github.com/ColiLea/prosodyAOA to keydown.",
"The free composition data consists of a total of 14890 typed words and 2198 word types.",
"For each word, we extracted the following features:",
"(i) average key hold duration of all characters associated with producing the word,",
"(ii) pre-word pause,",
"(iii) hold duration of space key before word,",
"(iv) pause length of space key press pause before word, and",
"(v) ratio of keypresses used in the word production to length of the final word.",
"For each word, we also included these five features for up to 3 words before.",
"In total, we have 5 4 = 20 keystroke features.",
"We use lower-cased word type averages, as with the other modalities.",
"Eigenwords are standard, pre-trained word embeddings, induced using spectral-learning techniques (Dhillon et al., 2015).",
"We used the 30-dimensional, pre-trained eigenvectors.",
"3 3.6 Preliminary evaluation Our application of these word representations and their combinations is unsupervised POS and syntactic chunk induction, but before presenting our projection methods in 4 and our experiments in 5, we present a preliminary evaluation of the different modalities using word association norms.",
"Table 1 shows the weighted correlation between cosine distances in the representations and the human ratings in the word association norm datasets available at wordvectors.org (Faruqui and Dyer, 2014a).",
"Eigenwords, not surprisingly, correlates better than the representation based on processing data with the exception of prosody.",
"The correlation with prosody is non-significant, however, because of the small sample size.",
"We now have word representations from different, complementary modalities, with very different coverages, but all including a small overlap.",
"We assume that the different modalities contain complementary human text processing traces because they reflect different cognitive processes, which motivates us to combine these different sources of information.",
"Our assumption is con-firmed in the evaluation.",
"The fact that we have very low coverage for some modalities, and the 3 http://www.cis.upenn.edu/ungar/ eigenwords/ 2031 fact that we have an overlap between all our vocabularies, specifically motivates an approach, in which we use the intersection of word types to learn a projection from two or more of these modalities into a shared space.",
"Obviously, we can also simply concatenate our representations, but because of the low coverage of some modalities and because co-projecting modalities has some regularization effect, we hypothesize that it is better to learn a projection into a shared space.",
"This hypothesis is verified by the results in 6.",
"The simplest way of combining the modalities is concatenating the corresponding vectors for each word.",
"The different modalities have different di-mensionalities, so we would need to perform dimensionality reduction to sum or average vectors, and the non-overlapping words don't allow for e.g. taking the outer product, so we simply concatenate the vectors instead.",
"We use 0 for missing values.",
"4.2 and 4.3 describe two different projection methods for projecting the representations in the different modalities into a shared space.",
"We use the intersection of the lower-cased vocabulary for the alignment, i.e., as a supervision signal.",
"For example, if the words man , dog and speak exist in both eigenword and keystroke data, from these 2 x 3 vectors, CCA estimate the transformation for the vectors for house , cat and boy , which (in this example) only exists in the keystroke data.",
"Canonical Correlation Analysis (CCA), as originally proposed by Hotelling (1936), is a method of finding the optimum linear combination between two sets of variables, so the set of variables are transformed onto a projected space while the correlation is maximized.",
"We use the implementation of Faruqui and Dyer (2014b) made for creating bilingual embeddings.",
"We use modalities instead of languages.",
"The size of the projected space is smaller than or equal to the original dimension.",
"We incrementally combine modalities and project them to new, shared spaces using the intersection of the lower-cased vocabulary.",
"We add them by the order of word type count starting with the modality with most word types.",
"For the first projection only, we reduce the size of the projected space.",
"We set the ratio of the first projected space (only two modalities) to 0.6 based on POS induction results on development data using the setup described in 5.",
"As an alternative to CCA, but closely related, we also use a projection method proposed and implemented by Smith et al. (2017), which uses singular value decomposition and inverted softmax (SVD+IS).",
"This method uses a reference space, rather than projecting all modalities into a new space.",
"Smith et al. (2017) apply SVD+IS to obtain an orthogonal transformation matrix that maps the source language into the target language.",
"In addition, in order to estimate their confidence on the predicted target, they use an inverted softmax function for determining the probability that a target word translates back into a source word.",
"Like for CCA, we incrementally project datasets onto each other starting with the most word-type rich modality.",
"We use the highest dimensionality of any of our representations (88 di-mensions).",
"This section presents our POS and syntactic chunk induction experiments.",
"We present the datasets we used in our experiments, the sequence tagging architecture, based on second-order hidden Markov models, as well as the dictionary we used to constrain inference at training and test time.",
"For unsupervised POS induction, we use Ontonotes 5.0 (Weischedel et al., 2013) for training, development and test.",
"We set all hyper-parameters on the newswire ( NW ) domain, optimizing performance on the development set.",
"Size of the development set is 154,146 tokens.",
"We run individual experiments on each of the seven domains, with these hyper-parameters, reporting performance on the relevant test set.",
"The domains are broadcast conversation ( BC ), broadcast news ( BN ), magazines ( MZ ), newswire ( NW ), the Bible ( PT ), telephone conversations ( TC ), and weblogs ( WB ).",
"We also train and test unsupervised POS induction on the CoNLL 2007 (Nivre et al., 2007) splits of the Penn Treebank (Marcus et al., 1993) using the hyper-parameter settings from Ontonotes.",
"We mapped all POS labels to Google's coarse-grained, universal POS tagset (Petrov et al., 2012).",
"For model selection, we select based both on best results on Ontonotes 2032 Rules DET NP VERB VP NOUN | PRONOUN | NUM NP .",
"Table 2 : Heuristics for expanding our POS dictionary to chunks",
"NW development as well as Penn Treebank development sets.",
"For syntactic chunk induction, we use the bracketing data from Penn Treebank with the standard splits for syntactic chunking.",
"We tune hyperpa-rameters for chunking on the development set and select best models based on the development result.",
"We used a modification of the implementation of a type-constrained, second-order hidden Markov model with maximum entropy emissions from Li et al. (2012) (SHMM-ME).",
"It is a second-order version of the first order maximum entropy HMM presented in (Berg-Kirkpatrick et al., 2010) with the important addition that it is constrained by a crowd-sourced tag dictionary (Wiktionary).",
"This means that for all words in the Wiktionary, the model is only allowed to predict one of the tags listed for it in Wiktionary The same model was used in Barrett et al. (2016) to improve unsupervised POS inducing using gaze data from the Dundee Corpus, and in Bin-gel et al. (2016) to augment an unsupervised POS tagger with features from fMRI recordings.",
"The number of EM iterations used for inducing our taggers was tuned using eigenvector embeddings on the development data, considering values 1..50.",
"PoS performance peaked at iterations 30 and 31.",
"We use 30 in all our POS experiments.",
"For syntactic chunking, we use 48 iterations, which led to the best performance on the PTB development data using only eigenword embeddings.",
"The Wiktionary constrains the predicted tags in our model.",
"The better the Wiktionary, the better the predictions.",
"For POS-tagging we used the same Wiktionary Feature set TA No embeddings 60.32 Eigenwords 59.26 Best combined models CCA Dun GECO Pros 63.33 * SVD+IS GECO Key Pros 62.91* Concat Eig GECO Key 61.16 Table 3 : Chunk tagging accuracy.",
"Best models from CCA, SVD+IS and concatenation.",
"Model section on development set.",
"* p < .",
"001 Mcnemar midp test when comparing to no embeddings.",
"p < .",
"001 Mcnemar midp test when comparing to",
"Eigenwords.) dump 4 that Li et al. (2012) used in their original experiments.",
"The Wiktionary dump associated word types with Google's universal parts-of-speech labels.",
"For chunking, Wiktionary does not provide direct information about the possible labels of words.",
"We instead apply simple heuristics to relate POS information to syntactic chunking labels.",
"Since we already know the relation between words and POS labels from Wiktionary, we can compute the transitive closure in order to obtain a dictionary relating words with syntactic chunking labels.",
"We present the heuristics in Table 2.",
"Note that the rules are rather simple.",
"We do not claim this is the best possible mapping.",
"We are relying on these simple heuristics only to show that it is possible to learn syntactic chunkers in an unsupervised fashion by relying on a combination of features from different modalities and a standard, crowd-sourced dictionary.",
"All our POS tagging accuracies can be seen in Table 4.",
"Our first observation is that human processing data helps unsupervised POS induction.",
"In fact, the models augmented with processing data are consistently better than the baseline without vector representations, as well as better than only using distributional word embeddings.",
"Generally, CCA seems to find the best projection into a common space for system combinations.",
"For Penn Treebank, the CCA-aligned model is the best and this result is significant ( p < 4 https://code.google.com/archive/p/wikily-supervised-pos-tagger/ 2033 Ontonotes PTB Feature set BC BN MZ NW PT TC WB avg No embeddings 83.1 84.41 85.32 84.94 85.14 77.8 85.93 83.81 82.83 Eigenwords 83.16 84.68* 85.48 85.07 85.31 78.07 85.88 83.95 83.38* Best Ontonotes NW models CCA Eig Dun 83.45 * 84.99 * 85.79* 85.38 * 85.2 77.99 86.38 * 84.17 84.28 * SVD+IS Dun GECO Key 83.24 84.76 86.22 * 85.33* 85.44 77.84 85.95 84.11 84.25* Concat Eig Dun GECO 83.39* 84.78* 85.8* 85.36* 85.45 78.38 * 86.21 84.19 83.91* Best PTB models CCA Eig Dun 83.45 * 84.99 * 85.79* 85.38 * 85.2 77.99 86.38 * 84.17 84.28 * SVD+IS Dun Key 83.24 84.59 86.12* 85.28* 85.39 77.90 85.86 84.05 84.24* Concat Eig Pros 83.22 84.54 85.67 85.01 84.98 77.98 85.97 83.91 84.22* Table 4 : POS tagging accuracies for baselines and the model combinations that performed best on newswire development data ( NW ).",
"Best performance per domain is boldfaced.",
"*) p < .",
"001 McNemar midp test when compared to the no embeddings condition for the corresponding test set.",
") p < .",
"001 McNemar midp test when compared to eigenwords for the corresponding test set.",
".",
"001 ) when comparing both to no embeddings and eigenwords.",
"For Ontonotes 5.0, CCA is better than the other projection methods in 4/7 domains, but when averaging, concatenation gets the higher result.",
"The standard embeddings are often part of the best combinations, but the human processing data contributes with important information; in 4/7 domains as well as on PTB data, we see a significantly better performance ( p < . 001 ) with a combination of modalities when comparing to eigenwords.",
"Aligning Dundee with eigenwords is the best POS model both according to the Ontonotes 5.0 NW development set and the Penn Treebank development set.",
"Dundee is the most frequent modality in the six best POS induction models with five appearances.",
"Eigenwords is second most frequent with four appearances.",
"The syntactic chunking accuracies are in Table 3.",
"Also here CCA is the better combination method.",
"For chunking, all combined models are better than no embeddings and eigenwords.",
"The improvement is significant compared to no embeddings for concatenation p < .",
"001 .",
"For CCA, the result is significantly better than no embeddings and eigenwords.",
"For chunking, GECO data appears in all best models and is thus the most frequent modalities.",
"Keystroke and prosody appears in two best models each.",
"Table 5 : Graph similarities in [0 , ) , 0 = identical.",
"Nearest neighbor graphs We include a detailed analysis of subgraphs of the nearest neighbor graphs in the embedding spaces of keystrokes, Dundee, GECO, and CCA projection of all modalities.",
"Specifically, we consider the nearest neighbor graphs among the 15 most frequent unam-bigous nouns, according to Wiktionary.",
"5 See Figure 2 for plots of the nearest neighbor graphs.",
"The prosody features containing less than 600 word types only contained 2 of the 15 nouns and is therefore not included in this analysis.",
"Projecting word representations into a shared space using linear methods assumes approximate isomorphism between the embedding spaces or at least their nearest neighbor graphs.",
"We use the VF2 algorithm (Cordella et al., 2001) to verify that the subgraphs are not isomorphic, but this can also be seen directly from Figure 2.",
"Neither keystroke and gaze embeddings, nor the two different gaze-induced embeddings are isomorphic.",
"5 Wiktionary is a crowd-sourced, imperfect dictionary, and one of the unambiguous nouns is spends , which, we assume, you are more likely to encounter as a verb.",
"modalities Figure 2 : Nearest neighbor graphs for 15 frequent nouns.",
"Since none of the modalities induce isomorphic nearest neighbor graphs, this does not tell us much about similarities between modalities.",
"To quantify the similarity of non-isomorphic graphs, we use eigenvector similarity Shigehalli and Shet-tar (2011), which we calculate by computing the Laplacian eigenvalues for the nearest neighbors, and for each graph, find the smallest k such that the sum of the k largest eigenvalues is < 90% of the eigenvalues.",
"We then take the smallest k of the two, and use the sum of the squared differences between the largest k eigenvalues as our similarity metric.",
"Using this metric to quantify graph similarity, we see in Table 5 that, not surprisingly, the gaze graphs are the most similar.",
"The projected space is more similar to the gaze spaces, but balances gaze and keystroke information.",
"The GECO embeddings agree more with the keystrokes than the Dundee embeddings does.",
"t-SNE plots We take words thataccording to the Wiktionarycan only have one tag and sort them by BNC frequency (Kilgarriff, 1995) in descending order.",
"For these words and their POS tags we get the feature vector of the POS model yielding the highest result on both Ontonotes and PTB: CCA-projected eigenwords and Dundee features.",
"For the first 200 occurrences of the frequency-sorted list, we reduce dimensionality using t-Distributed Stochastic Neighbor Embedding (t-SNE) (Maaten and Hinton, 2008) and plot the result.",
"Figure 3 shows that 200 most frequent content words cluster with respect to their POS tag, somewhat distinguishing verbs from nouns and adjectives from adverbs in CCA space.",
"Our Wiktionary for English contains POS information for 72,817 word types.",
"Word types have 6.2 possible POS categories on average meaning we have over 450.000 entries in our POS dictionary.",
"For Penn Treebank, 70.0% of wordtypes of the test set are covered by the dictionary.",
"For the chunking data, 70.4% of wordtypes of the test set are covered by the dictionary.",
"The English Wiktionary is thus much bigger than wik-tionaries for low-resource language (Garrette and Baldridge, 2013).",
"How big a dictionary is needed to achieve good performance, and can we get away with a smaller dictionary if we have processing data?",
"This section explores the performance of the model as a function of the Wiktionary size.",
"We sorted the Wiktionary by word frequency obtained from BNC (Kilgarriff, 1995) and increased the Wiktionary size for the best POS system starting with 0 (no dictionary).",
"For each Wiktionary size, we compare with the baseline without access to processing data and eigenwords.",
"The learning curve can be seen in Figure 4a and Figure 4b.",
"We observe that having entries for the most frequent words is a lot better than having no dictionary, and that the difference between our best system and the baseline exists across all dictionary sizes.",
"With 10,000 entries, all systems seems to reach a plateau.",
"Genres and domains When collecting our human language processing data, we did not control for genre.",
"Our data sets span child-directed speech, free text composition, and skilled adults reading fiction and newspaper articles.",
"The 2035 20 10 0 10 20 30 20 10 0 10 20 NOUN VERB",
"Figure 3 : t-SNE plots of CCA-projected eigen dundee features for pairs of tags.",
"(b) 0-150,000 entries Figure 4 : Learning curve assuming Wiktionary entries for k most frequent words, comparing our best PoS induction system against our baseline.",
"On Ontonotes WB development data, 30 training iterations.",
"Dundee corpus (newspaper articles) matches the genre of at least some of the Ontonotes test set.",
"Immonen and Makisalo (2010) found that for keystroke, genre does seem to have an effect on average pause length, be it sentence initial, word initial, clause initial or phrase initial.",
"Texts organized linearlye.g. reports and narrativesrequire less pausing than texts with a global approach, like expository, persuading and generalizing text.",
"Our results show that human processing features transfer across genres, but within-genre data would probably be beneficial for results.",
"Richer representations The type-level features we use, do not take context into account, and the datasets we use, are too small to enrich our representations.",
"Human processing data is more and more readily available, however.",
"Eye trackers are probably built into the next generation of consumer hardware, and speech records and keystroke logs are recordable with existing technology.",
"We have shown how to improve unsupervised POS induction and syntactic chunking significantly using data reflecting human language processing.",
"Our model, which is a second-order hidden Markov model, is the first to combine multidimensional, continuous features of eye movements, prosody and keystroke logs.",
"We have shown that these features can be combined using projection techniques, even when they only partially overlap in word coverage.",
"None of our models require access to these features at test time.",
"We experimented with all combinations of modalities, and our results indicate that eye tracking is useful for both chunking and POS induction.",
"Finally, we have shown that the potential impact of human processing data also applies in a low-resource setting, i.e., when available tag dictionaries are small.",
"Thanks to Desmond Elliott for valuable ideas.",
"This research was partially funded by the ERC Starting Grant LOWLANDS No. 313695, as well as by Trygfonden."
] | [
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"result",
"result",
"other",
"other"
] |
[
"In open-domain question answering, questions are highly likely to be ambiguous because users may not know the scope of relevant topics when formulating them.",
"Therefore, a system needs to find possible interpretations of the question, and predict one or multiple plausible answers.",
"When multiple plausible answers are found, the system should rewrite the question for each answer to resolve the ambiguity.",
"In this paper, we present a model that aggregates and combines evidence from multiple passages to adaptively predict a single answer or a set of question-answer pairs for ambiguous questions.",
"In addition, we propose a novel round-trip prediction approach to iteratively generate additional interpretations that our model fails to find in the first pass, and then verify and filter out the incorrect question-answer pairs to arrive at the final disambiguated output.",
"Our model, named REFUEL , achieves a new state-of-the-art performance on the AMBIGQA dataset, and shows competitive performance on NQ-OPEN and TriviaQA.",
"The proposed round-trip prediction is a model-agnostic general approach for answering ambiguous open-domain questions, which improves our REFUEL as well as several baseline models.",
"We release source code for our models and experiments at https://github.",
"com/amzn/refuel-open-domain-qa .",
"Open-domain Question Answering (QA) is the task of answering questions using a collection of passages with diverse topics (Chen et al., 2017; Guu et al., 2020; Karpukhin et al., 2020).",
"Open-domain questions are highly likely to be ambiguous because people may not have the knowledge of relevant topics when formulating them.",
"For example, in Figure 1, the prompt question What's the most Work done during an internship at AWS AI. Prompt Question (Google search query): What's the most points scored in an NBA game? Disambiguated QA Pairs: Q 1 : What's the most points scored in an NBA game by combined team? / A 1 : 370 Q 2 : What's the most points scored in an NBA game by a single team? / A 2 : 186 Q 3 : What's the most points scored in an NBA game by an individual? / A 3 : 100 Relevant Wikipedia Page 1 : The highest-scoring regular season game is the triple-overtime game between ... the two teams combined to score 370 points, with the pistons defeating the nuggets 186184 ... Relevant Wikipedia Page 2 : Wilt Chamberlain scored an nba-record 100 points ... Figure 1: An example from the AMBIGQA (Min et al., 2020) dataset. The Prompt Question is gathered from Google search queries and has three interpretations upon reading Wikipedia. Disambiguated QA Pairs are the full set of acceptable answers, paired with the disambiguated rewriting of the prompt question. points scored in an NBA game? is ambiguous because the score in this question could be interpreted as the combined score in a game (Q 1 A 1 ), score from a single team (Q 2 A 2 ), or score from an individual player (Q 3 A 3 ).",
"Therefore, a system needs to adaptively predict a single answer, or a set of equally plausible answers when the question has multiple interpretations.",
"When a set of multiple answers is predicted, an unambiguous rewriting of the question that leads to each answer should also be provided to clarify each interpretation.",
"Min et al. (2020) decompose this problem into two subtasks.",
"Given the prompt question and Wikipedia passages, the first subtask, Answer Prediction , consists in predicting one or several plausible answers, depending on whether this question is ambiguous or not.",
"If multiple answers are predicted, the second subtask, Question Disambiguation , requires generating a disambiguated question for each of the plausible answers.",
"They propose SPANSEQGEN , which first retrieves and reranks passages using the prompt question, and then adopts a BART pre-trained sequence-to-sequence model (Lewis et al., 2020a) to generate all plausible answers, conditioned on the concatenation of the prompt question and top 8 passages.",
"For the question disambiguation subtask, based on BART, they first pre-train a question generation model on NQ-OPEN (Kwiatkowski et al., 2019), a large-scale open-domain QA dataset, to generate the question given the answer and top 8 passages.",
"Then they fine-tune it as a question disambiguation model to generate the disambiguated question conditioned on the prompt question, answer, and passages.",
"There are three main drawbacks to SPANSEQGEN .",
"Firstly, a complete coverage of all relevant passages is essential for predicting all plausible answers of the ambiguous question.",
"However, SPANSEQGEN only takes 8 passages for answer prediction so some of the most informative passages might be excluded.",
"Secondly, for the question disambiguation subtask, there is a mismatch between question generation pre-training on NQ-OPEN and question disambiguation finetuning on AMBIGQA there is no question to disambiguate in question generation pre-training, which makes the pre-training task somewhat misaligned with fine-tuning.",
"Thirdly, SPANSEQGEN predicts a much smaller average number of answers compared to the ground truth data (1.17 vs. 2.19).",
"To address these issues, we propose REFUEL , Round-trip Evidence FUsion via gEneration with retrievaL, a new framework for answering ambiguous open-domain questions.",
"To ensure a broad coverage of relevant knowledge of the question, REFUEL reads 12 times more passages (100 in our experiments) than SPANSEQGEN by using Fusion-in-Decoder (Izacard and Grave, 2020) that processes each passage individually in the encoder, and then fused their encodings together in the decoder.",
"For the question disambiguation subtask, we propose a token-deletion pre-training task to transform NQ-OPEN into an ambiguous QA setting by randomly deleting an informative span for each question.",
"Thus, pre-training and fine-tuning tasks are well aligned.",
"Additionally, we add an insertion-based weighted loss to emphasize the newly inserted tokens in the disambiguated question, which helps the model on learning to resolve the ambiguity.",
"Finally, we propose a round-trip prediction approach to find additional interpretations that REFUEL fails to predict in the first pass.",
"We continuously feed the generated questions into REFUEL until there are no new answers predicted from our model.",
"While this round-trip prediction can improve the recall of answers, we refine the quality of predicted QA pairs by filtering them with the conditional probability of the answers estimated by an answer-generation model.",
"Our REFUEL achieves a new state-of-the-art on the AMBIGQA dataset, outperforming the previous best model SPANSEQGEN by 9.1% in answer prediction F1 and 4.4% in Edit-F1 score for question disambiguation.",
"When directly doing inference on NQ-OPEN and TriviaQA, REFUEL not only predicts the single answer precisely but also finds multiple interpretations if the question is ambiguous.",
"Moreover, human evaluation shows that REFUEL can correctly generate more QA pairs on all three datasets.",
"Finally, the proposed round-trip prediction is a model-agnostic general approach for answering ambiguous questions, which improves our REFUEL as well as several baseline models up to 3.7% for the overall performance.",
"The main contributions of this work, which are fundamental to significantly push the state-of-the-art in answering ambiguous questions, can be summarized as follows: 1. We present an evidence aggregation approach that can effectively use a large number of passages to uncover more candidate interpretations of the ambiguous question.",
"2. We propose a token-deletion pre-training task to reduce the mismatch between pre-training and fine-tuning for question disambiguation.",
"The insertion-based weighted loss further helps to capture answer-relevant constraints.",
"3. We propose a round-trip prediction approach to find more interpretations missed in the first prediction pass, which we further refine using a conditional-probability-based filtering approach.",
"REFUELREFUEL answers questions through a three-step process illustrated in Figure 2:",
"1. The Passage Retrieval & Reranking module retrieves question-relevant passages from the whole Wikipedia corpus.",
"Then the retrieved passages are further reranked (Sec. 2.1).",
"2. Taking the reranked passages and the prompt question as input, our single pass QA pair generation model makes the first prediction Prompt Question Q !",
"pass to predict a single answer or a set of disambiguated QA pairs (Sec. 2.2).",
"3. Our proposed Round-Trip Prediction can find more interpretations missed in the first prediction pass, which we further refine using a conditional-probability-based filtering approach (Sec. 2.3).",
"We use Dense Passage Retriever (DPR) (Karpukhin et al., 2020) for retrieval.",
"First, we split all Wikipedia pages into 100-token passages, resulting in 24M passages in total.",
"Then DPR maps all passages into d -dimensional vectors, computes the representation of the prompt question, and retrieves N passages whose vectors are closest to the question vector (we use N=1000).",
"After retrieving N passages for the prompt question, we fine-tune BERT (Devlin et al., 2019) to rerank these passages.",
"Taking the concatenation of the prompt question and each passage as input, the reranker allows a token-level cross-attention between the prompt question and passages.",
"The relevance score is then derived by taking the [CLS] vector of the input sequence into a linear layer.",
"After reranking, the QA pair generation model takes the top K passages as inputs (we use K=100).",
"The single pass QA pair generation step includes an Answer Prediction module and a Question Disambiguation module.",
"Firstly, taking the reranked passages and the prompt question Q p as input, the Answer Prediction module generates one or multiple plausible answers A 1 , ..., A m .",
"If multiple plausible answers are found, the prompt question is treated as ambiguous so that the Question Disambiguation module generates a disambiguated question Q di for each predicted answer A i .",
"Note that our general pipeline in Figure 2 does not limit the implementation of Answer Prediction module and Question Disambiguation module, and it can work for our REFUEL as well as several baselines (shown in Sec. 4.3).",
"Our implementation is detailed in Sec. 3. 2.3 Round-Trip Prediction During answering ambiguous questions, it might be difficult to find every possible interpretation in the first prediction pass, and existing work (Min et al., 2020) predicts 47% less answers compared with the ground truth.",
"Therefore, we propose round-trip prediction , which includes a Round-Trip Generation step and a Language Model Verification Step.",
"Round-Trip Generation.",
"Keeping the same retrieved passages, we continuously feed the generated disambiguated questions into the Answer Prediction module to check if any new answers are generated, and generate their corresponding disambiguated questions until there are no newly predicted answers.",
"As exemplified in Figure 2, ( Q d 1 , A 1 ) , ( Q d 2 , A 2 ) are two disambiguated QA pairs of the ambiguous prompt question Q p after the first prediction pass.",
"When feeding Q d 1 to the Answer Prediction module again ( 1 st Round-Trip Prediction), we find that besides the previously predicted answer A 1 , a new answer candidate A 3 is predicted.",
"Then we generate its corresponding question Q d 3 accordingly.",
"This loop continues until Prompt Question Q !",
"Language Model Verification.",
"Through the Round-Trip Generation, we generate a bunch of QA pairs from the ambiguous prompt question, but some of them are incorrect.",
"Here we adopt a verification process to filter out these incorrect predictions.",
"Recent works in synthetic QA pair generation (Alberti et al., 2019; Puri et al., 2020) use an Exact Match (EM) Verification approach to prune the QA pairs.",
"They separately train a QA model as the verification model, and drop the predicted ( q, a ) when the verification model's answer a (cid:48) (cid:54) = a .",
"However, this EM Verification approach is only suitable for factoid reading comprehension tasks such as SQuAD (Rajpurkar et al., 2016), in which the QA model has near-human accuracy so that it will not falsely filter out too many correct QA pairs.",
"In open-domain QA, the current best model can only have 51.4% EM accuracy on the NQ-OPEN dataset (Izacard and Grave, 2020).",
"Instead of using hard filtering, we employ a Language Model (LM) Verification approach that is similar to the LM filtering method of Shakeri et al. (2020).",
"LM Verification is a conditional-probability-based approach to filter out QA pairs softly.",
"In LM Verification, we first train a conditional language model using the gold disambiguated QA pairs from AMBIGQA.",
"The conditional language model is trained to estimate the likelihood of an answer given the golden disambiguated question.",
"Once training is done, it is used to score the generated QA pair ( q, a ) from REFUEL , which is the likelihood of the answer a given the question q and passages, LM score = N a i =1 log p ( a i | q, passages ) , (1) where N a is the length of the generated answer.",
"Finally, we rerank all predicted QA pairs according to the LM score, and drop the QA pairs according to a threshold Th = 6 .",
"1 .",
"The threshold is tuned according using the development set.",
"3.1 Answer Prediction SPANSEQGEN (Min et al., 2020) concatenates the prompt question and top reranked passages into a single sequence for BART encoding, which is extremely limited by the maximum input sequence length of BART (1024 subwords, equivalent to 8 passages).",
"Consequently, SPANSEQGEN finds fewer interpretations of the prompt question compared to the ground truth (1.17 vs 2.19).",
"To ensure a broad coverage of retrieved & reranked passages, our Answer Prediction module uses the Fusion-in-Decoder approach (Izacard and Grave, 2020), which allows us to scale the number of processed passages.",
"As shown in Figure 3, our BART-based Answer Prediction module BARTAP encodes the concatenation of the prompt question and each passage independently.",
"Then all encoded token-level representations are concatenated into a single sequence, and the BARTAP decoder performs attention over all passages to aggregate and combine evidence.",
"Finally, the BARTAP decoder generates a sequence of plausible answers token-by-token, separated by [SEP] .",
"Since there is no cross-passage attention in the encoder, BARTAP encoder reduces the computation from quadratic in the number of input passages to linear complexity.",
"As a result, it can process 12 times larger number of input passages (up to 100 passages, 16000 subwords) than SPANSEQGEN .",
"Given that AMBIGQA is a small dataset with only 10k training samples, we first pre-train BARTAP on NQ-OPEN to predict a single answer, then fine-tune it on AMBIGQA to predict one or multiple answers.",
"If multiple answers are predicted, the Question Disambiguation module is activated to generate a disambiguated rewriting of the prompt question for each predicted answer.",
"Because we do not know which input passage is the key evidence to derive the predicted answer, the Question Disambiguation module takes the same passages in the Answer Prediction stage as inputs.",
"Similar to the Answer Prediction module BARTAP , our Question Disambiguation module BARTQD processes the inputs under the same fashion except that BARTQD encoder additionally takes the predicted answer A i from BARTAP in the input (shown in Figure 3).",
"Token-Deletion Pre-training.",
"Similar to the training scheme of the Answer Prediction module, we also want to leverage the large-scale NQ-OPEN data for pre-training.",
"One straightforward way is to train a question generation model on NQ-OPEN that generates questions given the passages and answer, and then fine-tune it for question disambiguation on AMBIGQA given the prompt question, answer, and passages.",
"However, there is no input question to disambiguate in the question generation pre-training task, it leads to a mismatch between pre-training and fine-tuning.",
"Ablation study shows this way of pre-training has almost no help for question disambiguation (Section 4.5).",
"To reduce the mismatch issue between pretraining and fine-tuning, we propose a Token-Deletion Pre-training task.",
"The idea is to construct synthetic ambiguous questions in pre-training to reduce the mismatch.",
"Given a question Q from NQ-OPEN , we randomly delete an informative span from it, resulting in a partial question Q s .",
"This partial question is designed to simulate the ambiguous question Q p in the fine-tuning stage.",
"Then the token-deletion pre-training target is to recover the complete question Q from the partial question Q s , answer, and passages.",
"In this way, the token-deletion pre-training aligns the fine-tuning phase.",
"Prompt questions are usually rewritten by adding new constraints including event/entity references, properties, answer types, etc.",
"For example, the disambiguated question Q 1 in Figure 1 inserts by a combined team after the ambiguous prompt question.",
"Therefore, we define the informative span as the span containing at least one of the following Part-of-Speech tags: 'ADJ', 'NOUN', 'NUM', 'PROPN', 'SYM', 'VERB'.",
"The length of the span is uniformly sampled in [1 , 5] .",
"Insertion-based Weighted Loss.",
"Since the disambiguated question is a small modification from the ambiguous prompt question, most tokens can be directly copied from the input.",
"Here we introduce an insertion-based weighted loss to put more emphasis on the newly added tokens of the disambiguated question, which could be the key to disambiguate the prompt question.",
"Given the prompt question Q p , we find the newly inserted tokens from the disambiguated question Q d : { q in } .",
"The final loss for fine-tuning BARTQD is a combination of the original negative log-likelihood loss on all question tokens augmented with a term that adds weight on the likelihood of inserted tokens: L = L nll (cid:88) q j { q in } log ( q j | A, Q p , Psg ) , (2) where L nll = (cid:80) ni =1 log ( q i | A, Q p , Psg ) , n is the number of tokens in the disambiguated question, = 3 .",
"5 is a hyperparameter tuned on the dev.",
"set.",
"Dataset.",
"We conduct main experiments on the AMBIGQA dataset (Min et al., 2020).",
"AMBIGQA is constructed to address the ambiguity of questions in open-domain QA.",
"It samples 14,042 questions from NQ-OPEN , a large-scale open-domain QA dataset in which each question has a single answer (Kwiatkowski et al., 2019), and asks annotators to search for, navigate and read multiple Wikipedia pages to find as many interpretations as possible.",
"As a result, each question is annotated with either a single answer or multiple disambiguated QA pairs, depending on how many interpretations can be found.",
"The train, development, and test (not public) dataset sizes are 10036, 2002, 2004, respectively 1 .",
"On average, there are 2.1 distinct answers per question in AMBIGQA.",
"To test the generalization ability of REFUEL on any possibly ambiguous questions, we additionally evaluate it on two open-domain QA datasets: NQ-OPEN and TriviaQA (Joshi et al., 2017).",
"Implementation Details are in Appendix A. We release source code for our models and experiments at https: //github.com/amzn/refuel-open-domain-qa .",
"Evaluation Metrics.",
"Let ( q 1 , a 1 ) , ..., ( q m , a m ) be m QA pair predictions, ( q 1 , a 1 ) , ..., ( q n , a n ) be n gold QA pairs, each predicted QA pair ( q i , a i ) is evaluated in order by a correctness score towards all gold QA pairs: c i = 1 ( a i = a j ) f ( q i , q j ) , where f ( q i , q j ) is a similarity function for questions.",
"( q j , a j ) will not be further used to evaluate 1 Leaderboard: https://nlp.cs.washington.",
"other predicted QA pairs as it is used for ( q i , a i ) .",
"The overall correctness is calculated by F1 between predictions and references, P f = (cid:80) mi =1 c i m , R f = (cid:80) mi =1 c i n , F1 f = 2 P f R f P f + R f .",
"All examples are evaluated for the answer prediction subtask, in which f function always yields 1. This metric is denoted as F1 ans (all).",
"For the subset of examples with multiple gold QA pairs, both answer prediction subtask and question disambiguation subtask are evaluated.",
"The answer prediction metric only computed on this subset is denoted as F1 ans (multi).",
"To evaluate question disambiguation performance, BLEU (Papineni et al., 2002) and EDIT-F1 is used for the function f , denoted as F1 BLEU and F1 EDIT-F1 , respectively.",
"EDIT-F1 compute the F1 score of added and deleted uni-grams from the prompt question to the predicted disambiguated question towards references.",
"Main Results.",
"Performance on the dev.",
"and hidden test set of AMBIGQA is shown in Table 1. Even without having round-trip prediction, REFUEL (w/o RTP) outperforms SPANSEQGEN on both the answer prediction subtask and question disambiguation subtask by a large margin.",
"Moreover, the round-trip prediction indeed further improves the performance by finding more and better QA pairs, going from 1.55 to 1.72 pairs per prompt question on the dev.",
"set.",
"A comprehensive analysis on the round-trip prediction is discussed in Sec 4.3.",
"Controlled Comparison with SPANSEQGEN .",
"Besides round-trip prediction, REFUEL has two advantages over SPANSEQGEN in terms of input passages: (1) We retrieve top N=1000 passages (instead of 100 in SPANSEQGEN ) to get a higher answer recall at top 100 passages (improved from Model N K #QAs F1 ans F1 EDIT-F1 SPANSEQGEN 100 8 1.17 39.7 7.2 SPANSEQGEN * 100 8 1.14 41.7 7.1 REFUEL (w/o RTP) 100 8 1.42 44.7 10.0 REFUEL (w/o RTP) 100 100 1.54 45.4 10.7 REFUEL (w/o RTP) 1000 100 1.55 48.4 11.2 Table 2: Dev.",
"86.2 to 89.7).",
"(2) REFUEL takes K=100 input passages whereas SPANSEQGEN takes at most 1024 subwords (K 8).",
"To establish a controlled and fair comparison, we remove the round-trip prediction part of REFUEL , and feed REFUEL (w/o RTP) with the same input passages used in SPANSEQGEN (N=100, K=8).",
"Results are shown in Table 2. We find (1) Under the same number of passages, REFUEL (w/o RTP) (N=100, K=8) still outperforms SPANSEQGEN and generates more and better QA pairs; (2) REFUEL (w/o RTP) benefits from increasing the answer recall of retrieval stage (N = 100 1000 ), as well as allowing more input passages (K = 8 100 ).",
"how well does REFUEL answer any open-domain questions, we evaluate REFUEL on NQ-OPEN and Triv-Models",
"+ Round-Trip Generation & LM Verification 1.28 ( 12.3%) 42.4* 29.9* 13.0* 7.4* 49.8* ( 2.1%) Table 4: Effect of round-trip prediction to harvest more interpretations (QA pairs) on the development set of AMBIGQA.",
"and denotes the improvement gain over the model without round-trip prediction.",
"*: The model with Round-Trip Generation & LM Verification is significantly better than the same model without it under a paired bootstrap test with 10 5 samples ( p-value < 0.05).",
"iaQA without finetuning on these datasets.",
"When REFUEL predicts multiple answers, we take the first predicted answer for EM evaluation; we also introduce a new Oracle EM metric which treat the prediction is correct if the gold answer matches any predicted answers for the current question.",
"Table 3 shows that REFUEL has competitive performance even without dataset-specific finetuning.",
"When REFUEL finds multiple interpretations for questions in NQ-OPEN & TriviaQA, we manually check the quality of disambiguated QA pairs in Section 4.4.",
"We compare our proposed Round-Trip Prediction (Round-Trip Prediction = Round-Trip Generation + LM Verification) with several alternative approaches, as well as investigate its generalization ability to other models like SPANSEQGEN and DPR Reader.",
"Results are shown in Table 4. Round-Trip Generation Only.",
"We investigate the necessity of the verification process by conducting only round-trip generation to REFUEL .",
"Results show that Round-Trip Generation can generate 33.5% more QA pairs, but the lower F1 ans (all) suggests that this strategy may over-generate QA pairs when the prompt question is not ambiguous.",
"Hence, the verification process is necessary to prune some incorrect QAs.",
"LM Verification vs. EM Verification.",
"As described in section 2.3, we compare the existing EM Verification approach (Alberti et al., 2019; Puri et al., 2020) with our LM Verification.",
"Results demonstrate that EM Verification prunes too many QA pairs the number of remaining QA pairs (1.43) is even smaller than not doing round-trip prediction (1.55).",
"This validates our intuition in section 2.3 that EM Verification is not suitable for open-domain QA tasks because of the low perfor-Models Dataset #QAs #C-QAs #CD-QAs SPANSEQGENAMBIGQA 2.12 1.40 0.46 0.27 REFUEL w/o RTP AMBIGQA 2.80 1.84 0.98 0.35 REFUELAMBIGQA 3.44 2.40 1.24 0.34 REFUEL w/o RTP NQ-OPEN 2.32 1.30 0.64 0.20 REFUELNQ-OPEN 3.20 1.72 0.88 0.21 REFUEL w/o RTP TriviaQA 2.08 1.02 0.46 0.34 REFUEL TriviaQA 3.24 1.84 0.82 0.35 Table 5: Human evaluation results.",
"Generalization to Other Models.",
"We show that round-trip prediction is a model-agnostic general approach for answering possibly ambiguous open-domain questions by using it on our replicated baseline models: DPR Reader and SPANSEQGEN .",
"With the help of round-trip prediction, DPR Reader and SPANSEQGEN generates 11.7% and 12.3% more QA pairs, which result in a boost of 3.7% and 2.1% for the overall performance (Comb.).",
"Since the answers collected in AMBIGQA are not necessarily exhaustive, there is a possibility that a model generates correct interpretations but they are missed in AMBIGQA.",
"Therefore, we hire 3 workers from MTurk.com to evaluate the correctness of the answer given the generated disambiguated question and retrieved passages (instructions in Appendix C).",
"Let ( q 1 , a 1 ) , ..., ( q n , a n ) be n generated QA pairs from the same prompt question, we define two levels of correctness as follows: #C-QAs : ( q i , a i ) is considered C orrect if a i is a correct answer of q i ; #CD-QAs : ( q i , a i ) is considered correct iff.",
"(1) a i is a correct answer of q i and (2) any a j ( j (cid:54) = i ) is a wrong answer of q i .",
"#CD-QAs is designed to examine the C orrectness of ques-Pre-train Method + Fine-tune Method F1 BLEU F1 EDIT-F1 Prompt Baseline 18.9 0.0 None + QDF 16.2 10.1 None + QDF (w/ filtered passages) 16.4 9.4 QGP + QDF 15.9 10.3 TDP + QDF 16.5 10.9 TDP + QDF (w/ insertion-based loss) 16.0 11.2 Table 6: Ablation Study of REFUEL for the question disambiguation subtask on the dev.",
"tion D isambiguation because ambiguous questions can have multiple valid answers.",
"We take the majority judgement from 3 annotators for each QA pair.",
"For each dataset, we randomly sample 50 prompt questions which have multiple predicted answers, and apply the QA swapping strategy in #CD-QAs, resulting 960 question-answer-passages triples in total.",
"Results in Table 5 show that REFUEL (w/o RTP) can correctly generate 113% more QA pairs than SPANSEQGEN on #CD-QAs.",
"In addition, round-trip prediction (RTP) can find more correct interpretations across all datasets.",
"Table 6 compares our question disambiguation model with the prompt baseline and several ablations.",
"The prompt baseline directly takes the prompt question as the disambiguated prediction, so its F1 EDIT-F1 is zero.",
"However, F1 BLEU score of the prompt baseline is higher than REFUEL .",
"This suggests that F1 EDIT-F1 captures the effectiveness of question disambiguation better than F1 BLEU .",
"For our ablations, we start from only using AMBIGQA dataset (None+Q DF), and investigate whether it is helpful to only use answer-containing passages as inputs (None+Q DF w/ filtered pas-sages).",
"The worse result of the latter approach suggests that we should keep all passages for question disambiguation.",
"Second, we examine the effectiveness of pre-training.",
"We try the question generation pre-training (QG P+Q DF) and compare it with the ablation without any pre-training (None+Q DF).",
"Results show that the question generation pre-training has little help for fine-tuning.",
"By replacing the question generation pre-training QGP with our proposed token-deletion pre-training TDP, we see the results (TD P+Q DF) are better than the no pretraining ablation (None+Q DF), which implies the mismatch between pre-training and fine-tuning are somewhat reduced.",
"Finally, the insertion-based Prompt question #1: What's the most points scored in an nba game?",
"Relevant Passages: (w/ rank from retrieval & reranking) Rank 1: ... the highest-scoring regular season game is ... the two teams combined to score 370 points, with the pistons defeating the nuggets 186184 ...",
"Rank 3: wilt chamberlain scored an nba-record 100 points.",
"the highest-scoring playoff game is the double-overtime game between ... the two teams combined to score 304 points, with the trail blazers defeating the suns 153151 ...",
"loss enables REFUEL to capture the key disambiguation phrase with less copying the prompt question, resulting in a lower BLEU but higher Edit-F1.",
"Figure 4 provides example question-answer pairs generated by crowd-workers, REFUEL (w/o RTP), and REFUEL .",
"The annotator find three interpretations from the prompt question, while our single pass model REFUEL (w/o RTP) finds in total four interpretations (QA1-4).",
"Although QA2 predicted from our model is not included in the references, it is indeed a correct interpretation of the prompt question.",
"In addition, the Round-Trip Prediction approach finds two correct interpretations (QA5, QA6) which the model fails to predict on the first generation pass.",
"More cases are shown in Appendix F. 5 Related Work Open-Domain Question Answering is answering factoid questions using a huge collection of documents such as Wikipedia pages (Voorhees, 1999; Chen et al., 2017; Yang et al., 2019; Lee et al., 2019; Wang et al., 2019).",
"We are motivated by the recent proposed question ambiguity problem in open-domain QA (Min et al., 2020).",
"Different from the existing formulation of open-domain QA that each question only has a single answer, the proposed AMBIGQA task requires to predict a single answer or a set of disambiguated QA pairs depending on the ambiguity of the input question.",
"They also propose the first model SPANSEQGEN to this task, which firstly uses the dense passage retriever (Karpukhin et al., 2020) to retrieve question-relevant passages, and then adopts a retrieval-augmented generation method (Lewis et al., 2020b) to disambiguated QA pairs.",
"Our REFUEL follow Min et al. (2020)'s task formulation and overall pipeline, but there are three differences between our REFUEL and SPANSEQGEN : (1) REFUEL takes the architecture of Fusion-in-Decoder (Izacard and Grave, 2020) that can effectively use a large number of passages to uncover more candidate interpretations of the ambiguous question.",
"(2) We propose a token-deletion pretraining task to reduce the mismatch between pretraining and fine-tuning for question disambiguation.",
"The insertion-based weighted loss further helps to capture answer-relevant constraints.",
"(3) We propose a model-agnostic round-trip prediction approach to find more interpretations missed in the first prediction pass, which we further refine using a conditional-probability-based filtering approach.",
"In this paper, we present REFUEL to answer ambiguous open-domain questions.",
"REFUEL is a generative approach to aggregate and combine evidence from multiple passages for multiple rounds which can find more and better interpretations.",
"REFUEL achieves a new state-of-the-art on AMBIGQA, and shows competitive performance on NQ-OPEN and TriviaQA.",
"The proposed round-trip prediction is a general approach for answering ambiguous open-domain questions, which improves our REFUEL as well as several baseline models."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective"
] |
[
"Automatic hashtag annotation plays an important role in content understanding for microblog posts.",
"To date, progress made in this field has been restricted to phrase selection from limited candidates, or word-level hashtag discovery using topic models.",
"Different from previous work considering hashtags to be inseparable, our work is the first effort to annotate hashtags with a novel sequence generation framework via viewing the hashtag as a short sequence of words.",
"Moreover, to address the data sparsity issue in processing short microblog posts, we propose to jointly model the target posts and the conversation contexts initiated by them with bidirectional attention.",
"Extensive experimental results on two large-scale datasets, newly collected from English Twitter and Chinese Weibo, show that our model significantly outperforms state-of-the-art models based on classification.",
"1 Further studies demonstrate our ability to effectively generate rare and even unseen hashtags, which is however not possible for most existing methods.",
"Microblogs have become an essential outlet for individuals to voice opinions and exchange information.",
"Millions of user-generated messages are produced every day, far outpacing the human be-ing's reading and understanding capacity.",
"As a result, the current decade has witnessed the increasing demand for effectively discovering gist information from large microblog texts.",
"To identify the key content of a microblog post, hashtags, user-generated labels prefixed with a # (such as #NAACL and #DeepLearning ), have been * This work was mainly done when Yue Wang was an intern at Tencent AI Lab.",
"Target post for hashtag generation This Azarenka woman needs a talking to from the umpire her weird noises are totes inappropes professionally.",
"#AusOpen Replying messages forming a conversation [T1] How annoying is she.",
"I just worked out what she sounds like one of those turbo charged cars when they change gear or speed.",
"[T2] On the topic of noises, I was at the Nadal-Tomic game last night and I loved how quiet Tomic was compared to Nadal .",
"[T3] He seems to have a shitload of talent and the postmatch press conf.",
"He showed a lot of maturity and he seems nice.",
"[T4] Tomic has a fantastic tennis brain...",
"widely used to reflect keyphrases (Zhang et al., 2016, 2018) or topics (Yan et al., 2013; Hong et al., 2012; Li et al., 2016).",
"Hashtags can further ben-efit downstream applications, such as microblog search (Efron, 2010; Bansal et al., 2015), summarization (Zhang et al., 2013; Chang et al., 2013), sentiment analysis (Davidov et al., 2010; Wang et al., 2011), and so forth.",
"Despite the widespread use of hashtags, there are a large number of microblog messages without any user-provided hashtags.",
"For example, less than 15 % tweets contain at least one hashtag (Wang et al., 2011; Khabiri et al., 2012).",
"Consequently, for a multitude of posts without human-annotated hashtags, there exists a pressing need for automating the hashtag annotation process for them.",
"Most previous work in this field focuses on extracting phrases from target posts (Zhang et al., 2016, 2018) or selecting candidates from a pre-defined list (Gong and Zhang, 2016; Huang et al., 2016; Zhang et al., 2017).",
"However, hashtags usually appear in neither the target posts nor the given candidate list.",
"The reasons are two folds.",
"For one thing, microblogs allow large freedom for users to write whatever hashtags they like.",
"For another, due to the wide range and rapid change of social media topics, a vast variety of hashtags can be daily created, making it impossible to be covered by a fixed candidate list.",
"Prior research from another line employs topic models to generate topic words as hashtags (Gong et al., 2015; Zhang et al., 2016).",
"These methods, ascribed to the limitation of most topic models, are nevertheless incapable of producing phrase-level hashtags.",
"In this paper, we approach hashtag annotation from a novel sequence generation framework.",
"In doing so, we enable phrase-level hashtags beyond the target posts or the given candidates to be created.",
"Here, hashtags are first considered as a sequence of tokens (e.g., #DeepLearning as deep learning ).",
"Then, built upon the success of sequence to sequence (seq2seq) model on language generation (Sutskever et al., 2014), we present a neural seq2seq model to generate hashtags in a word-by-word manner.",
"To the best of our knowledge, we are the first to deal with hashtag annotation in sequence generation architecture .",
"In processing microblog posts, one major challenge we might face is the limited features to be encoded.",
"It is mostly caused by the data sparsity exhibited in short and informal microblog posts.",
"2 To illustrate such challenge, Table 1 displays a sample Twitter post tagged with #Au-sOpen , referring to Australian Open tennis tournament.",
"Only given the short post, it is difficult to understand why it is tagged with #AusOpen , not to mention that neither aus nor open appear in the target post.",
"In such a situation, how shall we generate hashtags for a post with limited words?",
"To address the data sparsity challenge, we exploit conversations initiated by the target posts to enrich their contexts.",
"Our approach is bene-fited from the nature that most messages in a conversation tend to focus on relevant topics.",
"Content in conversations might hence provide contexts facilitating the understanding of the original post (Chang et al., 2013; Li et al., 2015).",
"The effects of conversation contexts, useful on topic 2 For instance, the eligible length of a post on Twitter or Weibo is up to 140 characters.",
"modeling (Li et al., 2016, 2018) and keyphrase extraction (Zhang et al., 2018), have never been explored on microblog hashtag generation.",
"To show why conversation contexts are useful, we display in Table 1 a conversation snippet formed by some replies of the sample target post.",
"As can be seen, key content words in the conversation (e.g., Nadal , Tomic , and tennis ) are useful to reflect the relevance of the target post to the hashtag #AusOpen , because Nadal and Tomic are both professional tennis players.",
"Concretely, our model employs a dual encoder (i.e., two en-coders), one for the target post and the other for the conversation context, to capture the representations from the two sources .",
"Furthermore, to capture their joint effects, we employ the bidirectional attention ( bi-attention ) (Seo et al., 2016) to explore the interactions between two encoders' outputs.",
"Afterward, an attentive decoder is applied to generate the word sequence of the hashtag.",
"In experiments, we construct two large-scale datasets, one from English platform Twitter and the other from Chinese Weibo.",
"Experimental results based on both information retrieval and text summarization metrics show that our model generates hashtags closer to human-annotated ones than all the comparison models.",
"For example, our model achieves 45 .",
"03 % ROUGE-1 F1 on Weibo, compared to 25 .",
"34 % given by the state-of-the-art classification-based method.",
"Further comparisons with classification-based models show that our model, in a sequence generation framework, can better produce rare and even new hashtags.",
"To summarize, our contributions are three-fold: We are the first to approach microblog hashtag annotation with sequence generation architecture.",
"To alleviate data sparsity, we enrich context for short target posts with their conversations and employ a bi-attention mechanism for capturing their interactions.",
"In this section, we describe our framework shown in Figure",
"1. There are two major modules: a dual encoder to encode both target posts and their conversations with a bi-attention to explore their interactions, and a decoder to generate hashtags.",
"Input and Output.",
"Formally, given a target post x p formulated as word sequence (cid:104) x p 1 , x p 2 , ..., x p | x p | (cid:105) and its conversation context x c formulated as word sequence (cid:104) x c 1 , x c 2 , ..., x c | x c | (cid:105) , where | x p | and | x c | denote the number of words in the input target post and its conversation, respectively, our goal is to output a hashtag y represented by a word sequence (cid:104) y 1 , y 2 , ..., y | y | (cid:105) .",
"For training instances tagged with multiple gold-standard hashtags, we copy the instances multiple times, each with one gold-standard hashtag following Meng et al. (2017).",
"All the input target posts, their conversations, and the hashtags share the same vocabulary V .",
"Dual Encoder.",
"To capture representations from both target posts and conversation contexts, we design a dual encoder, composed of a post encoder and a conversation encoder, each taking the x p and x c as input, respectively.",
"For the post encoder, we use a bidirectional gated recurrent unit (Bi-GRU) (Cho et al., 2014) to encode the target post x p , where its embeddings e ( x p ) are mapped into hidden states h p = (cid:104) h p 1 , h p 2 , ..., h p | x p | (cid:105) .",
"Specifically, h pi = [ h pi ; h pi ] is the concatenation of forward hidden state h pi and backward hidden state h pi for the i -th token: h pi = GRU ( e ( x pi ) , h pi 1 ) , (1) h pi = GRU ( e ( x pi ) , h pi +1 ) .",
"(2) Likewise, the conversation encoder converts conversations into hidden states h c via another Bi-GRU.",
"The dimensions of both h p and h c are d .",
"Bi-attention.",
"To further distill useful representations from our two encoders, we employ the bi-attention to explore the interactions between the target posts and their conversations.",
"The adoption of bi-attention is inspired by Seo et al. (2016), where the bi-attention was applied to extract query-aware contexts for machine comprehension.",
"Our intuition is that the content concerning the key points in target posts might have their relevant words frequently appearing in their conversation contexts, and vice versa.",
"In general, such content can reflect what the target posts focus on and hence effectively indicate what hashtags should be generated.",
"For instance, in Table 1, names of tennis players (e.g., Azarenka , Nadal , and Tomic ) are mentioned many times in both target posts and their conversations, which reveals why the hashtag is #AusOpen .",
"To this end, we first put a post-aware attention on the conversation encoder with coefficients: cij = exp( f score ( h pi , h cj )) (cid:80) | x c | j (cid:48) =1 exp( f score ( h pi , h cj (cid:48) )) , (3) where the alignment score function f score ( h pi , h cj ) = h pi W bi att h cj captures the similarity of the i -th word in the target post and the j -th word in its conversation.",
"Here W bi att R d d is a weight matrix to be learned.",
"Then, we compute a context vector r c conveying post-aware conversation representations, where the i -th value is defined as: r ci = | x c | (cid:88) j =1 cij h cj .",
"Analogously, a conversation-aware attention on post encoder is used to capture the conversation-aware post representations as r p .",
"Merge Layer.",
"Next, to further fuse representations distilled by the bi-attention on each encoder, we design a merge layer, a multilayer perceptron (MLP) activated by hyperbolic function: v p = tanh( W p [ h p ; r c ] + b p ) , (5) v c = tanh( W c [ h c ; r p ] + b c ) , (6) where W p , W c R d 2 d and b p , b c R d are trainable parameters.",
"Note that either v p or v c conveys the information from both posts and conversations, but with a different emphasis.",
"Specifically, v p mainly retains the contexts of posts with the auxiliary information from conversations, while v c does the opposite.",
"Finally, vectors v p and v c are concatenated and fed into the decoder for hashtag generation.",
"Decoder.",
"Given the representations v = [ v p ; v c ] produced by our dual encoder with bi-attention, we apply an attention-based GRU decoder to generate a word sequence y as the hashtag.",
"The probability to generate the hashtag conditioned on a target post and its conversation is defined as: P r ( y | x p , x c ) = | y | (cid:89) t =1 P r ( y t | y <t , x p , x c ) , (7) where y <t refers to ( y 1 , y 2 , ..., y t 1 ) .",
"Concretely, when generating the t -th word in hashtag, the decoder emits a hidden state vector s t R d and puts a global attention over v .",
"The attention aims to exploit indicative representations from the encoder outputs v and summarizes them into a context vector c t defined as: c t = | x p | + | x c | (cid:88) i =1 dti v i , (8) dti = exp( g score ( s t , v i )) (cid:80) | x p | + | x c | i (cid:48) =1 exp( g score ( s t , v i (cid:48) ) , (9) where g score ( s t , v i ) = s t W att v i is another alignment function ( W att R d d ) to measure the similarity between s t and v i .",
"Finally, we map the current hidden state s t of the decoder together with the context vector c t to a word distribution over the vocabulary V via: P r ( y t | y <t , x p , x c ) = softmax ( W v [ s t ; c t ]+ b v ) , (10) which reflects how likely a word to be the t -th word in the generated hashtag sequence.",
"Here W v RV 2 d and b v RV are trainable weights.",
"Learning and Inferring Hashtags.",
"During the training stage, we apply stochastic gradient descent to minimize the loss function of our entire framework, which is defined as: L () = N (cid:88) n =1 log( P r ( y n | x pn , x cn ; )) .",
"Here N is the number of training instances and denotes the set of all the learnable parameters.",
"In hashtag inference, based on the produced word distribution at each time step, word selection is conducted using beam search.",
"In doing so, we generate a ranking list of output hashtags, where the top K hashtags serve as our final output.",
"Datasets and Statistic Analysis.",
"Two large-scale experiment datasets are newly collected from popular microblog platforms: an English Twitter dataset and a Chinese Weibo dataset.",
"The Twitter dataset was built based on the TREC 2011 microblog track.",
"3 To recover the conversations, we used Tweet Search API to fetch in-reply-to relations in a recursive way.",
"The Weibo dataset was collected from January to August 2014 using Weibo Search API via searching messages with the trending queries 4 as keywords.",
"For gold-standard hashtags, we take the user-annotated hashtags, appearing before or after a post, as the reference.",
"5 The statistics of our datasets are shown in Table",
"2. We randomly split both datasets into three subsets, where 80 %, 10 %, and 10 % of the data corresponds to training, development, and test sets, respectively.",
"To further investigate how challenging our problem is, we show some statistics of the hashtags in Table 3 and the distributions of hashtag frequency in Figure",
"2. In Table 3, we observe 3 https://trec.nist.gov/data/tweets/ 4 http://open.weibo.com/wiki/Trends/ 5 Hashtags in the middle of a post are not considered here as they generally act as semantic elements (Zhang et al., 2016, 2018).",
"the large size of hashtags in both datasets.",
"Moreover, Figure 2 indicates that most hashtags only appear a few times.",
"Given such a large and imbalanced hashtag space, hashtag selection from a candidate list, as many existing methods do, might not perform well.",
"Table 3 also shows that only a small proportion of hashtags appearing in their posts, conversations, and either of them, making it inappropriate to directly extract words from the two sources to form hashtags.",
"Preprocessing.",
"For tokenization and word segmentation, we employed the tweet preprocessing toolkit released by Baziotis et al. (2017) for Twitter, and the Jieba toolkit 6 for Weibo.",
"Then, for both Twitter and Weibo, we further take the following preprocessing steps: First, single-character hashtags were filtered out for not being meaningful.",
"Second, generic tags, i.e., links, mentions (@username), and numbers, were replaced with URL MENTION, and DIGIT, respectively.",
"Third, inappropriate replies (e.g., retweet-only messages) were removed, and the remainder were chronologically ordered to form a sequence as conversation contexts.",
"Last, a vocabulary was maintained with the 30 K and 50 K most frequent words, for Twitter and Weibo, respectively.",
"Comparisons.",
"For experiment comparisons, we first consider a weak baseline RANDOM that randomly ranks hashtags seen from training data.",
"Two unsupervised baselines are also considered, where words are ranked by latent topics induced with the latent Dirichlet allocation topic model (henceforth LDA), and by their TF-IDF scores (henceforth TF-IDF ).",
"Here for TF-IDF scores, we consider the N -gram TF-IDF ( N 5 ).",
"Besides, we compare with supervised models below: EXTRACTOR : Following Zhang et al. (2018), we extract phrases from target posts as hashtags 6 https://pypi.python.org/pypi/jieba/ via sequence tagging and encode conversations with memory networks (Sukhbaatar et al., 2015).",
"CLASSIFIER : We compare with the state-of-the-art model based on classification (Gong and Zhang, 2016), where hashtags are selected from candidates seen in training data.",
"Here two versions of their classifier are considered, one only taking a target post as input (henceforth CLASSIFIER ( post only )) and the other taking the concatenation of a target post and its conversation as input (hence-forth CLASSIFIER ( post+conv )).",
"GENERATOR : A seq2seq generator (hence-forth SEQ 2S EQ ) (Sutskever et al., 2014) is applied to generate hashtags given a target post.",
"We also consider its variant augmented with copy mechanism (Gu et al., 2016) (henceforth SEQ 2S EQCOPY ), which has proven effective in keyphrase generation (Meng et al., 2017) and also takes the post as input.",
"The proposed seq2seq with the bi-attention to encode both the post and its conversation is denoted as OUR MODEL for simplicity.",
"Model Settings.",
"We conduct model tunings on the development set based on grid search, where the hyper-parameters that give the lowest objective loss are selected.",
"For the sequence generation models, the implementations are based on the OpenNMT framework (Klein et al., 2017).",
"The word embeddings, with dimension set to 200 , are randomly initialized.",
"For encoders, we employ two layers of Bi-GRU cells, and for decoders, one layer of GRU cell is used.",
"The hidden size of all GRUs is set to 300 .",
"In learning, we use the Adam optimizer (Kingma and Ba, 2014) with the learning rate initialized to 0 .",
"001 .",
"We adopt the early-stop strategy: the learning rate decreases by a de-cay rate of 0 .",
"5 till either it is below 1 e 6 or the validation loss stops decreasing.",
"The norm of gradients is rescaled to 1 if the L 2 -norm > 1 is observed.",
"The dropout rate is 0 .",
"1 and the batch size is 64 .",
"In inference, we set the beam-size to 20 and the maximum sequence length of a hashtag to 10 .",
"For CLASSIFIER and EXTRACTOR , lacking publicly available codes, we reimplement the models using Keras.",
"7 Their results are reproduced in their original experiment settings.",
"For LDA, we employ an open source toolkit lda.",
"8 Evaluation Metrics.",
"Popular information re-trival evaluation metrics F1 scores at K (F1@K) 7 https://keras.io/ 8 https://pypi.org/project/lda/ Model Twitter Weibo F1@1 F1@5 MAP RG-1 RG-4 F1@1 F1@5 MAP RG-1 RG-4 Baselines RANDOM 0.37 0.63 0.89 0.56 0.16 0.43 0.67 0.97 2.14 1.13 LDA 0.13 0.25 0.35 0.60 -0.10 0.86 0.94 3.89 TF-IDF 0.02 0.02 0.03 0.54 0.14 0.85 0.73 1.30 8.04 4.29 EXTRACTOR 0.44 -1.14 0.14 2.53 -7.64 5.20 State of the arts CLASSIFIER ( post only ) 9.44 6.36 12.71 10.75 4.00 16.92 10.48 22.29 25.34 21.95 CLASSIFIER ( post+conv ) 8.54 6.28 12.10 10.00 2.47 17.25 11.03 23.11 25.16 22.09 GENERATORSSEQ 2S EQ 10.44 6.73 14.00 10.52 4.08 26.00 14.43 32.74 37.37 32.67 SEQ 2S EQ-COPY 10.63 6.87 14.21 12.05 4.36 25.29 14.10 31.63 37.58 32.69 OUR MODEL 12.29 * 8.29 * 15.94 * 13.73 * 4.45 31.96 * 17.39 * 38.79 * 45.03 * 39.73 * Table 4: Comparison results on Twitter and Weibo datasets (in %).",
"and mean average precision (MAP) scores (Man-ning et al., 2008) are reported.",
"Here, different K values are tested on F1@K and result in a similar trend, so only F1@1 and F1@5 are reported.",
"MAP scores are also computed given the top 5 outputs.",
"Besides, as we consider a hashtag as a sequence of words, ROUGE metrics for summarization evaluation (Lin, 2004) are also adopted.",
"Here, we use ROUGE F1 for the top-ranked hashtag prediction computed by an open source toolkit pythonrouge, 9 with Porter stemmer used for English tweets.",
"For Weibo posts, scores calculated at the Chinese character level following Li et al. (2018).",
"We report the average scores for multiple gold-standard hashtags on ROUGE evaluation.",
"In this section, we first report the main comparison results in Section 4.1, followed by an in-depth comparative study between classification and sequence generation models in Section 4.2.",
"Further discussions are then presented to analyze our superiority and errors in Section 4.3.",
"Table 4 reports the main comparison results.",
"For CLASSIFIER , their outputs are ranked according to the logits after a softmax layer.",
"For EXTRACTOR , it is unable to produce ranked hashtags and thus no results are reported for F1@5 and MAP.",
"For LDA, as it cannot generate bigram hashtags, no results are presented for ROUGE-SU4.",
"In general, we have the following observations: 9 https://github.com/tagucci/ pythonrouge Hashtag annotation is more challenging for Twitter than Weibo.",
"Generally, all models perform worse on Twitter measured by different metrics.",
"The intrinsic reason is the essential language difference between English and Chinese microblogs.",
"English allows higher freedom in writing, resulting in more variety in Twitter hashtags (e.g., abbreviations are prominent like aus in #AusOpen ).",
"For statistical reasons, Twitter hashtags are more likely to be absent in either posts or conversations (Table 3), and have a more severe imbalanced distribution (Figure 2).",
"Topic models and extractive models are ineffective for hashtag annotation .",
"The poor performance of all baseline models indicates that hashtag annotation is a challenging problem.",
"LDA sometimes performs even worse than RANDOM due to its inability to produce phrase-level hashtags.",
"For extractive models, both TF-IDF and EXTRACTOR fail to achieve good results.",
"It is because most hashtags are absent in target posts, as we see in Table 3 that only 2 .",
"72 % hashtags on Twitter and 8 .",
"29 % on Weibo appear in target posts.",
"This confirms that extractive models, relying on word selection from target posts, cannot well fit the hashtag annotation scenario.",
"For the same reason, copy mechanism fails to bring noticeable improvements for the seq2seq generator on both datasets.",
"Sequence generation models outperform other counterparts.",
"When comparing GENERATORS with other models, we find the former uniformly achieve better results, showing the superiority to produce hashtags with sequence generation framework.",
"Classification models, though as the state of the art, expose their inferiority as they select labels from the large and imbalanced hashtag space (reflected in Table 3 and Figure 2).",
"Conversations are useful for hashtag generation.",
"Among the sequence generation models, OUR MODEL achieves the best performance across all the metrics.",
"The observation indicates the usefulness of bi-attention in exploiting the joint effects of target posts and their conversations, which further helps in identifying indicative features from both sources for hashtag generation.",
"However, interestingly, incorporating conversations fails to boost the classification performance.",
"The reason why OUR MODEL better exploits conversations than CLASSIFIER ( post+conv ) might be that we can attend the indicative features when decoding each word in the hashtag, which is however not possible for classification models (considering hashtags to be inseparable).",
"From Table 4, we observe that the classifiers outperform topic models and extractive models by a large margin but exhibit generally worse results than sequence generation models.",
"Here, we present a thorough study to compare hashtag classification and generation.",
"Four models are selected for comparison: two classifiers, CLASSIFIER ( post only ) and CLASSIFIER ( post+conv ), and two sequence generation models, SEQ 2S EQ and OUR MODEL .",
"Below, we explore how they perform to predict rare and new hashtags.",
"Rare Hashtags.",
"According to the hashtag distributions in Figure 2, we can see a large proportion of hashtags appearing only a few times in the data.",
"To study how models perform to predict such hashtags, in Figure 3, we display their F1@1 scores in inferring hashtags with varying frequency.",
"The lower F1 score on less frequent hashtags indicates the difficulty to yield rare hashtags.",
"The reason probably comes from the overfit-ting issue caused by limited data to learn from.",
"We also observe that sequence generation models achieve consistently better F1@1 scores on hashtags with varying sparsity degree, while classification models suffer from the label sparsity issue and obtain worse results.",
"The better performance of the former might result from the word-by-word generation manner in hashtag generation, which enables the internal structure of hashtags (how words form a hashtag) to be exploited.",
"New Hashtags.",
"To further explore the extreme situation where hashtags are absent in the training set, we experiment to see how models perform in handling new hashtags.",
"To this end, we additionally collect instances tagged with hashtags absent in training data and construct an external test set, with the same size as our original test set.",
"Considering that classifiers will never predict unseen labels, to ensure comparable performance, we only adopt summarization metrics here for evaluation and report ROUGE-1 F1 scores in Table 5.",
"As can be seen, creating unseen hashtags is a challenging task, where unsurprisingly, all models perform poorly on this task.",
"Nevertheless, sequence generation models perform much better on both datasets, e.g., at least 6.5x improvements over classification models observed on Weibo dataset.",
"For Twitter dataset, the improvements are not that large, which confirms again that hashtag annotation on Twitter is more difficult due to the noisier data characteristics.",
"In particular, compared to SEQ 2 SEQ , OUR MODEL achieves an additional performance gain in producing new hashtags by leveraging conversations with the bi-attention.",
"To further analyze our model, we conduct a quantitative ablation study, a qualitative case study, and an error analysis.",
"We then discuss them in turn.",
"Ablation Study.",
"We report the ablation study results in Table 6 to examine the relative contributions of the target posts and the conversation contexts.",
"To this end, our model is compared with its five variants below: SEQ 2S EQ ( post only ), SEQ 2S EQ ( conv only ), and SEQ 2S EQ ( post+conv ), using standard seq2seq to generate hashtags from their target posts, conversation contexts, and their concatenation, respectively; OUR MODEL ( post-att only ) and OUR MODEL ( conv-att only ), whose decoder only takes v p and v c defined in Eq.",
"(5) and Eq.",
"(6), respectively.",
"The results show that solely encoding target posts is more effective than modeling the conversations alone, but exploring their joint effects can further boost the performance, especially combined with a bi-attention mechanism over them.",
"Case Study.",
"We further present a case study on the target post shown in Table 1, where the top five outputs of some comparison models are displayed in Table 7.",
"As can be seen, only our model successfully generates aus open , the gold standard.",
"Particularly, it not only ranks the correct answer as the top prediction, but also outputs other semantically similar hashtags, e.g., sport-related terms like bbc football , arsenal , and murray .",
"On the contrary, CLASSIFIER and SEQ 2S EQ tend to yield frequent hashtags, such as just saying and jan 25 .",
"Baseline models also perform poorly: LDA produces some common single word, and TF-IDF extracts phrases in the target post, where the gold-standard hashtag is however absent.",
"aus open matches the gold-standard hashtag.",
"To analyze why our model obtains superior results in this case, we display the heatmap in Figure 4 to visualize our bi-attention weight matrix W bi att .",
"As we can see, bi-attention can identify the indicative word Azarenka in the target post, via highlighting its other pertinent words in conversations, e.g., Nadal and tennis .",
"In doing so, salient words in both the post and its conversations can be unveiled, facilitating the correct hashtag aus open to be generated.",
"Error Analysis.",
"Taking a closer look at our outputs, we find that one type of major errors comes from the unmatched outputs with gold standards, even as a close guess.",
"For example, our model predicts super bowl for a post tagged with #steel-ers , a team in super bowl.",
"In future work, the semantic similarity should be considered in hashtag evaluation.",
"Another primary type of error is caused by the non-topic hashtags, such as #fb (indicating the messages forwarded from Face-book).",
"Such non-topic hashtags cannot reflect any content information from target posts and should be distinguished from topic hashtags in the future.",
"Our work mainly builds on two streams of previous work microblog hashtag annotation and neural language generation.",
"We are in the line of microblog hashtag annotation.",
"Some prior work extracts phrases from target posts with sequence tagging models (Zhang et al., 2016, 2018).",
"Another popular approach is to apply classifiers and select hashtags from a candidate list (Heymann et al., 2008; Weston et al., 2014; Sedhai and Sun, 2014; Gong and Zhang, 2016; Huang et al., 2016; Zhang et al., 2017).",
"Unlike them, we generate hashtags with a language generation framework, where hashtags in neither the target posts nor the pre-defined candidate list can be created.",
"Topic models are also widely applied to induce topic words as hashtags (Krestel et al., 2009; Ding et al., 2012; Godin et al., 2013; Gong et al., 2015; Zhang et al., 2016).",
"However, these models are usually unable to produce phrase-level hashtags, which can be achieved by ours via generating hashtag word sequences with a decoder.",
"Our work is also closely related to neural language generation, where the encoder-decoder framework (Sutskever et al., 2014) acts as a springboard for many sequence generation models.",
"In particular, we are inspired by the keyphrase generation studies for scientific articles (Meng et al., 2017; Ye and Wang, 2018; Chen et al., 2018, 2019), incorporating word extraction and generation using a seq2seq model with copy mechanism.",
"However, our hashtag generation task is inherently different from theirs.",
"As we can see from Table 4, it is suboptimal to directly apply keyphrase generation models on our data.",
"The reason mostly lies in the informal language style of microblog users in writing both target posts and their hashtags.",
"To adapt our model on microblog data, we explore the effects of conversation contexts on hashtag generation, which has never been studied in any prior work before.",
"We have presented a novel framework of hashtag generation via jointly modeling of target posts and conversation contexts.",
"To this end, we have proposed a neural seq2seq model with bi-attention over a dual encoder for capturing indicative representations from the two sources.",
"Experimental results on two newly collected datasets have demonstrated that our proposed model significantly outperforms existing state-of-the-art models.",
"Further studies have shown that our model can effectively generate rare and even unseen hashtags.",
"This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund).",
"We thank NAACL reviewers for their insightful suggestions on various aspects of this work."
] | [
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"objective",
"objective",
"objective",
"objective",
"result",
"other",
"other"
] |
[
"[email protected]",
"Abstract",
"Paraphrase identification involves identifying whether a pair of sentences express the same or similar meanings.",
"While cross-encoders have achieved high performances across several benchmarks, bi-encoders such as SBERT have been widely applied to sentence pair tasks.",
"They exhibit substantially lower computation complexity and are better suited to symmetric tasks.",
"In this work, we adopt a bi-encoder approach to the paraphrase identification task, and investigate the impact of explicitly incorporating predicate-argument information into SBERT through weighted aggregation.",
"Experiments on six paraphrase identification datasets demonstrate that, with a minimal increase in parameters, the proposed model is able to outperform SBERT/SRoBERTa significantly.",
"Further, ablation studies reveal that the predicate-argument based component plays a significant role in the performance gain.",
"Paraphrases are sentences that express the same or similar meanings with different wording (Bhagat and Hovy, 2013).",
"Paraphrase pairs are either fully or largely semantically equivalent.",
"For example:",
"a) Marriage equality law passed in Rhode Island",
"b) Rhode Island becomes the 10th state to enact marriage equality It is generally considered to be a symmetric task where the paraphrase relation holds in both directions (Bhagat and Hovy, 2013; Yang et al., 2019).",
"Since word order and sentence structure are crucial in determining sentence meaning, effective paraphrase models must be structure-aware and word order sensitive.",
"In light of this, paraphrase datasets have been created that are specifically designed to encourage models to consider structural differences (Xu et al., 2015; Zhang et al., 2019b).",
"For example, PIT2015 (Xu et al., 2015) consists of paraphrase pairs that are lexically diverse and non-paraphrase pairs that are lexically similar but semantically dissimilar.",
"There are generally two pre-trained based approaches for sentence pair tasks such as paraphrase identification.",
"The first is the cross-encoder approach, which involves concatenating the two input sentences and performing full-attention over the input.",
"The second is the bi-encoder approach, which adopts a conjoined twin network structure and maps each sentence onto separate representations, which can then be compared using similarity measures such as cosine.",
"Though typical cross-encoders like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) have set state-of-the-art performance on various sentence pair tasks (Zhang et al., 2021; Xia et al., 2021), they still face challenges from both extreme computational overhead for many use cases (Reimers and Gurevych, 2019; Thakur et al., 2021) and inconsistent predictions (ranging from 2.66% to 8.46% depending on specific datasets) when dealing with symmetric tasks (Chen et al., 2020).",
"In contrast, a bi-encoder approach such as Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) encodes sentences separately and generates high-quality embeddings for each of them.",
"This architecture enables sentence embeddings to be pre-computed, supporting efficient indexing and comparison between different sequences.",
"Due to the nature of bi-encoders, the symmetry property will be preserved as long as no asymmetry is introduced in subsequent layers.",
"These properties make bi-encoders appealing for the paraphrase identification task.",
"Accordingly, here, we focus on bi-encoders rather than cross-encoders.",
"very simple strategy, which is mean-pooling over all tokens, to generate sentence embeddings.",
"As previously discussed, models should ideally be sensitive to any structural differences between two sentences.",
"Relational Graph Convolutional Networks (RGCNs) (Schlichtkrull et al., 2018) have been used to introduce structural information (e.g. dependency/semantic parse trees) into SBERT and improvements have been reported on unsupervised similarity comparison tasks (Peng et al., 2021).",
"One drawback of RGCNs is the size of the parameter space.",
"For example, a single-layer RGCN can involve more than 30 million parameters.",
"Furthermore, as we will demonstrate, the performance gain on different paraphrase identification datasets is not consistent.",
"An important aspect of sentence meaning concerns its predicate-argument structure.",
"This has been utilised to generate paraphrases (Kozlowski et al., 2003) and to compare sentence meanings (Shan et al., 2009).",
"Inspired by the Self-Explain model (Sun et al., 2020) which uses a span-based framework to generate sentence embeddings, we propose a method that effectively introduces sentence structure into SBERT via the aggregation of predicate-argument spans.",
"This self-attention based aggregation allows us to gain benefits with minimal increased cost in terms of additional parameters.",
"Empirical results indicate that the proposed model yields improvements on six benchmarks for paraphrase identification.",
"Upon closer investigation, we find the predicate-argument span (PAS) component plays a crucial role in the performance gains and can be easily generalised to other models.",
"The problem of paraphrase identification has been explored now for several decades (Mihalcea et al., 2006; Kozareva and Montoyo, 2006).",
"Prior to the emergence of pre-trained models, bi-encoder structures were widely used.",
"For example, Mueller and Thyagarajan (2016) applied LSTM in a twin architecture with tied weights and used Manhattan distance to compute similarity.",
"InferSent (Con-neau et al., 2017) exploited BiLSTM in a similar twin structure with a fully-connected layer for classification over interacted sentence embeddings.",
"Although their model was mainly proposed for transfer learning, experiments showed that it achieves good performance when directly trained on in-domain data.",
"Some bi-encoders do not generate single-vector sentence embeddings and allow direct comparisons between the words in the two sentences.",
"Pang et al. (2016) proposed MatchPyramid where interaction matrix is constructed, and convolutional networks were used to extract features for final classification.",
"PMWI (He and Lin, 2016) introduced more fine-grained comparisons between words to better dissect the meaning difference.",
"ESIM (Chen et al., 2017) further utilised BiLSTM to bring contextualised token representations and allow rich interactions between tokens.",
"Researchers have further improved these models by incorporating context and structure information (Liu et al., 2019a), as well as character-level information (Lan and Xu, 2018).",
"After the emergence of pre-trained models, cross-encoders like BERT and RoBERTa have achieved state-of-the-art performance on various sentence pair tasks including paraphrase identification.",
"Zhang et al. (2019a) introduced pairwise word interaction mechanism into BERT.",
"Zhang et al. (2021) improved BERT on paraphrase tasks by using CNNs to gather local information and an auxiliary task to further bring in semantic relation information.",
"Xia et al. (2021) injected similarity matrices into BERT's attention mechanism.",
"Though improved performance can be obtained, cross-encoders have known drawbacks.",
"In particular, Reimers and Gurevych (2019) showed the extreme computation overhead of cross-encoders, and Chen et al. (2020) demonstrated that cross-encoders often give inconsistent predictions when reversing the input sentence order.",
"Based on these factors, bi-encoders are often preferred for the paraphrase identification task.",
"Though pre-trained models like BERT seem to encode certain structures in their contextualised representations, open questions remain about how to better utilise such information (Hewitt and Manning, 2019; Clark et al., 2019) and how useful the hidden structure is compared to externally provided sentence structures (Glava and Vuli c, 2021; Dai et al., 2021).",
"Recent improvements are also observed on various natural language understanding tasks by incorporating structural information into pre-trained models.",
"SentiBERT proposed by Yin et al. (2020) 5580 incorporates constituency parse tree into BERT for sentiment analysis.",
"Xu and Yang (2019) model each sentence as a directed dependency graph by using RGCN, and achieve improvements on pronoun resolution.",
"Zhang et al. (2020) propose a semantics-aware BERT (SemBERT) model by further encoding semantic labels with BERT using a GRU.",
"RGCNs have also been used by Wu et al. (2021) to introduce semantic information into RoBERTa, and achieved consistent improvements when fine-tuned on problem-specific datasets.",
"Peng et al. (2021) propose a SBERT-RGCN model where structural information is explicitly encoded into SBERT in a similar way, achieving improvements on unsupervised sentence similarity comparison tasks.",
"Similar efforts can be seen where researchers try to provide syntax information via self-attention mechanism (Bai et al., 2021; Li et al., 2020).",
"Self-Explain model proposed by Sun et al. (2020) focuses on continuous text spans.",
"It generates sentence embeddings by taking the weighted sum over all possible continuous text spans rather than individual tokens in the sentence.",
"Though, Self-Explain achieves improvements over SentiBERT and SemBERT on sentiment analysis and language inference tasks, the continuous span strategy only captures linear structure and not differences in linguistic structure.",
"In this paper, we draw inspiration from it, designing a similar span-based component to incorporate predicate-argument spans.",
"Our proposed model adopts the same conjoined twin architecture as SBERT and turns focus to the predicate-argument structure of the given sentence.",
"As depicted in Figure 1, the model consists of different components: BERT: Each sentence is first fed into the pretrained BERT-base model to produce both a sentence representation, by applying mean-pooling over all token representations from the last hidden layer, and an original contextualised sequence-length token representation, which is used to derive predicate-argument span representations.",
"Predicate Argument Spans (PAS): We use Al-lenNLP (Gardner et al., 2018) with its BERT-based semantic role labelling (SRL) tagger to obtain predicates and relevant arguments for all input sentences.",
"We group the predicate and its arguments together to generate predicate-argument spans.",
"The initial position in the sentence determines their position in the span.",
"An example of such spans is shown below: He slices tomatoes in the kitchen From this sentence, the predicate is the verb slices , and the three arguments are ( he , tomatoes and in the kitchen ), involving the relations ( ARG0 , ARG1 and ARGM-LOC ), respectively.",
"In this way, we form three predicate-argument spans and split them into individual words: ( He , slices ), ( slices , tomatoes ), ( slices , in , the , kitchen ).",
"One sentence is likely to have multiple predicates, by adopting this strategy, we are able to obtain all potential predicate-argument spans in the given sentence.",
"We further utilise these extracted spans to form a span-based sentence representation.",
"If no predicate-argument structure can be found in the sentence, we directly use the representation after mean-pooling over all tokens as its sentence representation.",
"Aggregation: After obtaining all predicate-argument spans, we derive corresponding span representations by looking at BERT's token representations.",
"In BERT/RoBERTa, tokenization yields sub-tokens, whereas in the created spans, we have an entire word token.",
"To properly align them, we use the same tokenizer to break the original word into sub-tokens and represent it as a sequence of sub-tokens in the span if a sub-token exists.",
"Given a predicate-argument span sequence s = { s 1 , s 2 , ..., s N } in the sentence, where N denotes the number of spans and every span s i consists of tokens { x 1 , ..., x l } that make up the span.",
"For each span s i , we obtain its dense vector representation h i by taking mean-pooling over all tokens in it: h i = MeanP ooling ( x 1 , .., x l ) (1) Therefore, the whole representation for span sequence s is represented as h = { h 1 , h 2 , ..., h N } , where h i RD .",
"Then, we aggregate information from all spans using a simple self-attentive mechanism.",
"Following Sun et al. (2020), this is achieved by first assigning weights i to each span h i and combining these representations using weighted sum: o i = W h i + b i = exp ( o i ) N (cid:80) j =1 exp ( o j ) (2) 5581 Figure 1: The proposed model in twin structure.",
"where W R 1 D and b are learnable parameters.",
"The span-based sentence representation h from the aggregation component is the weighted average of all predicate-argument span representations: h = N (cid:88) i =1 i h i (3) The weights are learned during training.",
"This gives the model flexibility to decide the best combination method on its own.",
"The combination of self-attentive mechanism and predicate-argument spans allow us to construct structure-aware sentence embeddings without introducing a large number of parameters.",
"Connect BERT and Aggregation: The final sentence representation is the concatenation of both BERT mean-pooling based sentence representation and the span-based sentence representation.",
"Sentence embeddings of the given sentence-pair are then combined using vector operations before passing to the final classifier for training as shown in Figure 1.",
"To combine the embeddings, we use the concatenation of the element-wise multiplication u v and the absolute element-wise difference | u v | .",
"This is different to the typical concatenation strategy used with SBERT/SRoBERTa (Reimers and Gurevych, 2019) which introduces asymmetry into the task by using (u, v, |u-v|).",
"In initial experiments, we tested the prediction consistency of SBERT on paraphrase tasks and found that, across different datasets, between 2.78% and 9.16% of test predictions change when the sentence order is reversed.",
"Furthermore, here, we find that SBERT performs worse on paraphrase tasks with (u, v, |u-v|) compared to (|u-v|, u v ).",
"Results are given in Table 5 and discussed in Section 5.1.",
"Finally, we note that in this twin structure, all parameters are shared and are updated accordingly.",
"Cross-entropy loss is used for optimisation.",
"We compare our model with SBERT, SRoBERTa 1 and the SBERT-RGCN (Peng et al., 2021) which utilises RGCN to incorporate structures into SBERT with an introduction of 32 million extra parameters 2 .",
"The original sentence-pair aggregation strategy of these models is (u, v, |u-v|).",
"We modify this to (|u-v|, u v ) as discussed in Section 3, but we retain the original notation.",
"We adopt their structures and directly fine-tune the whole model on downstream tasks from the original BERT/RoBERTa checkpoints.",
"We considered two strategies to apply SBERT on classification inference.",
"One involved finding the optimal similarity threshold on the development set and then applying it on the test set, while the other involved directly using the trained classifier.",
"In this paper, 1 https://github.com/UKPLab/sentence-transformers.",
"Due to limited computational resources, all pre-trained models are of base size.",
"2 SBERT-RGCN tried both dependency and semantic parse trees.",
"We evaluate our model on six binary paraphrase identification benchmarks.",
"The statistics of these datasets are listed in Table 1.",
"Below we give some basic descriptions: Microsoft Research Paraphrase Corpus (MSRP) : A corpus of sentence pairs obtained by clustering news articles with an SVM classifier and human annotations (Dolan and Brockett, 2005).",
"It has 4,076 train data and 1,725 test data.",
"In this paper, we split 10% of training data as the validation set according to GLUE (Wang et al., 2019) standardised splits.",
"TwitterURL : To better study the realistic language usage, Lan et al. (2017) proposed the TwitterURL corpus where sentence pairs in the dataset are collected from tweets that share the same URL of news articles.",
"PIT2015 : The corpus is derived from Twit-ter's trending topic data, containing 18,763 sentence pairs on more than 400 distinct top-ics (Xu et al., 2015).",
"Given we are dealing with binary classification, we discard debatable sentence pairs according to its guideline and obtain 16,510 sentence pairs in total.",
"This dataset contains paraphrase pairs that are lexically diverse and non-paraphrase pairs that are lexically similar, but semantically dissimilar.",
"To capture these properties, models are assumed to be structure-aware.",
"Quora Question Pairs (QQP) : The Quora Question Pairs dataset is a collection of potential duplicate question pairs from the QA website Quora.com (Iyer et al., 2017).",
"In this paper, we adopt the same split strategy as in Wang et al. (2017).",
"PAWS_QQP : QQP is criticised for lacking negative examples with high lexical overlapping.",
"Models trained on QQP tend to mark any sentence pairs with a high word overlap as paraphrases despite clear clashes in meaning.",
"In light of these factors, Zhang et al. (2019b) proposed a new paraphrase identification dataset which has extremely high lexical overlap by applying word scrambling and back translation to sentences in QQP.",
"PAWS_Wiki : Similar to PAWS_QQP, Zhang et al. (2019b) applied the same technique on sentences obtained from Wikipedia articles to construct sentence pairs.",
"Both PAWS datasets aim to measure sensitivity of models on word order and sentence structure.",
"Due to the lack of development set for PAWS_QQP, we use PAWS_Wiki's development set for early stopping since they are constructed in the same way.",
"It is worth noting that both PIT2015 and PAWS_QQP datasets have relatively small test sets compared to others.",
"Following the SBERT training protocol, we train all models with a batch-size of 16.",
"We tune the learning rate in the range of (1e-5, 2e-5, 5e-5) with Adam optimizer and a linear learning rate warmup over 10% of the training data.",
"All models are trained for four epochs and use the development set for early stopping with a patience of 5. The evaluation step depends on actual tasks but roughly we evaluate them on the development set twice each epoch.",
"The maximum sequence length is set to be 128.",
"All experiments are conducted on NVIDIA Titan V GPUs.",
"The main experiment results are summarised in Table 2. We report the averaged F1 score of positive class with standard error.",
"In the table, we see that the proposed model consistently outperforms its SBERT and SRoBERTa versions on 5 paraphrase identification tasks and show competitive, but not statistically significantly different results on QQP.",
"As also revealed by Zhang et al. (2019b), negative examples in QQP often have low lexical overlap, and models trained on it tend to mark any sentence pairs with high word overlap as paraphrases.",
"We 5583 QQP TwitterURL MSRP PAWS_Wiki PAWS_QQP PIT2015 SBERT 90.780.09 70.850.28 81.670.46 81.570.53 66.010.45 52.031.44 SBERT-RGCN 90.410.09 70.400.22 81.700.17 81.140.81 66.220.75 59.110.93 PAS+SBERT 90.740.06 72.120.26 83.420.23 82.600.18 68.850.73 59.191.85 SRoBERTa 90.790.09 70.690.23 81.690.53 81.420.93 67.350.97 52.672.75 PAS+SRoBERTa 90.760.03 72.040.23 83.220.46 82.870.35 69.680.72 59.502.74 Table 2: Results on six paraphrase identification tasks, we calculate the F1 score of the positive class given most of them are imbalanced datasets.",
"reason that the QQP task is relatively easy and does not require much structural information to achieve high scores.",
"For tasks like PAWS_QQP and PIT2015 where structures are more important, the performance gap is more apparent.",
"Furthermore, despite bringing in more than 30 million parameters and explicitly encoding sentence structures with a complex model, SBERT-RGCN does not significantly outperform SBERT on most of these tasks (excluding PIT2015) and underperforms our proposed model.",
"In summary, the proposed model shows improved performances on five out of six paraphrase tasks, demonstrating the advantages of bringing in the predicate-argument structure.",
"Moreover, when we combine PAS with SRoBERTa, we get similar performance gains, proving the generalisation ability of our component.",
"Similarly in Reimers and Gurevych (2019), we only observe minor differences by replacing SBERT with SRoBERTa.",
"The number of parameters for different approaches are shown in Table 3. We note that compared to SBERT, our proposed model introduces 3,840 additional parameters, and if we only consider the span-based component, only 768 additional parameters are introduced.",
"In comparison, SBERT-RGCN brings in more than 32 million parameters.",
"In order to better understand how the performance gain is achieved, we have carried out several experiments to investigate different aspects of the proposed model.",
"The following experiments are conducted only with SBERT, since we would expect similar results with SRoBERTa.",
"Our proposed model is made of different components and so it is important to dissect the impact of each component so as to explain the improved performance.",
"Given that the final sentence representation is the concatenation of both mean-pooling based BERT representation and the weighted sum of span representations, we first assess their performances individually on six datasets.",
"Furthermore, it is necessary to assess the impact of adopting the weighted sum strategy when we derive span-based sentence representations.",
"We experimented with simple averaging over all spans and compared it with the weighted sum where the model learns to combine different spans.",
"The ablation experiment results are shown in Table 4. The SBERT-only component appears to perform the poorest, and the complete model achieves the highest performance on five out of six tasks.",
"By only using the span-based sentence representation, we are able to achieve significant improvements over SBERT on most of these tasks.",
"The improvements are more substantial when concatenating with SBERT sentence representations.",
"We observe considerable performance decreases on most tasks when switching from weighted sum to simple averaging, which further verifies the benefits of adopting learnable weights.",
"The original asymmetric sentence aggregation strategy (u, v, |u-v|) of SBERT assumes an ordering of the sentences by concatenating two individual 5584 QQP TwitterURL MSRP PAWS_Wiki PAWS_QQP PIT2015 PAS+SBERT 90.740.06 72.120.26 83.420.23 82.600.18 68.850.73 59.191.85 SBERT-only 90.780.09 70.850.28 81.670.46 81.570.53 66.010.45 52.031.44 PAS only 90.700.08 71.640.14 82.910.12 82.260.34 67.380.22 54.951.45 PAS only (simple average) 90.110.13 71.090.30 82.130.14 81.850.26 66.550.41 51.821.31 Table 4: Experimental results for ablation study.",
"sentence embeddings.",
"u v has been widely used elsewhere (Conneau et al., 2017; Cer et al., 2018) and we found that concatenating this with |u-v| gave the best performance on all tasks.",
"The results are summarised in Table 5. Therefore, we use (|u-v|, u v ) as our concatenation method for all of our other experiments.",
"The impact of incorporating predicate-argument spans into SBERT in terms of the performance on various paraphrase identification tasks has been investigated in the above experiments.",
"We now address the question of whether it is the use of specifically predicate-argument based spans that is critical, or whether this is simply a result of the fact that we are benefiting from the use of representations based on spans rather than all tokens.",
"To verify this, we further conduct experiments with different span strategies.",
"We pick three paraphrase identification datasets for this purpose (MSRP, PAWS_QQP and PIT2015) since performance gaps between PAS+SBERT and SBERT are more apparent in previous experiments.",
"Here we experiment with two other span strategies.",
"The first, inspired by the Self-Explain model (Sun et al., 2020), is the continuous random span, where instead of following the predicate-argument structure, we randomly sample continuous word sequences from the sentence to build a span.",
"The length of the sampled spans is arbitrary.",
"To make a fair comparison, the number of sampled spans is Task Span Type Span only Self-Explain* SBERT MSRP PAS 82.910.12 81.230.27 81.670.46 ContinuousRandomSpan 81.400.43 Random Span 81.860.47 PAWS_QQP PAS 67.380.22 66.880.46 66.010.45 ContinuousRandomSpan 65.450.44 Random Span 65.750.74 PIT2015 PAS 54.951.45 47.601.01 52.031.44 ContinuousRandomSpan 51.621.92 Random Span 50.852.11 Table 6: Evaluation for different span strategies using our span-only component on three datasets.",
"the same as that of the predicate-argument spans in the sentence.",
"The other one is random span, where we do not necessarily sample continuous words, but allow word leaps from one to another.",
"In this strategy, we have the opportunity to get both continuous and discontinuous word sequences to form spans, which better matches the scenario of PAS.",
"The only difference between these two strategies and PAS is the words in the span.",
"We also experiment with a bi-encoder approach more directly based on the Self-Explain cross-encoder model (Sun et al., 2020).",
"This model extracts all possible continuous text spans and obtains span representations by taking the first and last token in the span, passing them through a complex mapping function.",
"Unlike our PAS model, this 5585 10 30 60 100 percentage of training data (%) 62 64 66 68 70 72 SBERTPAS+SBERT",
"compared to SBERT.",
"Table 6 shows the results.",
"In order to focus on the impact of different span strategies, we only use the PAS component and do not concatenate it with SBERT sentence representations in this experiment.",
"As shown in the table, the PAS-based model outperforms the Self-Explain inspired bi-encoder model and achieves the best performance among all other span-based models.",
"The continuous random span and the random span model have comparable performances with SBERT.",
"This is expected because they do not introduce linguistically-meaningful structures and the impact of contextu-alisation makes them similar to SBERT despite the absence of some tokens.",
"Despite introducing 2.36 million more parameters, the Self-Explain inspired bi-encoder model does not show consistent improvements over SBERT on these datasets, which further suggests the importance of the predicate-argument structure in this paraphrase identification task.",
"In order to examine the stability of our model and the impact of the predicate-argument structure when different sizes of training data are available, we conduct experiments with different training data scales.",
"We randomly sample from 10% to 100% data (10%, 30%, 60%, 100%) from the training set as training data.",
"We show the results in Figure 2. In spite of limited increased parameters, the proposed model appears to yield consistent improvements across different training scales.",
"We also note that, whilst our proposed model performs comparably to SBERT on QQP when trained with the complete data-set, we can see that when only a small proportion of training data (e.g. 10%, 30%) is available, our model demonstrates improvements over SBERT.",
"Thus the introduction of predicate-argument structures may be more beneficial with limited annotated training data.",
"In this work, we propose a method which effectively introduces sentence structure to a sentence embedding via the aggregation of predicate-argument spans (PAS).",
"Experiments with SBERT and SRoBERTa show that such method brings improvements on six paraphrase identification tasks.",
"Compared to models based on RGCNs, our method obtains more consistent benefits with minimal increased cost in terms of numbers of parameters.",
"Upon closer investigation, we show that the PAS component and its learnable weights play a substantial impact in the performance gain.",
"This PAS component, as demonstrated with SRoBERTa, can be easily extended to other models that require the generation of sentence embeddings.",
"Our future work will include enhancing the structural difference between sentences by taking use of the argument tag information.",
"We thank all anonymous reviewers for their insightful comments, and NVIDIA for the donation of the GPU that supported our work.",
"Also, we would like to thank Bowen Wang for helpful discussions and proofreading."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"method",
"other",
"other"
] |
[
"In this paper, we aim to explore an uncharted territory, which is Chinese multimodal named entity recognition (NER) with both textual and acoustic contents.",
"To achieve this, we construct a large-scale human-annotated Chinese multimodal NER dataset, named CNERTA .",
"Our corpus totally contains 42,987 annotated sentences accompanying by 71 hours of speech data.",
"Based on this dataset, we propose a family of strong and representative baseline models, which can leverage textual features or multimodal features.",
"Upon these baselines, to capture the natural monotonic alignment between the textual modality and the acoustic modality, we further propose a simple multimodal multitask model by introducing a speech-to-text alignment auxiliary task.",
"Through extensive experiments, we observe that: (1) Progressive performance boosts as we move from unimodal to multimodal, verifying the necessity of integrating speech clues into Chinese NER.",
"(2) Our proposed model yields state-of-the-art (SoTA) results on CNERTA , demonstrating its effectiveness.",
"For further research, the annotated dataset is publicly available at http://github.com/DianboWork/ CNERTA .",
"Speech is a part of thought. Oliver Sacks, Seeing Voices",
"As a fundamental subtask of information extraction, named entity recognition (NER) aims to locate and classify named entities mentioned in unstructured texts into predefined semantic categories, such as person names, locations and organizations.",
"NER plays a crucial role in many natural language processing (NLP) tasks, including relation extraction (Zelenko et al., 2003), question answering (Molla et al., 2006) and summarization (Aramaki et al., 2009).",
"Most of the research on NER, such as Lam-ple et al. (2016); Ma and Hovy (2016); Chiu and Nichols (2016), only relies on the textual modality to infer tags.",
"However, when texts are noisy or short, and it is not sufficient to locate and classify named entities accurately only based on textual information (Baldwin et al., 2015; Lu et al., 2018).",
"One promising solution is to introduce other modalities as the supplement of the textual modality.",
"So far, some studies on multimodal NER, such as Moon et al. (2018); Zhang et al. (2018); Lu et al. (2018); Arshad et al. (2019); Asgari-Chenaghlu et al. (2020); Yu et al. (2020); Chen et al. (2020); Sun et al. (2020), have attempted to couple the textual modality with the visual modality and witnessed a stable improvement.",
"In this work, we also focus on multimodal NER.",
"But differently from previous studies, we pay special attention to Chinese multimodal NER with both textual and acoustic contents.",
"The motivation comes from two aspects: First, despite much recent success in multimodal NER, current studies on this topic are limited in English, and totally skirt other languages.",
"Meanwhile, previous work on Chinese NER, such as Xu et al. (2013); Peng and Dredze (2016a); Zhang and Yang (2018); Cao et al. (2018); Sui et al. (2019); Gui et al. (2019); Ma et al. (2020); Li et al. (2020), totally ignores valuable multimodal information.",
"With around 1.3 billion native speakers and the wide spread of short-form video apps in China, it is necessary and urgent to carry out research on Chinese multimodal NER.",
"Second, unlike the static visual modality, the time-varying acoustic modality plays a unique role in Chinese NER, especially in providing precise word segmentation information.",
"In detail, different from English, Chinese is an ideographic language featured by no word delimiter between words in written.",
"This language characteristic is one of the major roadblocks in Chinese NER, since named entity boundaries are usually word boundaries (Zhang and Yang, 2018).",
"Fortunately, cues contained in the fluent acoustic modality, especially pauses between adjacent words, are able to aid the NER model in discovering word boundaries.",
"A classic example shown in Figure 1 can perfectly illustrate this point.",
"In this example, the sentence with ambiguous word segmentation would be disambiguated with the aid of the acoustic modality, which would absolutely assist the model to infer correct NER tags.",
"In this work, we make the following efforts to advance multimodal NER: First, we construct a large-scale human-annotated C hinese NER dataset with T extual and A coustic contents, named CNERTA .",
"Specifically, we annotate all occurrences of 3 entity types (per-son name, location and organization) in 42,987 sentences originating from the transcripts of Aishell-1 (Bu et al., 2017), a corpus that has been widely employed in Mandarin speech recognition research in recent years (Shan et al., 2019; Li et al., 2019; Tian et al., 2020).",
"In particular, unlike previous multimodal NER datasets (Moon et al., 2018; Zhang et al., 2018; Lu et al., 2018) are all flatly annotated, not only the topmost entities but also nested entities are annotated in CNERTA .",
"Second, based on CNERTA , we establish a family of strong and representative baselines.",
"In detail, we first investigate the performance of several classic text-only models on our dataset, including BiLSTM-CRF (Lample et al., 2016) and BERT-CRF (Devlin et al., 2019).",
"Then, since introducing a lexicon has been proven as an effective way to incorporate word information in Chinese NER (Zhang and Yang, 2018), we implement several lexicon-enhanced models, such as Lattice-LSTM (Zhang and Yang, 2018) and ZEN (Diao et al., 2020), to explore whether the acoustic modality can provide word information beyond the lexicon.",
"Finally, to verify the effectiveness of introducing the acoustic modality, we test some widely used multimodal models, such as CMA (Tsai et al., 2019) and MMI (Yu et al., 2020), on our dataset.",
"Third, upon these strong baselines, we further propose a simple M ultiM odal M ultiT ask model (short for M3T ) to make better use of the pause information in the acoustic modality.",
"Specifically, different from coupling the visual modality with the textual modality, there is a monotonic alignment between the acoustic modality and the textual modality.",
"Armed with such an alignment, the position of each Chinese character in the continuous speech would be determined, which would make it easy to discover pauses between adjacent words.",
"Therefore, to automatically estimate this desired alignment, we introduce a speech-to-text alignment auxiliary task and propose a hybrid CTC/Tagging loss.",
"In the hybrid loss, a masked CTC loss (Graves et al., 2006) is designed for enforcing a monotonic alignment between speech and text sequences.",
"The primary contributions of this work can be summarized as follows: We construct CNERTA , the first human-annotated Chinese multimodal NER dataset, where each annotated sentence is paired with its corresponding speech data.",
"To our best knowledge, this dataset is not only the largest multimodal NER dataset, but also the largest Chinese nested NER dataset.",
"We establish a family of baselines to leverage textual features or multimodal features.",
"Through various experiments, we observe consistent performance boosts originating from acoustic features, which verifies the significant merits of integrating acoustic features for Chinese NER.",
"We further propose a multimodal multitask method by introducing a speech-to-text alignment auxiliary task.",
"By jointly solving the tagging task and the alignment task, the proposed method can yield SoTA results on CNERTA .",
"Mutlimodal NER: As multimedia technology evolves, processing multimodal data is becoming a burning issue.",
"As a basic NLP tool, multimodal NER attracts increasing attention in recent years.",
"Most of studies on multimodal NER focus on leveraging the associate images to better identify the named entities contained in the text.",
"Specifically, Moon et al. (2018) propose a multimodal NER network with modality attention to fuse textual and visual information.",
"To model inter-modal interactions and filter out the noise in the visual context, Zhang et al. (2018) propose an adaptive co-attention network and a gated visual attention mechanism for multimodal NER.",
"As transformer-based models (Vaswani et al., 2017; Devlin et al., 2019) become the mainstream method in NLP, researchers turn to study how to fuse visual clues in transformers structure.",
"Chen et al. (2020) use captions to represent images as text and adopt transformer-based sequence labeling models to connect multimodal information.",
"Yu et al. (2020) propose a Multimodal Transformer model, which empowers transformer with a multimodal interaction module to capture the inter-modality dynamics between words and images.",
"But different from them, we aim to explore an unexplored territory in this work, which is Chinese multimodal NER with both speech and textual contents.",
"Chinese NER: Compared with English NER, Chinese NER is more complicated since the written text in Chinese is not naturally segmented.",
"Therefore, how to incorporate word information is the key challenge in Chinese NER.",
"There are three main ways to fuse word information in Chinese NER.",
"The first one is the pipeline method.",
"In the pipeline method, Chinese word segmentation (CWS) is first applied and then a word-based NER model is used.",
"The second one is to learn CWS and NER tasks jointly (Xu et al., 2013; Peng and Dredze, 2016b; Cao et al., 2018; Wu et al., 2019).",
"In such a way, the word boundary information in the CWS task can be transferred to the NER model.",
"The third one is to resort to an automatically constructed lexicon (Zhang and Yang, 2018; Ding et al., 2019; Liu et al., 2019a; Sui et al., 2019; Gui et al., 2019; Li et al., 2020; Ma et al., 2020; Xue et al., 2020).",
"Different from all previous studies, we focus on use speech clues to incorporate word information in Chinese NER.",
"In this work, we aim to explore Chinese NER with both speech and textual clues.",
"But we are not aware of any such existing corpus, hence we are motivated to collect one.",
"In this section, we will discuss the data acquisition process, subsequently present statistics of the dataset and compare the annotated dataset with other widely-used NER datasets.",
"The main challenge in data acquisition is to find a large-scale dataset, which includes texts and the corresponding speech data.",
"One possible way is to attach speech data to current existing Chinese NER datasets.",
"However, it is costly to gather hundreds of participants in the recording.",
"Therefore, we take a different way, manually annotating NER tags on a speech recognition dataset from scratch.",
"In detail, our annotated dataset is based on Aishell-1 (Bu et al., 2017) dataset, which is a large-scale Mandarin automatic speech recognition dataset.",
"In this dataset, text transcriptions are chosen from five domains: Finance, Science and Technology, Sport, Entertainments and News.",
"There are 400 participants in the recording, and the gender of participants is balanced with 47% male and 53% female.",
"Speech utterances are recorded via three categories of devices in parallel, which are a high fidelity microphone working at 44.1 kHz, 16-bit, Android phones working at 16 kHz, 16-bit, and Apple iPhones working at 16 kHz, 16-bit.",
"To ensure the quality of annotation, we design two rounds in the annotation procedure.",
"In the first Dataset # Train # Dev # Test # Total Language Structure Modality MSRA 46,364 -4,365 50,729 Chinese Flat Text OntoNotes 15,724 4301 4,346 24,371 Chinese Flat Text Weibo NER 1,350 271 270 1,891 Chinese Flat Text Resume 3,821 463 477 4,761 Chinese Flat Text GENIA 15,022 1,669 1,854 18,545 English Nested Text JNLPBA 20,546 -4,260 24,806 English Nested Text ACE-2004 6,198 742 809 7,749 English Nested Text ACE-2005 7,285 968 1,058 9,311 English Nested Text Twitter-2015 4,000 1,000 3,257 8,257 English Flat Text + Image Twitter-2017 3,373 723 723 4,819 English Flat Text + Image CNERTA 34,102 4,440 4,445 42,987 Chinese Nested Text + Speech Table 2: A comparison between CNERTA and other existing widely-used NER datasets.",
"round, we use Brat (Stenetorp et al., 2012) as the annotation tool and ask 3 internal annotators (in-cluding the first author of this paper) to perform annotation, who are very familiar with this task.",
"They independently identify and classify named entities in the transcriptions with more than 17 characters.",
"Cohen's kappa coefficient (Cohen, 1960) is used to measure the inter-annotator agreements.",
"After the first round, = 0.965, which shows the quality of CNERTA is satisfactory.",
"But there are still some sentences for which annotators give out different annotations.",
"For those sentences, the annotators check the disagreed annotations carefully and discuss to reach the agreements for all cases.",
"After we finish the annotation process, we split the dataset into three parts: training, development, and test set.",
"Table 1 shows the high level statistics of data splits for CNERTA .",
"We compare CNERTA with several widely used NER datasets in Table 2.",
"Specifically, we first compare our corpus with some Chinese NER datasets, such as MSRA (Levow, 2006), OntoNotes (Weischedel et al., 2011), Weibo NER (Peng and Dredze, 2016a) and Resume (Zhang and Yang, 2018).",
"Then, we compare our corpus with several widely used nested NER datasets, like GENIA (Kim et al., 2003), JNLPBA (Collier and Kim, 2004), ACE-2004 (Doddington et al., 2004) and ACE-2005 (Walker et al., 2004).",
"Finally, multimodal NER datasets, including Twitter-2015 (Zhang et al., 2018) and Twitter-2017 (Lu et al., 2018), are compared with our corpus.",
"From Table 2, we observe that our corpus has unique value compared with the existing datasets.",
"The value is reflected in the following aspects: (1) CNERTA is a large-scale dataset; (2) CNERTA is the first Chinese multimodal dataset; (3) Not only the topmost entities but also nested entities are annotated; (4) Among these datasets, the acoustic modality is only introduced in CNERTA .",
"Given a text X = x 1 , x 2 , ..., x n and its corresponding speech S = s 1 , s 2 , ..., s t , where x i denotes the i -th Chinese character and s j denotes the j -th waveform frame, the goal of the task is to leverage textual and speech clues to identify and classify all named entities contained in the text.",
"Unlike flat NER, named entities may overlap and also be labeled with more than one label in nested NER.",
"To solve nested NER, we follow Strakova et al. (2019) to encode the nested entity structure into a CoNLL-like, per-character BIO encoding (Ramshaw and Marcus, 1995).",
"There are two rules to guide the linearization: (1) entity mentions starting earlier have priority over entities starting later, and (2) for mentions with the same beginning, longer entity mentions have priority over shorter ones.",
"A multilabel for a given Chinese character is a concatenation of all intersecting entity mentions, from the highest priority to the lowest.",
"For more details, we refer readers to Strakova et al. (2019).",
"The acoustic encoder is used to map raw speech signals into continuous space.",
"There are three parts in the proposed acoustic encoder: a speech processing layer, a convolution front end and a transformer-based encoder.",
"Specifically, in the speech processing layer, a speech signal first goes through a pre-emphasis filter; then gets sliced into frames and a window function is applied to each frame; afterwards, a Short-Time Fourier transform (Kwok and Jones, 2000) is employed on each frame and the power spectrum is calculated; and subsequently, the filter banks (Ravindran et al., 2003) are computed.",
"Then, we use a convolution front end to down-sample the long acoustic features.",
"In the convolution front end, following Dong et al. (2018) and Tian et al. (2020), two 3×3 CNN layers with stride 2 are stacked along both the time and frequency dimensions.",
"Afterwards, in order to enable the acoustic encoder to attend by relative positions, the positional encoding is added to the output of the convolution front end.",
"Finally, to effectively capture long-term dependencies, down-sampled acoustic features flow through the transformer-based encoder (Vaswani et al., 2017).",
"The transformer-based encoder is a stack of 6 identical layers, each of which is composed of a self-attention sub-layer and a feed-forward network.",
"Based on the annotated dataset, a family of strong and representative baselines is established, including (1) text-only models presented in Section 5.1, (2) lexicon-enhanced models shown in Section 5.2 and (3) multimodal models introduced in Section 5.3.",
"Open-Source NLP Toolkit: Many open-source NLP toolkits, such as spaCy (Honnibal et al., 2020) and Stanza (Qi et al., 2020), support Chinese NER.",
"In spaCy, a multitask CNN is employed.",
"In Stanza, a contextualized string representation based tagger from Akbik et al. (2018) is adopted.",
"In both spaCy and Stanza, the tagger is trained on OntoNote (Weischedel et al., 2011).",
"To map the output of the taggers to CNERTA's label space, expert-designed rules are used, such as PERSON → PER.",
"Since these toolkits are only designed for flat structure, we do not evaluate these toolkits in nested settings.",
"BiLSTM-CRF: Featured by a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) as the textual encoder and conditional random fields (CRF) (Lafferty et al., 2001) as the decoder, the widely used BiLSTM-CRF (Lample et al., 2016) is adopted as an important baseline.",
"PLM-CRF: Instead of training a model from scratch, we also adopt the framework of fine-tuning a pretrained language model (PLM) on a downstream task (Radford et al., 2018).",
"In this framework, we adopt BERT (Devlin et al., 2019) as the textual encoder and use CRF as the decoder.",
"In addition to initializing the textual encoder with the original pretrained BERT model, a SoTA Chinese pretrained language model, called MacBERT (Cui et al., 2020), is used.",
"Compared with BERT, MacBERT is built upon RoBERTa (Liu et al., 2019b) and the original MLM task in BERT is replaced with the MLM as correction task.",
"For more details, we refer readers to Cui et al. (2020).",
"A drawback of the text-only methods mentioned above is that explicit word and word sequence information is not fully exploited, which can be potentially useful.",
"With this consideration, we also adopt lexicon-enhanced models to incorporate word lexicons.",
"(1) Lattice-LSTM (Zhang and Yang, 2018) is a classic method that can encode a sequence of input characters as well as all potential words that match a lexicon.",
"(2) ZEN (Diao et al., 2020) is a pretrained Chinese text encoder enhanced by an n-gram lexicon.",
"In ZEN, n-gram contexts are extracted, encoded and integrated with the character encoder.",
"For more details about Lattice-LSTM and ZEN, we refer readers to Zhang and Yang (2018) and Diao et al. (2020).",
"To leverage the acoustic modality, several multimodal models are introduced.",
"In these models, fusion modules are built on top of the acoustic encoder and the textual encoder, and are designed to capture the interaction between the textual hidden representations $X = [x_1, x_2, \ldots, x_n]$, $x_i \in \mathbb{R}^d$, and the acoustic representations $S = [s_1, s_2, \ldots, s_{t'}]$, $s_j \in \mathbb{R}^d$.",
"We present two representative fusion modules, which are Cross-Modal Attention (CMA) module (Tsai et al., 2019) and Multimodal Interaction (MMI) module (Yu et al., 2020).",
"Cross-Modal Attention Module (CMA): Given the textual hidden representations $X \in \mathbb{R}^{d \times n}$ and the acoustic representations $S \in \mathbb{R}^{d \times t'}$, we first employ an $m$-head cross-modal attention mechanism (Tsai et al., 2019), treating $X$ as queries and $S$ as keys and values: $\mathrm{CA}_i(X, S) = \mathrm{softmax}\big([W_{q_i} X]^{\top} [W_{k_i} S] / \sqrt{d/m}\big) [W_{v_i} S]^{\top}$ and $\mathrm{MH\text{-}CA}(X, S) = W' [\mathrm{CA}_1(X, S), \ldots, \mathrm{CA}_m(X, S)]$, where $\mathrm{CA}_i$ refers to the $i$-th head of cross-modal attention, and $W_{q_i}, W_{k_i}, W_{v_i} \in \mathbb{R}^{d/m \times d}$ and $W' \in \mathbb{R}^{d \times d}$ denote the weight matrices for the query, key, value and multi-head attention, respectively.",
"Then, we stack the following sub-layers on top: $F = \mathrm{LN}(X + \mathrm{MH\text{-}CA}(X, S))$ and $\hat{F} = \mathrm{LN}(F + \mathrm{FFN}(F))$ (1), where LN means layer normalization (Ba et al., 2016) and FFN means a fully connected feed-forward network, which consists of two linear transformations with a ReLU activation (Nair and Hinton, 2010).",
"Finally, the new textual representations $\hat{F} \in \mathbb{R}^{d \times n}$, which are enhanced by acoustic features, are fed into the CRF decoder to infer NER tags.",
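A minimal PyTorch sketch of this fusion step is given below; it reuses torch's built-in multi-head attention rather than the per-head matrices of the equations above, and the hidden size and head count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Sketch of a CMA-style fusion block: text queries attend over acoustic
    keys/values, followed by residual connections, LayerNorm, and an FFN."""
    def __init__(self, d=768, heads=8):
        super().__init__()
        self.mha = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, text, speech):
        # text: (B, n, d) textual hidden states; speech: (B, t', d) acoustic states
        attn, _ = self.mha(text, speech, speech)   # MH-CA(X, S)
        f = self.ln1(text + attn)                  # F = LN(X + MH-CA(X, S))
        return self.ln2(f + self.ffn(f))           # LN(F + FFN(F)): enhanced text
```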
"Multimodal Interaction Module (MMI): A stack of the cross-modal attention layers mentioned above makes up the multimodal interaction module.",
"Since the architecture of MMI is complex and not the focus of this paper, we do not introduce it in the main text.",
"For more details about MMI, we refer readers to Yu et al. (2020).",
"Previous multimodal methods ignore a natural monotonic alignment between the acoustic modality and the textual modality.",
"To capture this alignment, we propose a multimodal multitask model, called M3T .",
"The framework of the proposed method is shown in Figure 2.",
"In the M3T model, we adopt the CMA module to fuse acoustic information into the textual representations.",
"Besides, a CTC project layer is built upon the acoustic encoder, and the loss function is a combination of masked CTC loss and CRF loss.",
"Specifically, through the CTC project layer, each acoustic representation $s_i \in \mathbb{R}^d$ is first mapped to the total set of model units (in this paper, the model unit is the Chinese character) and then passed through a logit function: $G = \mathrm{logit}(W_v^{\top} S)$ (2), where $W_v \in \mathbb{R}^{d \times |V|}$ and $|V|$ is the total number of Chinese characters.",
"[Figure 2: Overall architecture of the proposed multimodal multitask model.]",
"Unlike automatic speech recognition, only the characters in the given text need to be aligned rather than the entire model units.",
"Therefore, we keep unchanged only those rows whose corresponding characters are contained in the given text, and fill the other rows of $G \in \mathbb{R}^{|V| \times t'}$ with $-\infty$.",
"The masked tensor G is then fed into CTC loss.",
"Finally, to jointly solve the tagging task and the alignment task, a hybrid loss combining the masked CTC loss with the CRF loss is used: $\mathcal{L} = \mathcal{L}_{crf} + \lambda \mathcal{L}_{ctc}$ (3), where $\lambda$ is a hyperparameter.",
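A rough PyTorch sketch of this masked-CTC hybrid loss follows. The blank id, the value of lambda, and the assumption that each target row holds only the sentence's own character ids (no padding) are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, text_ids, text_lens, ac_lens, crf_loss, lam=0.1):
    """logits: (t', B, |V|) output of the CTC project layer;
    text_ids: (B, S) character ids of each given text (rows assumed unpadded);
    crf_loss: scalar CRF loss from the tagging branch."""
    mask = torch.full_like(logits, float('-inf'))
    mask[..., 0] = 0.0                       # keep the CTC blank symbol (id 0 assumed)
    for b in range(logits.size(1)):
        mask[:, b, text_ids[b]] = 0.0        # keep only the sentence's own characters
    log_probs = F.log_softmax(logits + mask, dim=-1)
    ctc = F.ctc_loss(log_probs, text_ids, ac_lens, text_lens, blank=0)
    return crf_loss + lam * ctc              # L = L_crf + lambda * L_ctc
```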
"In this section, we carry out various experiments to investigate the effectiveness of introducing the acoustic modality.",
"In addition, we empirically compare the proposed model and these baselines under different settings.",
"Following previous studies in NER (Zhang and Yang, 2018), standard precision (P), recall (R) and F1-score (F1) are used as evaluation metrics.",
"LSTM-Based Baselines: We use 50-dimensional character embeddings, which are pretrained on Chinese Giga-Word using word2vec (Mikolov et al., 2013).",
"The dimensionality of LSTM hidden states is set to 300 and the initial learning rate is set to 0.001.",
"We train the models using 100 epochs with a batch size of 16.",
"Lexicon: The lexicon used in Lattice-LSTM is the same as Zhang and Yang (2018) and the lexicon used in ZEN is the same as Diao et al. (2020).",
"Due to its low training and inference speed, we only employ Lattice-LSTM in unimodal settings.",
"Pretrained Language Model Fine-Tuning: We use the base models of BERT (Devlin et al., 2019), MacBERT (Cui et al., 2020) and ZEN (Diao et al., 2020).",
"The initial learning rate of the pretrained language models is set to 1e-5.",
"We fine-tune models using 10 epochs with a batch size of 16.",
"Table 3 shows the results of baselines and our proposed model on CNERTA .",
"From the table, we find: (1) Introducing the acoustic modality can significantly boost the performance of the character-based models, such as BiLSTM-CRF, BERT-CRF and MacBERT-CRF.",
"With the simple CMA module to introduce the acoustic modality, there is a more than 1.6% improvement in both flat NER and nested NER.",
"Furthermore, by using the M3T model to leverage the acoustic modality, a more than 3% improvement can be brought in all cases.",
"These experimental results demonstrate the effectiveness of introducing the acoustic modality in character-based NER models.",
"(2) Introducing the acoustic modality can improve the performance of lexicon-based models, such as ZEN-CRF.",
"By introducing the acoustic modality into ZEN-CRF with the CMA module, the performance in flat NER and nested NER can be improved by 1.38% and 1.73%, respectively.",
"Armed with the M3T model, the performance in flat NER and nested NER can be further improved by 2.93% and 3.19%.",
"[Table 4: Case studies (columns: Sentence, Gold, BERT-M3T, BERT-CRF) illustrating the effectiveness of introducing the acoustic modality; the translated example sentences are 'Maslakh, from Saudi Arabia, won the first place in the preliminary contest with 43.93 seconds', 'It has a lot to do with a bowl of beef noodles eaten at the Capital Airport', 'Inter Milan's official Japanese Twitter released photos of the players when they arrived', and 'Kabariro threw a good result of 70.65m in Bilbao'.]",
"Although not as significant as the improvement of the character-based models, these results still prove that the acoustic modality can provide lexicon-based models with some information that is not contained in the large-scale lexicon.",
"(3) Our proposed method (M3T) can achieve the SoTA results on CNERTA .",
"Compared with CMA (Tsai et al., 2019) and MMI (Yu et al., 2020), there is a significant improvement.",
"We conjecture that this is because the monotonic alignment between the acoustic modality and the textual modality is captured by the masked CTC loss, and, armed with this alignment, the precise word boundary information contained in speech is leveraged by the model.",
"As the NER models established here are not yet as accurate as one would hope, we perform some analyses of the errors that occur in the output of the NER models.",
"We divide the errors into type errors and boundary errors.",
"A type error is one where the boundary of the predicted entity is correct but the predicted type is wrong; all other errors are classified as boundary errors.",
"The statistics of boundary errors and type errors are shown in Table 5.",
"From the table, we find that: (1) Errors are mainly caused by mistakenly locating boundaries of entities.",
"Therefore, discovering entity boundaries is the main challenge in Chinese NER.",
"(2) Leveraging the acoustic modality can effectively reduce boundary errors.",
"In nested NER, the number of errors decreases from 906 to 848, entirely owing to the reduction of boundary errors, while the number of type errors increases, which may be due to overfitting or some random factors.",
"To visually show the effectiveness of introducing the acoustic modality, case studies comparing the output of BERT-CRF and BERT-M3T are presented in Table 4.",
"From the table, we can observe that, without the acoustic modality, BERT-CRF is prone to locating some ambiguous entities mistakenly, such as 'Saudi Arabia', 'Capital Airport', and 'Inter Milan'.",
"But armed with the acoustic modality, these entities are located with complete accuracy.",
"In the last case, BERT-M3T makes some mistakes.",
"We listen to the corresponding audio clip and find that there is a long pause between the two words concerned.",
"In this paper, we explore Chinese multimodal NER with both textual and acoustic contents.",
"To achieve this, we construct a large-scale manually annotated multimodal NER dataset named CNERTA .",
"Based on this dataset, we establish a family of baseline models.",
"Furthermore, we propose a simple multimodal multitask method by introducing a speech-to-text alignment auxiliary task.",
"Through extensive experiments, we prove that Chinese NER models can benefit from introducing the acoustic modality and our proposed model is effective.",
"In the future, we are interested in mining other information contained in speech, such as rhythm, emotion, pitch, accent and stress, to boost NER.",
"Meanwhile, we will also work on designing some speech-text pretraining tasks for building a large-scale pretrained model with multimodal capabilities.",
"We thank the anonymous reviewers for their insightful comments.",
"We also thank Zhixing Tian, Tao Wang and Ye Bai for helpful suggestions.",
"This work is supported by the National Key Research and Development Program of China (Grant No. 2020AAA0106400), the National Natural Science Foundation of China (Grant No. 61922085 and Grant No. 61976211) and Beijing Academy of Artificial Intelligence (Grant No. BAAI2019QN0301)."
] | [
"objective",
"result",
"abstain",
"objective",
"objective",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"result",
"method",
"objective",
"objective",
"method",
"method",
"other",
"other",
"other"
] |
[
"Named Entity Recognition (NER) is the task of identifying spans that represent entities in sentences.",
"Depending on whether the entity spans are nested or discontinuous, the NER task can be categorized into the flat NER, nested NER, and discontinuous NER subtasks.",
"These subtasks have been mainly solved by the token-level sequence labelling or span-level classification.",
"However, these solutions can hardly tackle the three kinds of NER subtasks concurrently.",
"To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework.",
"Based on our unified framework, we can leverage the pre-trained Seq2Seq model to solve all three kinds of NER subtasks without the special design of the tagging schema or ways to enumerate spans.",
"We exploit three types of entity representations to linearize entities into a sequence.",
"Our proposed framework is easy-to-implement and achieves state-of-the-art (SoTA) or near SoTA performance on eight English NER datasets, including two flat NER datasets, three nested NER datasets, and three discontinuous NER datasets 1 .",
"Named entity recognition (NER) has been a fundamental task of Natural Language Processing (NLP), and three kinds of NER subtasks have been recognized in previous work (Sang and Meulder, 2003; Pradhan et al., 2013a; Doddington et al., 2004; Kim et al., 2003; Karimi et al., 2015), including flat NER, nested NER, and discontinuous NER.",
"As shown in Figure 1, the nested NER contains overlapping entities, and the entity in the discontinuous NER may contain several nonadjacent spans.",
"The sequence labelling formulation, which assigns a tag to each token in the sentence, has been widely used in the flat NER field (McCallum and Li, 2003; Collobert et al., 2011; Huang et al., 2015; Chiu and Nichols, 2016; Lample et al., 2016; Strakova et al., 2019; Yan et al., 2019; Li et al., 2020a).",
"Inspired by sequence labelling's success in the flat NER subtask, Metke-Jimenez and Karimi (2016); Muis and Lu (2017) tried to formulate the nested and discontinuous NER into the sequence labelling problem.",
"For the nested and discontinuous NER subtasks, instead of assigning labels to each token directly, Xu et al. (2017); Wang and Lu (2019); Yu et al. (2020); Li et al. (2020b) tried to enumerate all possible spans and conduct the span-level classification.",
"Another way to efficiently represent spans is to use the hypergraph (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018; Muis and Lu, 2016).",
"Although the sequence labelling formulation has dramatically advanced the NER task, it has to design different tagging schemas to fit various NER subtasks.",
"One tagging schema can hardly fit all three NER subtasks (Ratinov and Roth, 2009; Metke-Jimenez and Karimi, 2016; Strakova et al., 2019; Dai et al., 2020).",
"Span-based models, meanwhile, need to enumerate all possible spans, whose number is quadratic in the length of the sentence and which are almost impossible to enumerate in the discontinuous NER scenario (Yu et al., 2020).",
"Therefore, span-based methods usually will set a maximum span length (Xu et al., 2017; Luan et al., 2019; Wang and Lu, 2018).",
"Although hypergraphs can efficiently represent all spans (Lu and Roth, 2015; Katiyar and Cardie, 2018; Muis and Lu, 2016), they suffer from the spurious structure problem and a structural ambiguity issue during inference, and their decoding is quite complicated (Muis and Lu, 2017).",
"Because the problems lie in different formulations, no publication has tested their model or framework in three NER subtasks simultaneously to the best of our knowledge.",
"In this paper, we propose using a novel and simple sequence-to-sequence (Seq2Seq) framework with the pointer mechanism (Vinyals et al., 2015) to generate the entity sequence directly.",
"On the source side, the model inputs the sentence, and on the target side, the model generates the entity pointer index sequence.",
"Since flat, continuous and discontinuous entities can all be represented as entity pointer index sequences, this formulation can tackle all the three kinds of NER subtasks in a unified way.",
"Besides, this formulation can even solve the crossing structure entity and the multi-type entity (see the parenthetical notes below).",
"By converting the NER task into a Seq2Seq generation task, we can smoothly use the Seq2Seq pre-training model BART (Lewis et al., 2020) to enhance our model.",
"To better utilize the pre-trained BART, we propose three kinds of entity representations to linearize entities into entity pointer index sequences.",
"Our contributions can be summarized as follows.",
"(Attempts made for discontinuous constituent parsing may tackle the three NER subtasks in one tagging schema (Vilares and Gómez-Rodríguez, 2020).)",
"(Crossing structure: namely, for a span ABCD, both ABC and BCD are entities.",
"Although this is rare, it exists (Dai et al., 2020).)",
"(Multi-type: an entity can have multiple entity types, as proteins can be annotated as drug/compound in the EPPI corpus (Alex et al., 2007).)",
"We propose a novel and simple generative solution to solve the flat NER, nested NER, and discontinuous NER subtasks in a unified framework, in which NER subtasks are formulated as an entity span sequence generation problem.",
"We incorporate the pre-trained Seq2Seq model BART into our framework and exploit three kinds of entity representations to linearize entities into sequences.",
"The results can shed some light on further exploration of BART into the entity sequence generation.",
"The proposed framework not only avoids the sophisticated design of tagging schema or span enumeration but also achieves SoTA or near SoTA performance on eight popular datasets, including two flat NER datasets, three nested NER datasets, and three discontinuous NER datasets.",
"The term Named Entity was coined in the Sixth Message Understanding Conference (MUC-6) (Grishman and Sundheim, 1996).",
"After that, the release of CoNLL-2003 NER dataset has greatly advanced the flat NER subtask (Sang and Meulder, 2003).",
"Kim et al. (2003) found that in the field of molecular biology domain, some entities could be nested.",
"Karimi et al. (2015) provided a corpus that contained medical forum posts on patient-reported Adverse Drug Events (ADEs), some entities recognized in this corpus may be discontinuous.",
"Despite the difference between the three kinds of NER subtasks, the methods adopted by previous publications can be roughly divided into three types.",
"Token-level classification: The first line of work views the NER task as a token-level classification task, which assigns to each token a tag that usually comes from the Cartesian product between entity labels and the tag scheme, such as BIO and BILOU (Ratinov and Roth, 2009; Collobert et al., 2011; Huang et al., 2015; Chiu and Nichols, 2016; Lample et al., 2016; Alex et al., 2007; Strakova et al., 2019; Metke-Jimenez and Karimi, 2016; Muis and Lu, 2017; Dai et al., 2020); then Conditional Random Fields (CRF) (Lafferty et al., 2001) or tag sequence generation methods can be used for decoding.",
"Though the works of Strakova et al. (2019); Wang et al. (2019); Zhang et al. (2018); Chen and Moschitti (2018) are much like our method, they all tried to predict a tagging sequence.",
"Therefore, they still need to design tagging schemas for different NER subtasks.",
"Span-level classification: When applying the sequence labelling method to the nested NER and discontinuous NER subtasks, the tagging becomes complex (Strakova et al., 2019; Metke-Jimenez and Karimi, 2016) or multi-level (Ju et al., 2018; Fisher and Vlachos, 2019; Shibuya and Hovy, 2020).",
"Therefore, the second line of work directly conducted the span-level classification.",
"The main difference between publications in this line of work is how to get the spans.",
"Finkel and Manning (2009) regarded the parsing nodes as a span.",
"Xu et al. (2017); Luan et al. (2019); Yamada et al. (2020); Li et al. (2020b); Yu et al. (2020); Wang et al. (2020a) tried to enumerate all spans.",
"Following Lu and Roth (2015), hypergraph methods which can effectively represent exponentially many possible nested mentions in a sentence have been extensively studied in the NER tasks (Katiyar and Cardie, 2018; Wang and Lu, 2018; Muis and Lu, 2016).",
"Combined token-level and span-level classification: To avoid enumerating all possible spans and to incorporate the entity boundary information into the model, Wang and Lu (2019); Zheng et al. (2019); Lin et al. (2019); Wang et al. (2020b); Luo and Zhao (2020) proposed combining token-level classification and span-level classification.",
"The Seq2Seq framework has been long studied and adopted in NLP (Sutskever et al., 2014; Cho et al., 2014; Luong et al., 2015; Vaswani et al., 2017; Vinyals et al., 2015).",
"Gillick et al. (2016) proposed a Seq2Seq model to predict the entity's start, span length and label for the NER task.",
"Recently, the amazing performance gain achieved by PTMs (pre-trained models) (Qiu et al., 2020; Peters et al., 2018; Devlin et al., 2019; Dai et al., 2021; Yan et al., 2020) has attracted several attempts to pretrain a Seq2Seq model (Song et al., 2019; Lewis et al., 2020; Raffel et al., 2020).",
"We mainly focus on the newly proposed BART (Lewis et al., 2020) model because it can achieve better performance than MASS (Song et al., 2019).",
"And the sentence-piece tokenization used in T5 (Raffel et al., 2020) will cause different tokenizations for the same token, making it hard to generate pointer indexes to conduct the entity extraction.",
"BART is formed by several transformer encoder and decoder layers, like the transformer model used in the machine translation (Vaswani et al., 2017).",
"BART's pre-training task is to recover corrupted text into the original text.",
"BART uses the encoder to input the corrupted sentence and the decoder to recover the original sentence.",
"BART has base and large versions.",
"The base version has 6 encoder layers and 6 decoder layers, while the large version has 12.",
"Therefore, the number of parameters is similar to that of its equivalently sized BERT.",
"In this part, we first introduce the task formulation, then we describe how we use the Seq2Seq model with the pointer mechanism to generate the entity index sequences.",
"After that, we present the detailed formulation of our model with BART.",
"3.1 NER Task: The three kinds of NER subtasks can all be formulated as follows: given an input sentence of $n$ tokens $X = [x_1, x_2, \ldots, x_n]$, the target sequence is $Y = [s_{11}, e_{11}, \ldots, s_{1j}, e_{1j}, t_1, \ldots, s_{i1}, e_{i1}, \ldots, s_{ik}, e_{ik}, t_i]$, where $s$, $e$ are the start and end indexes of a span; since an entity may contain one (for flat and nested NER) or more than one (for discontinuous NER) span, each entity is represented as $[s_{i1}, e_{i1}, \ldots, s_{ij}, e_{ij}, t_i]$, where $t_i$ is the entity tag index.",
"We use $G = [g_1, \ldots, g_l]$ to denote the entity tag tokens (such as Person, Location, etc.), where $l$ is the number of entity tags.",
"We make $t_i \in (n, n + l]$; the shift by $n$ is to make sure $t_i$ is not confused with the pointer indexes (pointer indexes lie in the range $[1, n]$).",
"Since we formulate the NER task in a generative way, we can view the NER task as the following equation: $P(Y \mid X) = \prod_{t=1}^{m} P(y_t \mid X, Y_{<t})$,",
"where $y_0$ is the special start-of-sentence token.",
"We use the Seq2Seq framework with the pointer mechanism to tackle this task.",
"Therefore, our model consists of two components: (1) the Encoder, which encodes the input sentence $X$ into the hidden representations $H^e$.",
"(Because of the cross-attention between encoder and decoder, the number of parameters of BART is about 10% larger than that of its equivalently sized BERT (Lewis et al., 2020).)",
"(2) The Decoder, which gets the index probability distribution for each step, $P_t = P(y_t \mid X, Y_{<t})$.",
"However, since $Y_{<t}$ contains pointer and tag indexes, it cannot be directly input to the Decoder.",
"We use the Index2Token conversion to convert indexes into tokens: $\hat{y}_t = X_{y_t}$ if $y_t \le n$, and $\hat{y}_t = G_{y_t - n}$ if $y_t > n$ (3).",
"After converting each $y_t$ this way, we can get the last hidden state $h_t^d \in \mathbb{R}^d$ with $\hat{Y}_{<t} = [\hat{y}_1, \ldots, \hat{y}_{t-1}]$ as follows: $h_t^d = \mathrm{Decoder}(H^e; \hat{Y}_{<t})$ (4).",
"Then, we can use the following equations to achieve the index probability distribution $P_t$: $E^e = \mathrm{TokenEmbed}(X)$ (5); $\hat{H}^e = \mathrm{MLP}(H^e)$ (6); $\bar{H}^e = \alpha \hat{H}^e + (1 - \alpha) E^e$ (7); $G^d = \mathrm{TokenEmbed}(G)$ (8); $P_t = \mathrm{Softmax}([\bar{H}^e \otimes h_t^d; G^d \otimes h_t^d])$ (9), where TokenEmbed denotes the embeddings shared between the Encoder and Decoder; $E^e, \hat{H}^e, \bar{H}^e \in \mathbb{R}^{n \times d}$; $\alpha \in \mathbb{R}$ is a hyper-parameter; $G^d \in \mathbb{R}^{l \times d}$; $[\cdot;\cdot]$ means concatenation in the first dimension; and $\otimes$ means the dot product.",
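As a sanity check on Eqs. (5)-(9), here is a minimal single-step sketch in PyTorch; it assumes one unbatched instance, folds the MLP of Eq. (6) into its input for brevity, and all variable names are ours.

```python
import torch
import torch.nn.functional as F

def pointer_distribution(He_mlp, Ee, Gd, hd, alpha=0.5):
    """He_mlp: (n, d) MLP-transformed encoder states (Eq. 6 already applied);
    Ee: (n, d) input token embeddings; Gd: (l, d) tag token embeddings;
    hd: (d,) decoder hidden state at step t; alpha: mixing hyper-parameter."""
    H_bar = alpha * He_mlp + (1 - alpha) * Ee      # Eq. (7)
    scores = torch.cat([H_bar @ hd, Gd @ hd])      # Eq. (9): [pointer ; tag] logits
    return F.softmax(scores, dim=-1)               # P_t over the n + l candidate indexes
```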
"During the training phase, we use the negative log-likelihood loss and the teacher forcing method.",
"During the inference, we use an autoregressive manner to generate the target sequence.",
"We use the decoding algorithm presented in Algorithm 1 to convert the index sequence into entity spans.",
"Since our model is a Seq2Seq model, it is natural to utilize the pre-training Seq2Seq model BART to enhance our model.",
"Algorithm 1 (decoding algorithm to convert the entity representation sequence into entity spans): given the target sequence $Y = [y_1, \ldots, y_m]$ with $y_i \in [1, n + |G|]$, initialize $E = \{\}$ and $e = []$; scan $Y$ from left to right: if $y_i \le n$, append the pointer index $y_i$ to the current span $e$; if $y_i > n$, it is a tag index, so if $e$ is non-empty, add $(e, G_{y_i - n})$ to $E$ and reset $e = []$; finally, return the entity spans $E$.",
"[Figure 3: The bottom three lines are examples of the three kinds of entity representations that determine the entity in the sentence unambiguously.]",
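A runnable Python version of Algorithm 1 is sketched below; the function name and the example tag names are our own illustrative choices.

```python
def decode(Y, n, tags):
    """Convert a generated index sequence into entity spans (Algorithm 1).

    Y: target index sequence; pointer indexes lie in [1, n], tag indexes in (n, n+l].
    tags: list of l tag names, so that index n + k maps to tags[k - 1]."""
    entities, span = [], []
    for y in Y:
        if y > n:                      # a tag index: close the current entity
            if span:
                entities.append((span, tags[y - n - 1]))
            span = []
        else:                          # a pointer index: extend the current span
            span.append(y)
    return entities

# With n = 8 and tags = ['PER', 'LOC', 'ORG']:
# decode([1, 2, 9, 7, 8, 11], 8, ['PER', 'LOC', 'ORG'])
# -> [([1, 2], 'PER'), ([7, 8], 'ORG')]
```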
"We present a visualization of our model based on BART in Figure 2.",
"However, BART's adoption is non-trivial because the Byte-Pair-Encoding (BPE) tokenization used in BART might tokenize one token into several BPEs.",
"To exploit how to use BART efficiently, we propose three kinds of pointer-based entity representations to locate entities in the original sentence unambiguously.",
"The three entity representations are as follows. Span: the position index of the first BPE of the starting entity word and the last BPE of the ending entity word.",
"the same way.",
"BPE: the position indexes of all BPEs of the entity words.",
"Word: only the position index of the first BPE of each entity word is used.",
"For all cases, we will append the entity tag to the entity representation.",
"An example of the entity representations is presented in Figure 3.",
"If a word does not belong to any entity, it will not appear in the target sequence.",
"If a whole sentence has no entity, the prediction should be an empty sequence (containing only the start-of-sentence (<s>) token and the end-of-sentence (</s>) token).",
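To make the three representations concrete, here is a small sketch that builds the Span, BPE, and Word pointer sequences for one entity; the word-to-BPE span map and the function name are our own assumptions.

```python
def entity_targets(word_spans, entity_words, tag_idx, n):
    """word_spans[i]: (begin, end) BPE positions of word i, end exclusive;
    entity_words: indexes of the words forming the entity, in order;
    tag_idx: 1-based tag index, shifted by n so it cannot collide with pointers."""
    first = lambda w: word_spans[w][0]
    span = [first(entity_words[0]), word_spans[entity_words[-1]][1] - 1]  # Span
    bpe = [p for w in entity_words for p in range(*word_spans[w])]        # BPE
    word = [first(w) for w in entity_words]                               # Word
    tag = n + tag_idx                    # the entity tag is appended in all cases
    return span + [tag], bpe + [tag], word + [tag]
```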
"To show that our proposed method can be used in various NER subtasks, we conducted experiments on eight datasets.",
"Flat NER Datasets We adopt the CoNLL-2003 (Sang and Meulder, 2003) and the OntoNotes dataset 6 (Pradhan et al., 2013b).",
"For CoNLL-2003, we follow Lample et al. (2016); Yu et al. (2020) to train our model on the concatenation of the train and development sets.",
"For the OntoNotes dataset, we use the same train, development, test splits as Pradhan et al. (2012); Yu et al. (2020), and the New Testaments portion were excluded since there is no entity in this portion (Chiu and Nichols, 2016).",
"Nested NER Datasets We conduct experiments on ACE 2004 7 (Doddington et al., 2004), ACE 2005 8 (Walker and Consortium, 2005), Genia corpus (Kim et al., 2003).",
"For ACE2004 and ACE2005, we use the same data split as Lu and Roth (2015); Muis and Lu (2017); Yu et al. (2020), the ratio between train, development and test set is 8:1:1.",
"For Genia, we follow Wang et al. (2020b); Shibuya and Hovy (2020) to use five types of entities and split the train/dev/test as 8.1:0.9:1.0.",
"(Footnotes: 6. https://catalog.ldc.upenn.edu/LDC2013T19; 7. https://catalog.ldc.upenn.edu/LDC2005T09; 8. https://catalog.ldc.upenn.edu/LDC2006T06; 9. In the reported experiments, they included the document context.)",
"We rerun their code with only the sentence context.",
"That the lack of document context might cause performance degradation was also confirmed by the author himself in https://github.com/juntaoy/biaffine-ner/issues/8#issuecomment-650813813.",
"Discontinuous NER Datasets We follow Dai et al. (2020) to use CADEC (Karimi et al., 2015), ShARe13 (Pradhan et al., 2013a) and ShARe14 (Mowery et al., 2014) corpus.",
"Since only the Adverse Drug Events (ADEs) entities include discontinuous annotation, only these entities were considered (Dai et al., 2020; Metke-Jimenez and Karimi, 2016; Tang et al., 2018).",
"We use the BART-Large model, whose encoder and decoder each has 12 layers for all experiments, making it the same number of transformer layers as the BERT-Large and RoBERTa-Large model.",
"We did not use any other embeddings, and the BART model is fine-tuned during the optimization.",
"We put more detailed experimental settings in the Supplementary Material.",
"We report the span-level F1.",
"Results are shown in Table 1.",
"We do not compare with Yamada et al. (2020) since they added entity information during the pre-training process.",
"Clark et al. (2018); Peters et al. (2018); Akbik et al. (2019); Strakova et al. (2019) assigned a label to each token, and Li et al. (2020b); Yu et al. (2020) are based on span-level classifications, while our method is based on the entity sequence generation.",
"And for both datasets, our method achieves better performance.",
"We will discuss the performance difference between our three entity representations in Section 5.4.",
"Table 2 presents the results for the three nested NER datasets.",
"Our proposed BART-based generative models are comparable to the token-level classification (Strakova et al., 2019; Shibuya and Hovy, 2020) and span-level classification (Luan et al., 2019; Li et al., 2020b; Wang et al., 2020a) models.",
"Results in Table 3 show the comparison between our model and other models in three discontinuous NER datasets.",
"Although Dai et al. (2020) tried to utilize BERT to enhance the model performance, they found that ELMo worked better.",
"In all three datasets, our model achieves better performance.",
"In this part, we discuss the performance difference between the three entity representations.",
"The Word entity representation achieves better performance almost in all datasets.",
"And the comparison between the Span and BPE representations is more involved.",
"To investigate the reason behind these results, we calculate the average and median length of entities when using different entity representations, and the results are presented in Table 4.",
"It is clear that for a generative framework, the shorter the entity representation, the better the performance it should achieve.",
"Therefore, as shown in Table 4, the Word representation with smaller average entity length in CoNLL2003, OntoNotes, CADEC, ShARe13 achieves better performance in these datasets.",
"However, although the average entity length of the BPE representation is longer than that of the Span representation, it achieves better performance in CoNLL2003, OntoNotes, ACE2004, and ACE2005; this is because the BPE representation is more similar to the pre-training task, namely, predicting continuous BPEs.",
"And we believe this task similarity is also the reason why the Word representation (Most of the words will be tokenized into a single BPE, making the Word representation still continuous.) achieves better performance than the Span representation in ACE2004, ACE2005, and ShARe14, although the former has longer entity length.",
"A clear outlier is the Genia dataset, where the Span representation achieves better performance than the other two.",
"We presume this is because in this dataset, a word will be tokenized into a longer BPE sequence (this can be inferred from the large entity length gap between the Word and BPE representation.) so that the Word representation will also be dissimilar to the pre-training tasks.",
"For example, the protein 'lipoxygenase iso-forms' will be tokenized into the sequence ['Ġlip', 'oxy', 'gen', 'ase', 'Ġiso', 'forms'], which makes the target sequence of the Word representation ['Ġlip', 'Ġiso'], resulting in a discontiguous BPE sequence.",
"[Table 5: Invalid prediction probabilities for the Word entity representation. Flat NER: CoNLL2003 E1 0.05%, E2 0.04%, E3 0.05%; OntoNotes E1 0.02%, E2 0.03%, E3 0.02%. Nested NER: ACE2004 E1 0.23%, E2 0.13%, E3 0.30%; ACE2005 E1 0.06%, E2 0.22%, E3 0.26%; Genia E1 0.0%, E2 0.11%, E3 0.06%. Discontinuous NER: CADEC E1 0.31%, E2 1.02%, E3 0.0%; ShARe13 E1 0.0%, E2 0.18%, E3 0.08%; ShARe14 E1 0.01%, E2 0.16%, E3 0.02%.]",
"Since only about 10% of entities in the discontinuous NER datasets are discontinuous, evaluating on the whole dataset alone may not show whether our model can recognize discontinuous entities.",
"Therefore, like Dai et al. (2020) and Muis and Lu (2016), we report our model's performance on the discontinuous entities in Table 6.",
"As shown in Table 6, our model can predict the discontinuous named entities and achieve better performance.",
"In this part, we mainly focus on the analysis of the Word representation since it generally achieves better performance.",
"We do not restrict the output distribution; therefore, the entity prediction may contain invalid predictions, as shown in Table 5, which indicates that the BART model can learn the prediction representations quite well since, in most cases, the invalid predictions account for less than 1%.",
"We exclude all these invalid predictions during evaluation.",
"The entity order is determined by the order of appearance in the sentence, and we want to study whether an entity that appears later in the target sequence has worse recall than entities that appear earlier.",
"The results are provided in Figure 4.",
"For the flat NER and discontinuous NER datasets, the later the entity appears, the larger the probability that it can be recalled.",
"While for the nested NER, the recall curve is quite involved.",
"We assume this phenomenon is because, for the flat NER and discontinuous NER (more than 91.1% of entities are continuous) datasets, different entities have less dependence on each other.",
"While in the nested NER dataset, entities in the latter position may be the outermost entity that contains the former entities.",
"The wrong prediction of former entities may negatively influence the later entities.",
"In this paper, we formulate NER subtasks as an entity span sequence generation problem, so that we can use a unified Seq2Seq model with the pointer mechanism to tackle flat, nested, and discontinuous NER subtasks.",
"The Seq2Seq formulation enables us to smoothly incorporate the pre-training Seq2Seq model BART to enhance the performance.",
"To better utilize BART, we test three types of entity representation methods to linearize the entity span into sequences.",
"Results show that the entity representation with a shorter length and more similar to continuous BPE sequences achieves better performance.",
"Our proposed method achieves SoTA or near SoTA performance for eight different NER datasets, proving its generality to various NER subtasks.",
"We would like to thank the anonymous reviewers for their insightful comments.",
"The discussion with colleagues in AWS Shanghai AI Lab was quite fruitful.",
"We also thank the developers of fastNLP 10 and fitlog 11 .",
"We thank Juntao Yu for helpful discussion about dataset processing.",
"This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106700) and National Natural Science Foundation of China (No. 62022027).",
"Out of consideration for ethical concerns, we make a detailed description as follows:",
"(1) All of the experiments are conducted on existing datasets, which are derived from public scientific papers.",
"(2) We describe the characteristics of the datasets in a specific section.",
"(3) Our work does not contain identity characteristics, and it does not harm anyone.",
"(4) Our experiments do not need a lot of computing resources compared to pre-trained models.",
"Our analysis is consistent with the results."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Zero-shot stance detection (ZSSD) aims to detect the stance for an unseen target during the inference stage.",
"In this paper, we propose a joint contrastive learning (JointCL) framework, which consists of stance contrastive learning and target-aware prototypical graph contrastive learning.",
"Specifically, a stance contrastive learning strategy is employed to better generalize stance features for unseen targets.",
"Further, we build a prototypical graph for each instance to learn the target-based representation, in which the prototypes are deployed as a bridge to share the graph structures between the known targets and the unseen ones.",
"Then a novel target-aware prototypical graph contrastive learning strategy is devised to generalize the reasoning ability of target-based stance representations to the unseen targets.",
"Extensive experiments on three benchmark datasets show that the proposed approach achieves state-of-the-art performance in the ZSSD task 1 .",
"Stance detection aims to automatically identify one's opinionated standpoint/attitude (e.g. Pro , Con , or Neutral , etc.) expressed in text towards a specific proposition, topic, or target (Somasun-daran and Wiebe, 2010; Augenstein et al., 2016; Mohammad et al., 2016; Sobhani et al., 2017).",
"For example, a text Everyone is able to believe in whatever they want. expresses a stance of Pro towards the target Atheism .",
"Existing methods achieved promising performance in in-target stance detection when trained and tested on the datasets towards the same set of Equal contribution Corresponding Author 1 The source code of this work is released at https:// github.com/HITSZ-HLT/JointCL targets (Mohtarami et al., 2018; Graells-Garrido et al., 2020), and in cross-target stance detection that identifies the stance of a destination target using models trained on a related source target in a one-to-one way (Xu et al., 2018; Zhang et al., 2020; Liang et al., 2021a).",
"In practice, however, it is infeasible to enumerate all possible (in-target) or related (cross-target) targets beforehand for training stance detection models.",
"Hence, zero-shot stance detection (ZSSD) (Allaway and McKeown, 2020), which aims to detect the stance for unseen targets during the inference stage is a promising scenario forward.",
"To deal with ZSSD, intuitively, we can either reason the target-based stance features from the learned stance information based on the context (i.e., from the context-aware perspective), or identify stance information that is potentially relevant with unseen targets from the learned target-related stance expressions (i.e., from the target-aware perspective).",
"Existing research attempted to explore attention mechanism (Allaway and McKeown, 2020), adversarial learning (Allaway et al., 2021), or graph architecture based on external commonsense knowledge (Liu et al., 2021) to learn the stance representations from the context regarding the known targets, aiming to generalize the learned stance features to the unseen targets for ZSSD.",
"But they tend to ignore that the stance information of an unseen target can be represented in the light of the known targets from the target-aware perspective.",
"In this paper, to generalize the stance features to the unseen targets, we propose a joint contrastive learning ( JointCL ) framework to leverage the stance features of known targets from both the context-aware and the target-aware perspectives.",
"On the one hand, from the context-aware perspective, we explore a Stance Contrastive Learning strategy, which effectively improves the quality of stance features by leveraging the similarity of training instances within a stance class while pushing away instances from other stance classes.",
"This essentially allows the exploitation of target-based contextual stance features to better generalize to the unseen targets.",
"On the other hand, from the target-aware perspective, we propose a feasible solution to capture the relationships between the known targets and the unseen ones.",
"Specifically, inspired by (Li et al., 2021), we explore a clustering method to generate prototypes from all training instances.",
"We then build prototypical graphs linking the prototypes with the target-based representations, in which each prototype is regarded as a bridge that allows the sharing of the graph structures between known targets and unseen ones.",
"Based on the prototypical graphs, we devise a novel Target-Aware Prototypical Graph Contrastive Learning strategy to learn the correlation and difference among the target-based representations.",
"Specifically, a novel edge-oriented graph contrastive loss is deployed to make the graph structures similar for similar target-based representations, and different for dissimilar ones.",
"This essentially generalizes the graph structures learned from the known targets to the unseen ones, so as to better derive target-aware stance information for the unseen targets by the graph representations.",
"The ZSSD task is approached from a new perspective for detecting stance of an unseen target via reasoning the target-based stance features from the learned stance information based on the context or devising the target-aware stance information that is potentially relevant with the unseen target from the learned ones.",
"We propose a novel joint contrastive learning ( JointCL ) framework, which consists of stance contrastive learning and target-aware prototypical graph contrastive learning, to generalize the target-based stance features to the unseen targets.",
"Extensive experiments on three benchmark datasets show that the proposed JointCL framework outperforms state-of-the-art baselines in the ZSSD task.",
"Further, the proposed JointCL framework can be easily extended to the few-shot and cross-target stance detection and achieves outstanding performance.",
"Zero-shot stance detection (ZSSD) aims to detect stance for destination unseen targets by learning stance features from known targets (Allaway and McKeown, 2020).",
"To deal with zero-shot stance detection, Allaway and McKeown (2020) created a new dataset consisting of a large range of topics covering broad themes, called Varied Stance Topics (VAST).",
"Based on it, they proposed a topic-grouped attention model to implicitly capture relationships between targets by using generalized topic representations.",
"Allaway et al. (2021) adopted a target-specific stance detection dataset (Mohammad et al., 2016) and deployed adversarial learning to extract target-invariant transformation features in ZSSD.",
"More recently, to exploit both the structural-level and semantic-level information of the relational knowledge, Liu et al. (2021) proposed a commonsense knowledge enhanced graph model based on BERT (Devlin et al., 2019) to tackle ZSSD.",
"Contrastive learning in the latent space has recently shown great promise, which aims to make the representation of a given anchor data to be similar to its positive pairs and dissimilar to its negative pairs (Hadsell et al., 2006; Wu et al., 2018; Tian et al., 2020; Chen et al., 2020a; Khosla et al., 2020; Chen et al., 2020b; Zhang et al., 2021; Wang et al., 2021; Gunel et al., 2021).",
"Various contrastive learning approaches have been developed to deal with natural language processing tasks (Kachuee et al., 2021; Qin et al., 2021; Yang et al., 2021; Liu and Liu, 2021; Liang et al., 2021b), including unsupervised text representation learning (Giorgi et al., 2021), text classification (Qiu et al., 2021), and text clustering (Zhang et al., 2021).",
"More recently, Li et al. (2021) presented prototypical contrastive learning and a ProtoNCE loss to encourage representations to be closer to their assigned prototypes.",
"However, this method only models the relationship between an anchor instance and its nearest prototype.",
"On the other hand, You et al. (2020) proposed a graph contrastive learning framework based on graph data augmentation, which improves the graph representations for better generalizability and robustness.",
"[Figure 1: The architecture of our JointCL framework: an encoder produces hidden vectors for each mini-batch of the training set, which feed the stance contrastive learning module, the prototypes generation step (clustering), the target-aware prototypical graph contrastive learning module, and the classifier.]",
"However, their approach ignores the relationships of edges with regard to the graph structures.",
"In our ( JointCL ) framework, we devise a novel edge-oriented graph contrastive loss to learn the contrastive information of the relationships between prototypes and the targets, thus generalizing the graph structures to the unseen targets for learning target-aware stance information.",
"In this section, we describe the proposed Joint Contrastive Learning ( JointCL ) framework for zero-shot stance detection in detail.",
"As demonstrated in Figure 1, the architecture of the JointCL framework contains four main components: 1) stance contrastive learning , which performs contrastive learning based on the supervised signal of stance labels for better generalization of stance features; 2) prototypes generation , which derives the prototypes of the training data by a clustering method; 3) target-aware prototypical graph contrastive learning , which performs the edge-oriented graph contrastive learning strategy based on the target-aware prototypical graphs for sharing the graph structures between known targets and unseen ones; 4) classifier , which detects the stances of targets based on the hidden vectors and graph representations.",
"Let $D_s = \{(r_i, t_i, y_i)\}_{i=1}^{N_s}$ be the training set, where $t_i$ and $y_i$ denote the training target and the stance label towards the context $r_i$, respectively.",
"$N_s$ is the number of training instances.",
"Further, let $D_d = \{(r_i^d, t_i^d)\}_{i=1}^{N_d}$ be the testing set for the targets which are unseen in the training set.",
"Here, $t_i^d$ is the testing target in the context $r_i^d$.",
"The goal of ZSSD is to predict a stance label (e.g. Pro , Con , or Neutral ) of each testing instance by training a model on the training set.",
"Given a sequence of words $r = \{w_i\}_{i=1}^{n}$ and the corresponding target $t$, where $n$ is the length of the sentence $r$, we adopt a pre-trained BERT (Devlin et al., 2019) as the encoder module and feed [CLS] r [SEP] t [SEP] as input into the encoder module to obtain a $d_m$-dimensional hidden representation $h \in \mathbb{R}^{d_m}$ of each input instance: $h = \mathrm{BERT}([\mathrm{CLS}]\, r\, [\mathrm{SEP}]\, t\, [\mathrm{SEP}])$ (1).",
"Here, we use the vector of the [ CLS ] token to represent the input instance.",
"For the training set $D_s$, the hidden representations of the training instances can be represented as $H = \{h_i\}_{i=1}^{N_s}$.",
"As previously discussed in Gunel et al. (2021), good generalization requires capturing the similarity between examples in one class and contrasting them with examples in other classes.",
"To improve the generalization ability of stance learning, we define a stance contrastive loss on the hidden vectors of instances with the supervised stance label information.",
"Given the hidden vectors $\{h_i\}_{i=1}^{N_b}$ in a mini-batch $B$ (here, $N_b$ is the size of the mini-batch) and an anchor hidden vector $h_i$, a pair $h_i, h_j \in B$ with the same stance label, i.e., $y_i = y_j$, where $y_i$ and $y_j$ are the stance labels of $h_i$ and $h_j$, respectively, is considered a positive pair, while the samples $\{h_k \in B, k \neq i\}$ are treated as negative representations with respect to the anchor.",
"Then the contrastive loss is computed across all positive pairs, both $(h_i, h_j)$ and $(h_j, h_i)$, in a mini-batch: $\mathcal{L}_{stance} = \frac{1}{N_b} \sum_{h_i \in B} \ell_s(h_i)$ (2) and $\ell_s(h_i) = -\log \frac{\sum_{j \in B \setminus i} \mathbb{1}_{[y_i = y_j]} \exp(\mathrm{f}(h_i, h_j)/\tau_s)}{\sum_{j \in B \setminus i} \exp(\mathrm{f}(h_i, h_j)/\tau_s)}$ (3), where $\mathbb{1}_{[i = j]} \in \{0, 1\}$ is an indicator function evaluating to 1 iff $i = j$.",
"$\mathrm{f}(u, v) = \mathrm{sim}(u, v) = u \cdot v / (\lVert u \rVert \lVert v \rVert)$ denotes the cosine similarity between vectors $u$ and $v$.",
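A compact PyTorch sketch of Eqs. (2)-(3) follows; it assumes every anchor has at least one positive in the batch, and the function name is ours.

```python
import torch
import torch.nn.functional as F

def stance_contrastive_loss(h, y, tau=0.07):
    """h: (N_b, d) hidden vectors of a mini-batch; y: (N_b,) stance labels."""
    h = F.normalize(h, dim=-1)                 # so dot products are cosine similarities
    sim = (h @ h.t()) / tau                    # (N_b, N_b) scaled similarity matrix
    eye = torch.eye(len(y), dtype=torch.bool, device=h.device)
    sim = sim.masked_fill(eye, float('-inf'))  # exclude j == i from both sums
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~eye
    exp = sim.exp()
    # -log( sum over same-stance pairs / sum over all other pairs ), Eq. (3);
    # assumes each anchor has at least one same-stance partner in the batch.
    loss = -torch.log((exp * pos).sum(dim=1) / exp.sum(dim=1))
    return loss.mean()                         # average over anchors, Eq. (2)
```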
"In the Prototypical Networks for few-shot learning, Snell et al. (2017) derived the prototype of each class by computing the mean vector of the embedded support points belonging to the class.",
"However, in the ZSSD data, the distribution of targets is usually imbalanced.",
"Therefore, inspired by Li et al. (2021), we perform k-means clustering on the hidden vectors of the training instances $H = \{h_i\}_{i=1}^{N_s}$ to generate $k$ clusters as the prototypes $C = \{c_i\}_{i=1}^{k}$ with respect to the target-based representations of the training set.",
"Here, a prototype is defined as a representative embedding for a group of semantically similar instances (Li et al., 2021).",
"Clustering is performed at each training epoch to update the prototypes.",
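A minimal sketch of this per-epoch prototype generation step with scikit-learn is given below; k = 100 follows the VAST setting reported later, and the function name is our own.

```python
from sklearn.cluster import KMeans

def generate_prototypes(H, k=100):
    """H: (N_s, d_m) array of hidden vectors of all training instances.
    Returns the k centroids used as the prototypes C = {c_1, ..., c_k};
    re-run at every training epoch to keep the prototypes up to date."""
    km = KMeans(n_clusters=k, n_init=10).fit(H)
    return km.cluster_centers_
```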
"Once the prototypes are generated, a prototypical graph is constructed to capture the relationships between the prototypes and the known targets.",
"This enables the learning of the representation of a target-based instance by modeling the different weights of edges between its corresponding target and various prototypes, so as to generalize the learned graph information to the unseen targets.",
"Here, the prototypes and the target-based representations are updated in an alternative manner.",
"For a hidden vector $h_i$ of a training instance $i$, we first treat the prototypes $C$ and the hidden vector $h_i$ as nodes of the prototypical graph, $X = [c_1, c_2, \ldots, c_k, h_i]$, and then construct the adjacency matrix $G \in \mathbb{R}^{(k+1) \times (k+1)}$ of the fully-connected graph, with $G_{i,j} = G_{j,i} = 1$.",
"Next, we feed the nodes $X$ and the corresponding adjacency matrix $G$ into a graph attention network (GAT) (Velickovic et al., 2018) to derive the attention scores $\theta_i$ and the graph representation $z_i$ for the target-based instance $i$: $\theta_i = \mathrm{a}(\mathrm{GAT}(X; G))$ (4) and $z_i = \mathrm{f}(\mathrm{GAT}(X; G))$ (5), where $\mathrm{GAT}(\cdot)$ represents the GAT operation.",
"$\mathrm{a}(\cdot)$ denotes retrieving the attention score matrix from the GAT operation, and $\mathrm{f}(\cdot)$ denotes retrieving the graph representation for $h_i$.",
"From the target-aware perspective, we further explore a Target-Aware Prototypical Graph Contrastive Learning strategy, aiming at generalizing the graph structures learned from the known targets to the unseen ones.",
"Specifically, for the attention matrices $\{\theta_i\}_{i=1}^{N_b}$ in each mini-batch $B$, we devise a novel edge-oriented prototypical graph contrastive loss, making the graph structures of similar target-based representations similar.",
"This essentially allows the model to learn the representations of (unseen) targets through the prototypes, thus generalizing the target-aware stance information to the unseen targets.",
"For an anchor instance $i$ with edge weights (i.e., the attention score matrix) $\theta_i$, we construct a positive pair $(\theta_i, \theta_j)$ by retrieving the attention score matrix of an instance $j$ which is either about the same target or assigned to the same prototype, and which expresses the same stance as $i$.",
"We also construct negative pairs $(\theta_i, \theta_k)$, $k \in B$, $k \neq i$.",
"Then, the edge-oriented graph contrastive loss is defined as: $\mathcal{L}_{graph} = \frac{1}{N_b} \sum_{i \in B} \ell_g(\theta_i)$ (6) and $\ell_g(\theta_i) = -\log \frac{\sum_{j \in B \setminus i} \delta(i, j) \exp(\mathrm{f}(\theta_i, \theta_j)/\tau_g)}{\sum_{j \in B \setminus i} \exp(\mathrm{f}(\theta_i, \theta_j)/\tau_g)}$ (7), with $\delta(i, j) = 1$ if $y_i = y_j$ and $p_i = p_j$, and $\delta(i, j) = 0$ otherwise (8), where $p_i = p_j$ represents that instances $i$ and $j$ correspond to the same target or belong to the same prototype, and express the same stance.",
"(To compute the cosine similarity here, we flatten each matrix $\theta_i$ into a one-dimensional array.)",
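Following the stance-loss sketch above, Eqs. (6)-(8) can be sketched the same way; the flattening mirrors the cosine-similarity note, and the `group` ids are our stand-in for the paper's same-target-or-same-prototype condition p_i = p_j.

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(theta, y, group, tau=0.07):
    """theta: (N_b, k+1, k+1) GAT attention matrices; y: (N_b,) stance labels;
    group: (N_b,) ids encoding the same-target-or-same-prototype assignment."""
    a = F.normalize(theta.flatten(1), dim=-1)       # flatten each matrix, then cosine
    sim = (a @ a.t()) / tau
    eye = torch.eye(len(y), dtype=torch.bool, device=a.device)
    sim = sim.masked_fill(eye, float('-inf'))
    # delta(i, j) = 1 iff same stance and same target/prototype, Eq. (8);
    # assumes each anchor has at least one such positive in the batch.
    delta = (y.unsqueeze(0) == y.unsqueeze(1)) & \
            (group.unsqueeze(0) == group.unsqueeze(1)) & ~eye
    exp = sim.exp()
    return (-torch.log((exp * delta).sum(dim=1) / exp.sum(dim=1))).mean()
```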
"The calculation of the stance and edge-oriented prototypical graph contrastive losses for each mini-batch $B$ is illustrated in Algorithm 1.",
"[Algorithm 1: Calculation of the stance and edge-oriented prototypical graph contrastive losses.]",
"3.7 Stance Detection: For each instance $i$, we first concatenate the hidden vector $h_i$ and the graph representation $z_i$ to get the output representation $v_i$ of instance $i$: $v_i = h_i \oplus z_i$ (9).",
"Then the output representation $v_i$ is fed into a classifier with a softmax function to produce the predicted stance distribution $\hat{y}_i \in \mathbb{R}^{d_y}$: $\hat{y}_i = \mathrm{softmax}(W v_i + b)$ (10),",
"where d y is the dimensionality of stance labels.",
"W R d y d m and b R d y are trainable parameters.",
"We adopt a cross-entropy loss between predicted distribution y i and ground-truth distribution y i of instance i to train the classifier: L class = N b (cid:88) i =1 d y (cid:88) j =1 y ji log y ji (11) 3.8 Learning Objective The learning objective of our proposed model is to train the model by jointly minimizing the three losses generated by stance detection, stance contrastive learning, and target-aware prototypical graph contrastive learning.",
"The overall loss L is formulated by summing up three losses: L = c L class + s L stance + g L graph + || || 2 (12) where c , s and g are tuned hyper-parameters.",
"denotes all trainable parameters of the model, represents the coefficient of L 2 -regularization.",
"We conduct experiments on three datasets to evaluate the proposed JointCL framework.",
"1) VAST (Allaway and McKeown, 2020), which contains a large variety of targets.",
"Each instance consists of a sentence r , a target t , and a stance label y ( Pro , Con , or Neutral ) towards t .",
"To show the generalizability of coping with few-shot stance detection, following (Allaway and McKeown, 2020), we also conduct experiments on few-shot condition.",
"The statistics of VAST dataset are shown in Table 1. 2) SEM 16 , which contains 6 pre-defined targets, including Donald Trump (DT), Hillary Clinton (HC), Feminist Movement (FM), Legalization of Abortion (LA), Atheism (A), and Climate Change (CC).",
"Each instance can be classified as Favor , Against or Neutral .",
"3) WT-WT , which contains 5 pre-defined company pairs (tar-get), including CVS_AET (CA), CI_ESRX (CE), ANTM_CI (AC), and AET_HUM (AH).",
"Each instance refers to a stance label from Support (cor-responding to Favor ), Refute (corresponding to Against ), Comment (corresponding to Neutral ), or Unrelated .",
"The statistics of WT-WT and SEM 16 datasets are shown in Table 2. Following (Allaway et al., 2021) and (Conforti et al., 2020), for SEM 16 and WT-WT datasets, we use the leave-one-target-out evaluation setup.",
"embedding module in which each word token is mapped to a 768-dimensional embedding.",
"The learning rate is set to 3e-5.",
"Following (Xu et al., 2018), the coefficient is set to 1e-5.",
"Adam is utilized as the optimizer.",
"The mini-batch size is set to 16, considering the trade-off between computational resource and evaluation performance.",
"For contrastive losses, both the temperature parameters s and g are set to 0.07.",
"For clustering, the number of clusters are set to k = 100 for the VAST dataset and k = 10 for the WT-WT and SEM 16 datasets respectively.",
"Corresponding to the number of k , we set c = 0 .",
"8 , s = 1 , and g = 0 .",
"1 for VAST dataset and g = 0 .",
"5 for WT-WT and SEM 16 datasets, respectively.",
"They are the optimal hyper-parameters in the pilot studies.",
"We apply early stopping in training process and the patience is 5.",
"We report averaged scores of 10 runs to obtain statistically stable results.",
"Evaluation Metric For the VAST dataset, following (Allaway and McKeown, 2020), we calculate Macro-averaged F1 of each label to measure the testing performance of the models.",
"For the SEM 16 dataset, following (Allaway et al., 2021), we report F avg , the average of F1 on Favor and Against .",
"For the WT-WT dataset, following (Conforti et al., 2020), we report the Macro F1 score of each target.",
"We compare the proposed JointCL with a series of strong stance detection baselines, including neural network-based method: BiCond (Au-genstein et al., 2016), attention-based models: CrossNet (Xu et al., 2018) and SiamNet (San-tosh et al., 2019), knowledge-based method: SEKT (Zhang et al., 2020), graph network method: TPDG (Liang et al., 2021a), adversarial learning method: TOAD (Allaway et al., 2021), and BERT-based methods: BERT (Devlin et al., 2019), TGA Net (Allaway and McKeown, 2020), BERT-GCN (Liu et al., 2021), and CKE-Net (Liu et al., 2021).",
"In addition, we provide variants of our proposed JointCL in the ablation study: (1) w/o L stance denotes without stance contrastive learning.",
"(2) w/o L graph denotes without prototypical graph contrastive learning.",
"(3) w/o graph denotes that this model performs the target-aware contrastive learning on the hidden representations of the instances with the supervised information from target labels.",
"That is, the contrastive loss functions of Eq.",
"6 and Eq.",
"7 are replaced by: L graph = 1 N b (cid:88) h i B g ( h i ) (13) g ( h i ) = log (cid:80) j B\\ i 1 [ t i = t j ] exp( f ( h i , h j ) / ) (cid:80) j B\\ i exp( f ( h i , h j ) / ) (14) (4) w/o cluster denotes without using clustering to generate prototypes.",
"That is, this model simply takes the mean of target-based hidden representations as a prototype.",
"(5) w/o edge denotes without considering edge information, i.e., it performs the prototypical graph contrastive learning on the graph representations of the instance nodes.",
"The contrastive loss functions of Eq.",
"6 and Eq.",
"7 are replaced by: L graph = 1 N b (cid:88) z i B g ( z i ) (15) g ( z i ) = log (cid:80) j B\\ i 1 [ p i = p j ] exp( f ( z i , z j ) / ) (cid:80) j B\\ i exp( f ( z i , z j ) / ) (16) 5 Experimental Results 5.1 Main Results The main comparison results of ZSSD on three benchmark datasets are reported in Table 3. It can be observed from the experimental results, our proposed JointCL framework performs consistently better than the non-BERT and the BERT-based comparison models on both the VAST and WT-WT datasets, and achieves overall better performance than the comparison baselines on the SEM 16 dataset.",
"This verifies the effectiveness of our JointCL framework in the ZSSD task.",
"Furthermore, the significance tests of JointCL over the baseline models show that our JointCL significantly outperforms the baseline models (the results of p value on most of the evaluation metrics are less than 0 . 05 ).",
"More concretely, in comparison with the adversarial learning-based model ( TOAD ), our JointCL achieves significant improvement across all datasets.",
"This indicates that exploring graph contrastive learning to model the relationships among targets can better generalize the target-based stance features to the unseen targets.",
"In addition, the comparison results between our JointCL 86 Model VAST (%) SEM 16 (%) WT-WT (%) Pro Con Neu All DT HC FM LA A CC CA CE AC AH BiCond 44.6 47.4 34.9 42.8 30.5 32.7 40.6 34.4 31.0 15.0 56.5 52.5 64.9 63.0 CrossNet 46.2 43.4 40.4 43.4 35.6 38.3 41.7 38.5 39.7 22.8 59.1 54.5 65.1 62.3 SiamNet 47.5 43.3 39.6 43.5 36.9 37.5 44.3 41.6 41.2 25.6 58.3 54.4 68.7 67.7 SEKT 50.4 44.2 30.8 41.8 ------TPDG 53.7 49.6 52.3 51.9 47.3 50.9 53.6 46.5 48.7 32.3 66.8 65.6 74.2 73.1 TOAD 42.6 36.7 43.8 41.0 49.5 51.2 54.1 46.2 46.1 30.9 55.3 57.7 58.6 61.7 BERT 54.6 58.4 85.3 66.1 40.1 49.6 41.9 44.8 55.2 37.3 56.0 60.5 67.1 67.3 TGA Net 55.4 58.5 85.8 66.6 40.7 49.3 46.6 45.2 52.7 36.6 65.7 63.5 69.9 68.7 BERT-GCN 58.3 60.6 86.9 68.6 42.3 50.0 44.3 44.2 53.6 35.5 67.8 64.1 70.7 69.2 CKE-Net 61.2 61.2 88.0 70.2 -----JointCL (ours) 64.9 63.2 88.9 72.3 50.5 54.8 53.8 49.5 54.5 39.7 72.4 70.2 76.0 75.2 Table 3: Experimental results on three ZSSD datasets. The results with are retrieved from (Allaway and McKeown, 2020), from (Liu et al., 2021), from (Allaway et al., 2021), from (Conforti et al., 2020), and from (Liang et al., 2021a). Best scores are in bold.",
"and the previous BERT-based models demonstrate that the stance representations learned from known targets can be better generalized to the unseen targets with our proposed novel contrastive learning strategy.",
"To analyze the impact of different components in our proposed JointCL on the performance, we conduct an ablation study and report the results in Table 4. We can observe that the removal of stance contrastive learning ( w/o L stance ) sharply reduces the performance in all evaluation metrics and across all datasets.",
"This indicates that performing contrastive learning based on stance information can improve the quality of stance representations for better generalizing the learned stance features to the unseen targets, and thus improve the performance of ZSSD.",
"The removal of edge-oriented prototypical graph contrastive learning ( w/o L graph ) leads to considerable performance degradation.",
"This implies that performing target-based contrastive learning for prototypical graph can generalize the graph relations between known targets and prototypes to the unseen targets, which enables the model to derive better representation for the examples of unseen targets, and thus leads to improved ZSSD performance.",
"In addition, from the results of w/o graph we can see that purely performing the target-based contrastive learning on the hidden representations slashes the learning ability of stance contrastive learning, and thus leads to poorer performance.",
"This verifies the effectiveness of exploring prototypical graph contrastive learning in our JointCL .",
"We also observe that the performance of w/o cluster drops consistently across datasets, which indicates that exploring clustering method can effectively relieve the problem of the imbalanced distribution of targets in the dataset.",
"The removal of edge-oriented graph contrastive strategy ( w/o L edge ) leads to noticeable performance degradation.",
"This implies that, to represent the (unseen) targets with prototypes, we should pay more attention to the relationships between targets and prototypes, rather than simply drawing closer similar target-based representations in the graph.",
"To analyze the impact of using different values of k in k -means clustering on the performance, we conduct experiments on the three datasets, and show the results in Figure 2. Here, for VAST , we show the results of all labels.",
"For the SEM 16 and WT 87",
"WT , we show the average performance of all targets.",
"We observe that for VAST that contains a large number of targets (more than 5,000 in the training set), the performance increases with the increasing value of k and peaks at k = 100 .",
"Further increasing the values of k results in worse performance.",
"Similarly, for SEM 16 and WT-WT , better performance is obtained in the region of k [10 , 20] and peaks when k = 10 .",
"This implies that we can set an appropriate region for the value of k according to the number of targets in the dataset.",
"Analysis of Few-Shot Condition To evaluate the generalizability of our JointCL framework in few-shot stance detection, following (Allaway and McKeown, 2020; Liu et al., 2021), we also evaluate JointCL in the few-shot condition on the VAST dataset.",
"From the experimental results shown in Table 5, we can see that JointCL performs overall better than all the comparison methods under the few-shot condition.",
"This verifies the effectiveness and generalizability of JointCL in dealing with both zero-shot and few-shot stance detection.",
"Analysis of Cross-Target Scenario We further conduct comparison experiments in the cross-target scenario on the SEM 16 dataset.",
"Cross-target stance detection trains on a source target and tests on an unseen but related one, which is a task related to ZSSD.",
"We report the results in Table 6.",
"It can be observed that JointCL achieves consistently better performance on all cross-target scenarios, which Model HC DT DT HC FM LA LA FM BiCond 29.7 35.8 45.0 41.6 CrossNet 43.1 36.2 45.4 43.3 BERT 43.6 36.5 47.9 33.9 SEKT 47.7 42.0 53.6 51.3 TPDG 50.4 52.9 58.3 54.1 JointCL (ours) 52.8 54.3 58.8 54.5 Table 6: Experimental results of cross-target condition.",
"verifies that our JointCL can generalize the learning ability to deal with cross-target scenarios.",
"In addition, when compared with the results of Table 3, we see that the results of cross-target stance detection are generally better than ZSSD.",
"This shows that recognizing the relationships among targets in advance can potentially improve the stance detection performance for the unseen targets, which illustrates the challenge of the ZSSD task from another angle.",
"To qualitatively demonstrate how the proposed JointCL captures good generalization of stance features for unseen targets in ZSSD, we randomly select 200 test instances for each label from VAST dataset and show the t-SNE (van der Maaten and Hinton, 2008) visualization of intermediate em-beddings learned by BERT-GCN and our proposed JointCL on VAST in Figure 3. It can be seen that the distributions of representations derived from BERT-GCN largely overlap especially for the Pro and Con stances.",
"But there are clear separations between different stances (including the Pro and Con stances) produced by our proposed JointCL .",
"This verifies that the novel joint contrastive learning strategy in JointCL can better separate representations from different stances, so as to improve the performance of ZSSD.",
"In this paper, we propose a novel joint contrastive learning ( JointCL ) framework to deal with the zero-shot stance detection (ZSSD) task.",
"On the one hand, we deploy a stance contrastive learning strategy to improve the quality of stance representations, so as to capture good generalization of stance features for the unseen targets.",
"This is based on our observation that for some cases we can determine the stance towards a specific target from its associated context.",
"On the other hand, we devise a target-aware prototypical graph contrastive learning strategy to generalize the learned graph information to the unseen targets by leveraging the prototypes as a bridge to model the relationships between known and unseen targets.",
"This is for other cases when it is difficult to infer the stance for an unseen target from the context, but instead, could be relatively easier by exploiting the target-aware stance information from the learned associated targets.",
"Experimental results on three benchmark datasets show that our JointCL achieves state-of-the-art performance in ZSSD.",
"Further, the generalizability analysis shows that our JointCL can also perform outstandingly on few-shot and cross-target stance detection.",
"This work was partially supported by the National Natural Science Foundation of China (61876053, 62006062, 62176076, 62006060), UK Engineering and Physical Sciences Research Council (grant no. EP/V048597/1, EP/T017112/1), Natural Science Foundation of Guangdong Province of China (No. 2019A1515011705), Shenzhen Foundational Research Funding (JCYJ20200109113441941, JCYJ20210324115614039), Shenzhen Science and Technology Innovation Program (Grant No. KQTD20190929172835662), Joint Lab of Lab of HITSZ and China Merchants Securities.",
"Yulan He is supported by a Turing AI Fellowship funded by the UK Research and Innovation (UKRI) (grant no. EP/V020579/1)."
] | [
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"other"
] |
[
"This paper presents an audio visual automatic speech recognition (AV-ASR) system using a Transformer-based architecture.",
"We particularly focus on the scene context provided by the visual information, to ground the ASR.",
"We extract representations for audio features in the encoder layers of the transformer and fuse video features using an additional crossmodal multihead attention layer.",
"Additionally, we incorporate a multitask training criterion for multiresolution ASR, where we train the model to generate both character and subword level transcriptions.",
"Experimental results on the How2 dataset, indicate that multiresolution training can speed up convergence by around 50% and relatively improves word error rate (WER) performance by upto 18% over subword prediction models.",
"Further, incorporating visual information improves performance with relative gains upto 3.76% over audio only models.",
"Our results are comparable to state-of-the-art Listen, Attend and Spell-based architectures.",
"Automatic speech recognition is a fundamental technology used on a daily basis by millions of end-users and businesses.",
"Applications include automated phone systems, video captioning and voice assistants providing an intuitive and seemless interface between users and end systems.",
"Current ASR approaches rely solely on utilizing audio input to produce transcriptions.",
"However, the wide availability of cameras in smartphones and home devices acts as motivation to build AV-ASR models that rely on and benefit from multimodal input.",
"Traditional AV-ASR systems focus on tracking the user's facial movements and performing lipreading to augment the auditory inputs (Potamianos et al., 1997; Mroueh et al., 2015; Tao and Busso, 2018).",
"The applicability of such models in real world environments is limited, due to the need for accurate audio-video alignment and careful camera placement.",
"Instead, we focus on using video to contextualize the auditory input and perform multimodal grounding.",
"For example, a basketball court is more likely to include the term lay-up whereas an office place is more likely include the term lay-off.",
"This approach can boost ASR performance, while the requirements for video input are kept relaxed (Caglayan et al., 2019; Hsu et al., 2019).",
"Additionally we consider a multiresolution loss that takes into account transcriptions at the character and subword level.",
"We show that this scheme regularizes our model showing significant improvements over subword models.",
"Multitask learning on multiple levels has been previously explored in the literature, mainly in the context of CTC (Sanabria and Metze, 2018; Krishna et al., 2018; Ueno et al., 2018).",
"A mix of seq2seq and CTC approaches combine word and character level (Kremer et al., 2018; Ueno et al., 2018) or utilize explicit phonetic information (Toshniwal et al., 2017; Sanabria and Metze, 2018).",
"Modern ASR systems rely on end-to-end, alignment free neural architectures, i.e. CTC (Graves et al., 2006) or sequence to sequence models (Graves et al., 2013; Zhang et al., 2017).",
"The use of attention mechanisms significantly improve results in (Chorowski et al., 2015) and (Chan et al., 2016).",
"Recently, the success of transformer architectures for NLP tasks (Vaswani et al., 2017; Devlin et al., 2019; Dai et al., 2019) has motivated speech researchers to investigate their efficacy in end-to-end ASR (Karita et al., 2019b).",
"Zhou et.",
"al., apply an end-to-end transformer architecture for Mandarin Chinese ASR (Zhou et al., 2018).",
"Speech-Transformer extends the scaled dot-product attention mechanism to 2 D and achieves competitive results for character level recognition (Dong et al., 2018; Karita et al., 2019a).",
"Pham et.",
"al. introduce the idea of stochastically deactivating layers during training to achieve a very deep model (Pham et al., 2019).",
"A major challenge of the transformer architecture is the quadratic memory complexity as a function of the input sequence length.",
"Most architectures employ consecutive feature stacking (Pham et al., 2019) or CNN preprocessing (Dong et al., 2018; Karita et al., 2019b) to downsample input feature vectors.",
"Mohamed et al. (2019) use a VGG-based input network to downsample the input sequence and achieve learnable positional embeddings.",
"Multimodal grounding for ASR systems has been explored in (Caglayan et al., 2019), where a pretrained RNN-based ASR model is finetuned with visual information through Visual Adaptive Training.",
"Sterpu et al. (2018) propose a seq2seq model based on RNNs for lip-reading that performs cross-modal alignment of face tracking and audio features through an attention mechanism.",
"Furthermore, Hsu et al. (2019) use a weakly supervised semantic alignment criterion to improve ASR results when visual information is present.",
"Multimodal extensions of the transformer architecture have also been explored.",
"These extensions mainly fuse visual and language modalities in the fields of Multimodal Translation and Image Captioning.",
"Most approaches focus on using the scaled dot-product attention layer for multimodal fusion and cross-modal mapping.",
"Afouras et al. (2018) present a transformer model for AV-ASR targeted for lipreading in the wild tasks.",
"It uses a self attention block to encode the audio and visual dimension independently.",
"A decoder individually attends to the audio and video modalities producing character transcriptions.",
"In comparison our study uses the video features to provide contextual information to our ASR.",
"Libovick`y et al. (2018) employ two encoder networks for the textual and visual modalities and propose four methods of using the decoder attention layer for multimodal fusion, with hierarchical fusion yielding the best results.",
"Yu et al. (2019) propose an encoder variant to fuse deep, multi-view image features and use them to produce image captions in the decoder.",
"Le et al. (2019) use cascaded multimodal attention layers to fuse visual information and dialog history for a multimodal dialogue system.",
"Tsai et al. (2019) present Multimodal Transformers, relying on a deep pairwise cascade of cross-modal attention mechanisms to map between modalities for multimodal sentiment analysis.",
"In relation to the previous studies, the main contributions of this study are",
"a) a fusion mechanism for audio and visual modalities based on the crossmodal scaled-dot product attention,",
"b) an end to end training procedure for multimodal grounding in ASR and",
"c) the use of a multiresolution training scheme for character and subword level recognition in a seq2seq setting without relying on explicit phonetic information.",
"We evaluate our system in the 300 hour subset of the How2 database (Sanabria et al., 2018), achieving relative gains up to 3.76% with the addition of visual information.",
"Further we show relative gains of 18% with the multiresolution loss.",
"Our results are comparable to state-of-the-art ASR performance on this database.",
"Our transformer architecture uses two transformer encoders to individually process acoustic and visual information (Fig. 1).",
"Audio frames are fed to the first set of encoder layers.",
"We denote the space of the encoded audio features as the audio space A .",
"Similarly, video features are projected to the video space V using the second encoder network.",
"Features from audio and visual space are passed through a tied feed forward layer that projects them into a common space before passing them to their individual encoder layers respectively.",
"This tied embedding layer is important for fusion as it helps align the semantic audio and video spaces.",
"We then use a cross-modal attention layer that maps projected video representations to the projected audio space (Section 2.1).",
"The outputs of this layer are added to the original audio features using a learnable parameter to weigh their contributions.",
"The fused features are then fed into the decoder stack followed by dense layers to generate character and subword outputs.",
"For multiresolution predictions (Section 2.2), we use a common decoder for both character and subword level predictions, followed by a dense output layer for each prediction.",
"This reduces the model parameters and enhances the regularization effect of multitask learning.",
"Scaled dot-product attention operates by constructing three matrices, K , V and Q from sequences of inputs.",
"K and V may be considered keys and values in a soft dictionary, while Q is a query that contextualizes the attention weights.",
"The attention mechanism is described in Eq.",
"1, where denotes D o t p r o du c t a tt e n t i o n Encoder Layer Encoder Layer Encoder Layer N x Decoder Layer Decoder Layer Decoder Layer M x Audio frames Subwordprediction V i d e o f r a m e s K V QV i d e o E n c o d e r Down-sampling !",
"The case where K , V and Q are constructed using the same input sequence consists a self-attention mechanism.",
"We are interested in crossmodal attention, where K and V are constructed using inputs from one modality M 1 , video in our case (Fig. 1) and Q using another modality M 2 , audio.",
"This configuration as an effective way to map features from M 1 to M 2 (Tsai et al., 2019).",
"Note, that such a configuration is used in the decoder layer of the original transformer architecture (Vaswani et al., 2017) where targets are attended based on the encoder outputs.",
"We propose the use of a multitask training scheme where the model predicts both character and subword level transcriptions.",
"We jointly optimize the model using the weighted sum of character and subword level loss, as in Eq.",
"2: L = L subword + (1 ) L character (2) where is a hyperparameter that controls the importance of each task.",
"The intuition for this stems from the reasoning that character and subword level models perform different kinds of mistakes.",
"For character prediction, the model tends to predict words that sound phonetically similar to the ground truths, but are syntactically disjoint with the rest of the sentence.",
"Subword prediction, yields more syntactically correct results, but rare words tend to be broken down to more common words that sound similar but are semantically irrelevant.",
"For example, character level prediction may turn old-fashioned into old-fashioning , while subword level turns the sentence ukuleles are different to you go release are different .",
"When combining the losses, subword prediction, which shows superior performance is kept as the preliminary output, while the character prediction is used as an auxiliary task for regularization.",
"We conduct our experiments on the How2 instructional videos database (Sanabria et al., 2018).",
"The dataset consists of 300 hours of instructional videos from the YouTube platform.",
"These videos depict people showcasing particular skills and have high variation in video/audio quality, camera angles and duration.",
"The transcriptions are mined from the YouTube subtitles, which contain a mix of automatically generated and human annotated transcriptions.",
"Audio is encoded using 40 mel-filterbank coefficients and 3 pitch features with a frame size Input handling Recognition level WER Filtering Character 33 .",
"of 10 ms, yielding 43 -dimensional feature vectors.",
"The final samples are segments of the original videos, obtained using word-level alignment.",
"We follow the video representation of the original paper (Caglayan et al., 2019), where a 3 D ResNeXt-101 architecture, pretrained on action recognition, is used to extract 2048 D features (Hara et al., 2018).",
"Video features are average pooled over the video frames yielding a single feature vector.",
"For our experiments, we use the train, development and test splits proposed by (Sanabria et al., 2018), which have sizes 298 .",
"2 hours, 3 .",
"2 hours and 3 .",
"7 hours respectively.",
"Our model consists of 6 encoder layers and 4 decoder layers.",
"We use transformer dimension 480 , intermediate ReLU layer size 1920 and 0 .",
"2 dropout.",
"All attention layers have 6 attention heads.",
"The model is trained using Adam optimizer with learning rate 10 3 and 8000 warmup steps.",
"We employ label smoothing of 0 .",
"1 .",
"We weigh the multitask loss with = 0 .",
"5 which gives the best performance.",
"A coarse search was performed for tuning all hyperparameters over the development set.",
"For character-level prediction, we extract 41 graphemes from the transcripts.",
"For subword-level prediction, we train a SentencePiece tokenizer (Kudo and Richardson, 2018) over the train set transcriptions using byte-pair encoding and vocabulary size 1200 .",
"For decoding we use beam search with beam size 5 and length normalization parameter 0 .",
"7 .",
"We train models for up to 200 epochs and the model achieving the best loss is selected using early stopping.",
"Any tuning of the original architecture is performed on the development split.",
"No language model or ensemble decoding is used in the output.",
"memory complexity as a function of the input sequence length.",
"This issue is particularly prevalent in ASR tasks, with large input sequences.",
"We explore three simple approaches to work around this limitation.",
"First, we filter out large input sequences ( x > 15 s ), leading to loss of 100 hours of data.",
"Second we, chunk the input samples to smaller sequences, using forced-alignment with a conventional DNN-HMM model to find pauses to split the input and the transcriptions.",
"Finally, we stack 4 consecutive input frames into a single feature vector, thus reducing the input length by 4 .",
"Note that this only reshapes the input data as the dimension of our input is increased by the stacking process 1 .",
"Results for the downsampling techniques for character and subword level predictions are summarized in Table 1.",
"We observe that subword-level model performs better than the character level (upto 10% relative) in all settings.",
"This can be attributed to the smaller number of decoding steps needed for the subword model, where error accumulation is smaller.",
"Furthermore, we see that the naive filtering of large sequences yields to underperforming systems due to the large data loss.",
"Additionally, we see that frame stacking has superior performance to chunking.",
"This is not surprising as splitting the input samples to smaller chunks leads to the loss of contextual information which is preserved with frame stacking.",
"We evaluate the proposed multiresolution training technique with the frame stacking technique, observing a significant improvement(18.3%) in the final WER.",
"We thus observe that predicting finer resolutions as an auxiliary task can be used as an effective means of regularization for this sequence to sequence speech recognition task.",
"Furthermore, we have empirically observed that when training in multiple resolutions, models can converge around 50% faster than single resolution models.",
"Next, we evaluate relative performance improvement obtained from utilizing the visual features (Table 2).",
"We observe that incorporating visual information improves ASR results.",
"Our AV-ASR system yields gains > 3% over audio only models for both subword and multiresolution predictions.",
"Finally, we observe that while the Listen, Attend and Spell-based architecture of (Caglayan et al., 2019) is slightly stronger than the transformer model, the gains from adding visual information 1 We tried to use the convolutional architecture from (Mo-hamed et al., 2019), but it failed to converge in our experiments, possibly due to lack of data Features Level WER over audio Audio Subword 26 .",
"It is important to note that our models are trained end-to-end with both audio and video features.",
"An important question for real-world deployment of multimodal ASR systems is their performance when the visual modality is absent.",
"Ideally, a robust system satisfactorily performs when the user's camera is off or in low light conditions.",
"We evaluate our AV-ASR systems in the absence of visual data with the following experiments -",
"a) replace visual feature vectors by zeros",
"b) initialize visual features with gaussian noise with standard deviation 0 .",
"2",
"c) tweak the value to 0 on inference, gating the visual features completely.",
"Table 3 shows the results for the different experiments.",
"Results indicate gating visual inputs works better than zeroing them out.",
"Adding a gaussian noise performs best which again indicates the limited availability of data.",
"Overall, in the absence of visual information, without retraining, the AV-ASR model relatively worsens by 6% compared to audio only models.",
"This paper explores the applicability of the transformer architecture for multimodal grounding in ASR.",
"Our proposed framework uses a crossmodal dot-product attention to map visual features to audio feature space.",
"Audio and visual features are then combined with a scalar additive fusion and used to predict character as well as subword transcriptions.",
"We employ a novel multitask loss that combines the subword level and character losses.",
"Results on the How2 database show that",
"a) multiresolution losses regularizes our model producing significant gains in WER over character level and subword level losses individually",
"b) Adding visual information results in relative gains of 3.76% over audio model's results validating our model.",
"Due to large memory requirements of the attention mechanism, we apply aggressive preprocessing to shorten the input sequences, which may hurt model performance.",
"In the future, we plan to alleviate this by incorporating ideas from sparse transformer variants (Kitaev et al., 2020; Child et al., 2019).",
"Furthermore, we will experiment with more ellaborate, attention-based fusion mechanisms.",
"Finally, we will evaluate the multiresolution loss on larger datasets to analyze it's regularizing effects."
] | [
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"result",
"result",
"method",
"abstain",
"method",
"method"
] |
[
"We propose a graph-based method to tackle the dependency tree linearization task.",
"We formulate the task as a Traveling Salesman Problem (TSP), and use a biaffine attention model to calculate the edge costs.",
"We facilitate the decoding by solving the TSP for each subtree and combining the solution into a projective tree.",
"We then design a transition system as post-processing, inspired by non-projective transition-based parsing, to obtain non-projective sentences.",
"Our proposed method outperforms the state-of-the-art linearizer while being 10 times faster in training and decoding.",
"Surface realization is the task of generating a sentence from a syntactic or semantic representation.",
"In several shared tasks (Belz et al., 2011; Mille et al., 2018, 2019), the input representations are unordered dependency trees.",
"The state-of-the-art system (Yu et al., 2019a) in the Surface Realization Shared Task 2019 (SR'19) takes a pipeline approach, where the first step is linearization, namely ordering the tokens in the dependency tree.",
"They use a Tree-LSTM to encode each token with awareness of the whole tree, then apply the divide-and-conquer strategy to split the full tree into subtrees and find the optimal order for each subtree using beam search.",
"Finally, the linearized subtrees are combined into a full projective tree.",
"The general strategy is adapted from Bohnet et al. (2010).",
"In this work, we tackle linearization decoding in a different way, by casting it as a Traveling Salesman Problem (TSP).",
"Knight (1999) first formulated the word ordering of the target language in word-based machine translation as a TSP, where the words are the nodes to traverse, and the log probabilities of the bigrams are the edge costs.",
"Several works have followed this formulation.",
"Among others, Zaslavskiy et al. (2009) formulate the word ordering in phrase-based machine translation as a TSP, and show that it achieves better performance and speed than beam search decoding with the same bigram language model.",
"Horvat and Byrne (2014) explore higher-order n -gram language models for TSP-based word ordering, which transforms into a much larger TSP graph.",
"All of the aforementioned works operate on a bag of words without syntax , which is a TSP graph of non-trivial size with little information about the internal structure.",
"Much effort has been put into incorporating more powerful decoding algorithms such as Integer Programming (Germann et al., 2001) and Dynamic Programming (Tillmann and Ney, 2003).",
"Our work differs from the previous work on TSP-based word ordering in several aspects.",
"(1) Linearization is a special case of word ordering with syntax , where we can use a tree-structured encoder to provide better representation of the tokens.",
"(2) We adopt the divide-and-conquer strategy to break down the full tree into subtrees and order each subtree separately, which is faster and more reliable with an approximate decoder.",
"(3) We apply deep biaffine attention (Dozat and Manning, 2016), which has yielded great improvements in dependency parsing, and reinterpret it as a bigram language model to compute edge costs for the TSP.",
"In this paper, we solve the dependency tree linearization task as a TSP.",
"With the help of Tree-LSTM to encode the tree and biaffine attention as a bigram language model, we can use a greedy TSP solver to linearize the tree effectively.",
"Furthermore, the divide-and-conquer strategy greatly reduces the search space but introduces the projectivity restriction, which we remedy with a transition-based reordering system.",
"As a result, the proposed linearizer outperforms the previous state-of-the-art model both in quality and speed.",
"We follow the idea in Knight (1999) to treat linearization as a TSP.",
"Under the TSP formulation, we need to calculate the cost from every node i to every other node j , which can be interpreted as the log likelihood of the bigram ( i, j ) .",
"We use the biaffine attention model (Dozat and Manning, 2016) to obtain the costs, and use an off-the-shelf TSP solver, OR-Tools 1 , to decode the TSP.",
"To facilitate the approximate decoding of this NP-hard problem, we follow the divide-and-conquer strategy in Bohnet et al. (2010) of splitting the tree into subtrees, and decoding each subtree separately.",
"There are pros and cons of this approach: on the one hand, the search space is much smaller so that a greedy TSP solver can find good solutions in reasonable time; on the other hand, it restricts the output to be projective, i.e., nonprojective sentences can never be produced.",
"To remedy the projectivity restriction, we introduce a post-processing step using a simple transition system with only two transitions, swap and shift , to sort the linearized projective trees into potentially non-projective ones.",
"This system is an extention of our previous work (Yu et al., 2019a).",
"We use the same encoder and hyperparameters (see Appendix A) and only experiment with the decoders.",
"The code is available at the first author's web page.",
"2 As an overview, Figure 1 illustrates our pipeline for the linearization task, with an unordered dependency tree as input, and a linearized sentence as output, which is potentially non-projective, i.e., with crossing dependency arcs.",
"To solve the task, we (1) divide the tree into subtrees, (2) linearize each subtree by solving a TSP, (3) combine the linearized subtrees into an projective tree, and (4) use the swap system to obtain a non-projective tree.",
"To formulate the linearization task as a TSP, we use a node to represent each token in the tree, and an extra node with index 0 as both the origin and destination, which is interpreted as the boundaries",
"in the output sequence.",
"Figure 2 demonstrates a decoded TSP graph from its edge cost matrix, where the output sequence is < s > let us get < s > .",
"We use the routing solver of OR-Tools to solve the TSP given the edge costs.",
"It is a generic optimization library, unlike the more specialized and optimized TSP solvers such as Concorde (Apple-gate et al., 2006), but it enables imposing extra word order constraints described in 2.6, and can be easily extended to other constraints.",
"Among all the first solution strategies provided in OR-Tools, we found GLOBAL CHEAPEST ARC to perform the best, which selects a valid edge with the lowest cost at every step until a full path is found.",
"For the sake of efficiency, we use the greedy metaheuristic GREEDY DESCENT to refine the first solutions, which converges to local optima in very short time.",
"In practice, it works extremely well in combination with the greedy training described in 2.4.",
"More advanced metaheuristics such as GUIDED LOCAL SEARCH (Voudouris and Tsang, 1999) could find better solutions, but also require much more decoding time, it is thus less practical for real-time generation tasks.",
"We do not use it in the default setting, but include it in the analysis to demonstrate the effectiveness of the training.",
"We use the biaffine attention model (Dozat and Manning, 2016) to calculate the TSP edge costs.",
"First, we obtain the representation for each token by concatenating the embeddings of the features, then encode the tree information with a bidirectional Tree-LSTM, as described in Yu et al. (2019b): v i = v ( lem ) i v ( pos ) i v ( dep ) i v ( mor ) i (1) x i = Tree-LSTM ( v 0 ... v n ) i (2) The parameters of the decoder consist of two Multi-Layer Perceptrons (MLP ( fr ) and MLP ( to ) ) and a biaffine matrix ( W ).",
"We use the MLPs to obtain different views of the token representation as the first and second token in the bigram: h ( fr ) i = MLP ( fr ) ( x i ) (3) h ( to ) i = MLP ( to ) ( x i ) (4) We then apply the biaffine transformation on the vectors of the first word h ( fr ) i and the second word h ( to ) j to compute the score s i,j of each bigram ( i, j ) , where W is the weight matrix of size ( k + 1) ( k + 1) , and k is the size of h fri and h toj : s i,j = ( h ( fr ) i 1) W ( h ( to ) j 1) (cid:62) (5) In the actual computation, we parallelize Equation 5, where S is the output score matrix of size n n , and n is the number of tokens: S = ( H ( fr ) 1 ) W ( H ( to ) 1 ) (cid:62) (6) Finally, we turn the score matrix into a nonnegative cost matrix for the TSP solver: C = max ( S ) S (7) Our model is inspired by the biaffine dependency parser of Dozat and Manning (2016), but stands in contrast in many aspects.",
"They use a bidirectional LSTM to encode the sequential information of the tokens, and the biaffine attention itself does not model the sequence.",
"Each cell s i,j in their output matrix S is interpreted as the score of a dependency arc ( i, j ) .",
"They use a Maximal Spanning Tree algorithm to obtain a tree that maximizes the total score of the arcs in the tree.",
"In the case of linearization, our input and output are the opposite to theirs.",
"The input has no sequential but syntactic information, encoded by the bidirectional Tree-LSTM.",
"Each cell s i,j in the output matrix S is interpreted as the score of the bigram ( i, j ) .",
"We use a TSP solver to obtain a traversal of the tokens by minimizing the total edge costs, i.e., maximizing the total bigram scores.",
"We use a greedy training objective to train the biaffine scoring model, namely we enforce the score of each bigram ( i, j ) in the correct sequence z to be higher than any other bigrams in the same row or in the same column in the matrix by a margin:",
"L = (cid:88) ( i,j ) z ( (cid:88) j (cid:48) (cid:54) = j max(0 , 1 + s i,j (cid:48) s i,j ) + (cid:88) i (cid:48) (cid:54) = i max(0 , 1 + s i (cid:48) ,j s i,j )) (8)",
"This objective aims to maximizing the score of each correct bigram ( i, j ) in both directions, essentially log P ( j | i ) and log P ( i | j ) , where the cells in the same row corresponds to all possible tokens following i , and the cells in the column corresponds to all possible tokens preceding j .",
"The objective is greedy in the sense that it updates more than necessary to decode the correct path.",
"We contrast it to the structured loss in most graph-based dependency parsers (McDonald et al., 2005; Kiperwasser and Goldberg, 2016), which updates the scores of the correct path z against the highest scoring incorrect path z (cid:48) : L (cid:48) = max(0 , 1+ max z (cid:48) (cid:54) = z (cid:88) ( i (cid:48) ,j (cid:48) ) z (cid:48) s i (cid:48) ,j (cid:48) (cid:88) ( i,j ) z s i,j ) (9) The greedy objective for the TSP has two main advantages: (1) it does not require decoding during training, which saves training time; (2) it pushes the scores of each correct bigram to be the highest in the row and the column, which facilitates the greedy solver ( GLOBAL CHEAPEST ARC ) to find a good initial solution.",
"In fact, if the objective reaches 0, the greedy solver is guaranteed to find the optimal solution, since at each step, the cheapest arc is always a correct bigram instead of any other bigram in the same row or column.",
"If we directly linearize the full tree, the output is naturally unrestricted, i.e., possibly non-projective.",
"However, when we linearize each subtree separately in order to reduce the search space, as in the proposed method, the reconstructed output is restricted to be projective (Bohnet et al., 2010).",
"To relax the projectivity restriction, we design a transition system to reorder projective trees into non-projective trees as a post-processing step, inspired by Nivre (2009) but working in the opposite way.",
"It is essentially a reduced version of their transition system, removing the attachment transitions and keeping only swap and shift .",
"In the transition system (as shown in Table 1), a configuration consists of a stack , which is initially empty, and a buffer , which initially holds all input tokens.",
"The shift transition moves the front of the buffer to the top of the stack, and the swap transition moves the top of the stack back to the second place in the buffer.",
"When all tokens are moved from the buffer to the stack, the procedure terminates.",
"To prevent the model predicting infinite shift swap loops, we only allow swap if the initial index of the top of the stack is smaller than the front of the buffer.",
"The worst-case complexity of the sorting is quadratic to the number of tokens, however, since trees in natural language mostly only contain very few non-projective arcs, the transition system works in expected linear time, as shown in Nivre (2009).",
"We then implement a model to predict the transitions given the configurations.",
"We use two LSTMs to dynamically encode the stack from left to right ( LSTM ) and the buffer from right to left ( LSTM ).",
"We then concatenate the two outputs and use a MLP to predict the next transition.",
"When a shift is performed, we update LSTM state with the vector of the shifted token as the new stack representation, and the new buffer representation is the LSTM output of the new front token; when a swap is performed, the new stack representation is the LSTM output of the new top token, and the new buffer representation is recalculated by feeding the now second and first token in the buffer to the LSTM state of the third token.",
"Figure 3 illustrates the model under the transition system, where the arrows to the right represent LSTM , the arrows to the left represent LSTM , and the arrows between the stack and the buffer represent the MLP.",
"After each transition, little computation is needed to represent the new stack and buffer, marked with the red dashed line.",
"The example illustrates the steps to modify the configuration [1 2 | 3 4 5 6] into [1 2 | 4 3 5 6].",
"Note that the transition system is sound and complete, which means there is always a sequence of transitions to sort any sequence into any reordering.",
"In other words, the transition system on its own could also linearize the tree by taking a random permutation as input.",
"However, due to the noisy input order, it is very difficult for the LSTM model to learn good representations for the stack and buffer and predict correct transitions (cf.",
"Vinyals et al. (2015) for the discussion on encoding a set with an LSTM).",
"In contrast, when we only use this system to reorder a linearized projective tree as postprocessing, where input sequence is meaningful and consistent, it is much easier to learn.",
"Using the swap system as a post-processing step stands in contrast to Bohnet et al. (2012), where they pre-process the tree by lifting the arcs so that the correct word order could form a projective tree.",
"These two approaches draw inspiration from the non-projective parsing in Nivre (2009) and the pseudo-projective parsing in Nivre and Nilsson (2005) respectively.",
"We argue that our post-processing approach is more convenient since there is no need to change the syntactic annotation in the original tree, and it is much easier to evaluate the effectiveness of the sorting model.",
"In the SR'19 dataset, some relative word order information is given, which indicates e.g. the order of the conjuncts in the coordination.",
"Since the order in a coordination is generally arbitrary (at least syntactically), it will thus introduce randomness in the single reference evaluation.",
"We believe that using such information leads to more accurate evaluation, and therefore by default always use these constraints in the comparison.",
"The constraints does not specify direct adjacency, but only general precedence relations.",
"For example, to order the nodes { 1 , 2 , 3 , 4 , 5 } with the constraint 2 3 1 and 4 5 , a valid sequence could be [2 , 4 , 3 , 5 , 1] , while [4 , 5 , 2 , 1 , 3] is invalid.",
"To incorporate such constraints in the solver, we introduce an additional variable associated with each node in the routing problem, where the value is incremented by 1 after each step.",
"In other words, if a node is visited in the n -th step, then the associated variable will have the value n .",
"Then we add inequality constraints about those variables that are specified in the word order information into the routing problem and let the solver find the path that satisfies the constraints.",
"In practice, the solver can always find a solution to linearize the subtrees with the constraints.",
"However, it sometimes cannot find any solution to directly linearize the full tree within the time limit (1-10% of the cases depending on the treebank), because there are more nodes and more constraints in the full tree.",
"In this case, we simply remove the constraints and rerun the solver.",
"We use the datasets from the Surface Realization 2019 Shared Task (Mille et al., 2019) in our experiments, which includes 11 languages in 20 treebanks from the Universal Dependencies (Nivre et al., 2016).",
"We experiment on the shallow track, i.e., all tokens in the output are present in the input tree.",
"We only report the BLEU score (Pap-ineni et al., 2002) as the evaluation metric, since we mostly evaluate on the lemma level, where the metrics involving word forms are irrelevant.",
"As baselines for the final evaluation, we use several available linearizers by Bohnet et al. (2010) (B10), Puduppully et al. (2016) (P16) and Yu et al. (2019a) (Y19).",
"B10, P16 and our linearizer all use the same inflection and contraction models, trained with the same hyperparameters as in Y19, and we compare to the reported shared task results of Y19.",
"Table 2 shows the performance of different linearizers, where beam is the baseline beam-search linearizer as in Yu et al. (2019b) with default hyperparameters, full is the TSP decoder on the full tree level, sub is the TSP decoder on the subtree level, and +swap is sub post-processed with reordering.",
"We test the decoders under two conditions: without word order constraints ( -constraints ) and with word order constraints ( +constraints ).",
"Columns 2-9 show the BLEU scores on lemmata on the development set, and in the last 4 columns are the BLEU scores on inflected and contracted word forms on the test sets with the official evaluation script of SR'19.",
"While both only generating projective sentences, the sub decoder outperforms the baseline beam decoder by 0.6 BLEU points without word order constraints and 0.3 BLEU points with constraints.",
"Note that the beam search decoder uses an LSTM to score the sequences, which is essentially an unlimited language model, while the TSP decoders only uses a bigram language model.",
"While comparing the two TSP decoders, sub performs on average higher than full , while full performs better on treebanks with more nonprojective sentences, since it is not restricted.",
"Without word order constraints, full even slightly outperforms sub .",
"The reason that full performs relatively worse with constraints is that it sometimes has to remove the constraints to find a solution.",
"The sub+swap decoder eliminates the projectivity restriction, closing the performance gap to full for non-projective treebanks, and it does not hurt the performance on the projective treebanks.",
"In the last four columns we compare our sub+swap linearizer on the test set for the full pipeline with three external baselines, including the best system in the SR'19 shared task (Y19).",
"Our system outperforms B10 and P16 by a large margin of 7 and 11 BLEU points.",
"Note that their off-the-shelf systems are not designed to use word order constraints and morphological tags, which would account for a difference of about 3 points (see the effect of constraints in Table 2 and feature ablation in 3.7).",
"Under the same condition, our system outperforms Y19 on most of the treebanks and on average by 0.7, because of (1) a better projective decoder and (2) the non-projective postprocessing step.",
"Furthermore, our system is much faster than Y19, see the comparison in 3.8.",
"To illustrate the characteristics of different TSP decoders, we analyze their performance on sentences with different lengths and percentages of non-projective arcs.",
"Figure 4a shows the BLEU score of different TSP decoders with respect to the sentence length, averaged over all sentences in the development sets.",
"The sub model performs quite stably across the sentences with different lengths, while the full model performs much worse on longer sentences.",
"3 This confirms our hypothesis that the divide-and-conquer strategy of the subtree decoder can reduce search errors for large TSP problems.",
"Postprocessing with the swap system ( tsp+swap ) consistently improvements tsp across all sentence lengths.",
"3 Note that the very short sentences have even lower BLEU score, this is caused by the smoothing function in the BLEU evaluation, which gives a low score even for exact match.",
"Figure 4b shows the BLEU score with respect to the percentage of non-projective arcs in the gold tree, averaged over all sentences in the development sets.",
"Clearly, sub performs lower than full for sentences with more non-projective arcs due to the projectivity restriction, while the overall BLEU score of sub is higher, since 99% of the arcs and 90% of the sentences are projective.",
"With the help of the swap system, sub+swap closes the gap to full on the non-projective sentences.",
"In sum, the sub+swap model shows clear advantages over the other models since it is less prone to search error due to the reduced TSP size and free from the projectivity restriction, it is thus the best of both worlds.",
"As described in 2.4, we use a greedy training objective to train the biaffine model, namely we calculate a hinge loss of the correct bigram against",
"all other bigrams in the same row and in the same column.",
"This is in contrast to the structured loss, which is calculated between the gold sequence and the predicted sequence.",
"This contrast is similar to the two different training objective in Dozat and Manning (2016) against Kiperwasser and Goldberg (2016) for graph-based dependency parsing.",
"We experiment with the structured loss, following Kiperwasser and Goldberg (2016), where we also apply loss augmented inference (Taskar et al., 2005), i.e., adding a constant for all the bigrams that are not in the gold sequence.",
"We also experiment with only updating against the row or the column, which could be thought of as the bigram language model only in one direction, while updating against both is training a bidirectional language model.",
"Table 3 shows the results, where we train the sub and full models with different objectives: row+col is the default one, row and col only update against the row or the column, and path updates the gold path against the predicted path.",
"The results are clear: for both sub and full models, training on both directions is better than training on one direction, and the greedy objective is better than the structured objective.",
"The gap between the bidirectional greedy objective and others is larger in full than in sub , since full solves a larger TSP, where the greedy training is even more important for effective greedy decoding.",
"As discussed in 2.5, we use the transition system only for post-processing the linearized projective sentences, although the transition system itself is theoretically able to sort a random sequence.",
"The question is whether the model is able to learn to handle the random input.",
"We experiment with different training and testing scenarios of the sorting models.",
"They are trained in three scenarios, namely to sort (1) gold projective sentences into correct (potentially non-projective) sentences, noted as gold ; (2) predicted projective sentences, where the sentences are obtained by 5-fold jackknifing on the training set using the sub model, noted as pred ; and (3) random sequences, where the input is always shuffled during training, noted as rand .",
"The models are then applied to sort (1) gold projective sentences ( gold ); (2) predicted projective sentences from the sub model ( pred ); and (3) random permutation of the tokens ( rand ).",
"In the main experiment, the way we use the transition system corresponds to the gold-pred scenario.",
"Table 4 shows the BLEU scores on the development set averaged over all treebanks.",
"We also show the change of BLEU scores from the input to the output ( BLEU ) in different scenarios.",
"First, the gold model improves the input in all scenarios, especially the gold-pred scenario used in the main experiment brings 0.47 BLEU points improvement.",
"Interestingly, the pred model from jackknifing does not improve the performance, while usually training on the data with erroneous prediction should prevent overfitting to the gold data.",
"We conjecture the reason could be that the model is overfitting to fixing the particular errors in the predicted training data instead of learning to produce non-projective sentences.",
"Purely using the transition system for linearization ( rand-rand ) works to some extent, but performs lower than the baseline by a large gap for several reasons.",
"First, it imposes an arbitrary order in the input which is a suboptimal way to represent a bag of word.",
"Second, learning to sort random permutation requires a lot more training instances to generalize.",
"Finally, it takes on average O ( n 2 ) steps, which also increases the chance of error propagation.",
"In contrast, sorting a projective tree does not have any of these disadvantages.",
"Generally, when the training and testing scenarios are not aligned, the performance is always beam sub full swap +tree 85.55 85.85 85.03 71.36 -tree 78.31 74.17 35.82 19.74 -7.24 -11.68 -49.21 -51.62 Table 5: Comparing the decoders with and without the Tree-LSTM encoder.",
"worse due to the mismatched bias of transitions.",
"For example, gold-rand barely changes the random input since it mostly predicts shift , and rand-gold predicts swap too often such that the outcome is even worse than the input sentence.",
"The success of the simple bigram language model and greedy TSP decoding relies heavily on the Tree-LSTM encoding.",
"To demonstrate its importance, we remove the tree encoding for each linearizer, i.e., they only receive the token level features as the representation.",
"We experiment with four linearizers: apart from beam , sub and full as in the main experiments, we also include the swap linearizer that is trained to sort random input sequences.",
"The condition +tree is the default case, while in -tree we do not use the tree encoding.",
"Note that in the latter case, beam and sub still use the tree information to split the tree into subtrees, while full and swap do not use the tree information in any way.",
"The results are shown in Table",
"5. Without the tree encoder, the performance drop in sub is larger than beam , which suggests that sub is more dependent on the good representation of the Tree-LSTM encoder, since its scoring function is essentially a bigram language model, which would be much less expressive than the LSTM in beam if syntax is absent.",
"This result draws an interesting analogy to the fact that first-order graph-based dependency parsers (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2016) also outperform the transition-based counterparts with a simpler scoring model but without error propagation.",
"The much larger drop in full and swap emphasizes the importance of the inductive bias introduced by the divide-and-conquer strategy, since natural languages are predominantly projective.",
"Generally, the syntax ablation experiment highlights the crucial difference between our work and the original idea by Knight (1999), namely we use contextualized bigrams in our TSP model, which is much more expressive than the vanilla version.",
"Consider the subtree with the words this and with in Figure 1, a vanilla bigram model would calculate a much higher score for with this than this with, while a contextualized bigram model could be aware that it is part of a rather special syntactic construction in English.",
"To understand how much each feature contributes to the linearization task, we perform ablation experiments on the selection of features.",
"In the default setting of our models, we use the lemma, UPOS, dependency relation, and morphological tags to encode each token.",
"We experiment with turning off each feature for the sub linearizer, as well as only using one feature, and the results are in Table",
"6. The results suggest that the UPOS tags and morphological tags do not provide much additional information and could be dropped if simplicity is desired.",
"In contrast, the lemmata and dependency relations are crucial to determine the word order, since the performance drops considerably without them.",
"By default, we use a greedy TSP solver, which already yields satisfactory performance.",
"We then make additional experiments with a more optimized metaheuristic (guided local search) to see if better performance can be gained in exchange for more decoding time.",
"With the guided local search, we set the search limit to 1 second or 100 solutions for each subtree, and 10 seconds or 1000 solutions for the full tree.",
"We also compare to the beam search linearizer with varying beam sizes from 1 to 64.",
"The results are shown in Figure 5, where the decoding time is measured on a single CPU core.",
"Generally, all greedy TSP solvers outperform the Pareto front of the beam search decoders.",
"The greedy solver performs almost as well as the optimized solver for the subtree TSP (85.85 vs. 85.91), while it performs clearly worse for the full tree 10 2 10 3 80 82 84 86 b-1 b-2 b-4 b-8 b-16 b-32 b-64 sub-greedy sub-guided sub-greedy-swap full-greedy full-guided Time [ms/sentence] BLEU s c o r e Figure 5: The speed (on a log scale) and BLEU score for different decoders, where the dots are projective decoders, and the crosses are non-projective decoders.",
"TSP (85.03 vs. 85.85).",
"This contrast again demonstrates that the divide-and-conquer strategy indeed greatly simplifies the problem for the greedy solver.",
"Post-processing with the swap system only slightly increase the decoding time (in total 50ms per sen-tence), but considerably improves the performance.",
"In this paper, we revisit the idea of treating word ordering as a TSP, but unlike the common bag-of-words scenario, the words have an underlying syntactic structure.",
"We demonstrate that with the Tree-LSTM encoder, the biaffine scoring model, the divide-and-conquer strategy, and a transition-based sorting system, we can linearize a dependency tree with high speed and quality and without the projectivity restriction.",
"We show with various ablation experiments that all of the components are crucial for the success of the TSP-based linearizer.",
"Our work emphasizes the importance of syntax in the word ordering task.",
"We discussed many connections and similarities between linearization and parsing.",
"We believe that quite generally, systems for solving one task can benefit from the other task's view on syntactic structure.",
"One possibility to capitalize on these synergies is to explore data augmentation methods to select beneficial extra training data in an unsupervised fashion.",
"This work was in part supported by funding from the Ministry of Science, Research and the Arts of the State of Baden-Wurttemberg (MWK), within the CLARIN-D research project."
] | [
"objective",
"method",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"abstain",
"other"
] |
[
"Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences.",
"Efforts thus far have focused on improving extraction accuracy but little is known about their explainability.",
"In this work we annotate a test set with ground-truth sentence-level explanations to evaluate the quality of explanations afforded by the relation extraction models.",
"We demonstrate that replacing the entity mentions in the sentences with their fine-grained entity types not only enhances extraction accuracy but also improves explanation.",
"We also propose to automatically generate distractor sentences to augment the bags and train the model to ignore the distractors.",
"Evaluations on the widely used FB-NYT dataset show that our methods achieve new state-of-the-art accuracy while improving model explainability.",
"Relation extraction with distant supervision associates a pair of entities with a bag of sentences, each containing mentions of both entities.",
"The bag is tagged with relations between the pair in a Knowledge Base (KB), without explicitly indicating which sentence(s) support the relation(s).",
"This method avoids the burden of manual annotations, but presents inherent ambiguity, creating challenges for learning.",
"To alleviate the impact of the irrelevant sentences many approaches have been proposed including models based on attention (Zeng et al., 2015; Lin et al., 2016; Liu et al., 2017; Luo et al., 2017; Du et al., 2018; Wang et al., 2018; Peng and Denilson, 2019; Bai and Ritter, 2019), approaches that use additional resources (Vashishth et al., 2018; Liu et al., 2018) and methods that utilize supervision data (Pershina et al., 2014; Angeli et al., 2014; Beltagy et al., 2019).",
"These studies primarily focus on improving relation extraction accuracy and little is known about whether the models are making right decision for the right reason or because of some irrelevant biases (Agrawal et al., 2016; Gururangan et al., 2018; Ghaeini et al., 2019).",
"This paper examines two strong baseline relation extraction models with several explanation mechanisms.",
"We manually annotated a test set from the widely used FB-NYT dataset with ground truth explanations to evaluate the quality of the explanation afforded by these models.",
"We also introduce two different methods for improving relation extraction.",
"First, we demonstrate that replacing the entity mentions with their fine-grained entity types for sentence representation leads to improvement in both the extract accuracy and model explainability.",
"Second, we augment the bags with automatically generated distractor sentences (i.e., sentences that contain no supporting information for the relation) and train the model to appropriately ignore the irrelevant information.",
"Our evaluation on the widely used FB-NYT dataset verifies that the proposed methods achieve the new state of the art for the extraction performance along with improved model explainability.",
"Given entity pair ( e i , e j ) , we form a bag B i,j = { s 1 , . . . s N ij } with N ij sentences that contain mentions of both entities and label it by the set of relations between e i and e j from the KB.",
"Neural models for relation extraction encode each sentences into a vector representation and a bag B i,j is thus represented by { x 1 , . . . x Nij } where x i R d .",
"Given a set of bags and the associated labels, the training objective is to learn a model that predicts the probability P ( r = k | B i,j ) that relation k exists between e i and e j based on B i,j , where k 1 . . . K and K is the total number of relations in the KB.",
"There are zero to multiple possible relation labels for each bag.",
"Importantly, only some sentences in the bag express any of the relations and the others are irrelevant (provide no information regarding the relations), but such sentences are not labeled.",
"We consider two baselines.",
"The first is DirectSup, a recent model achieving the state-of-the-art performance by utilizing auxiliary supervision (Beltagy et al., 2019).",
"The second baseline (CNNs+ATT) revamps the classic attention based method by Lin et al. (2016) but adopts the same sentence encoder as DirectSup for ease of comparisons.",
"In this work, we add a ReLU at the end of the sentence encoder (Beltagy et al., 2019) to produce positive sentence representations.",
"See (Beltagy et al., 2019) for detailed information regarding the sentence encoder.",
"DirectSup.",
"Given a bag of sentences, DirectSup encodes each sentence using CNNs with different filter sizes.",
"The outputs of the CNNs with different filter sizes are concatenated to produce the encoding of the sentence.",
"Given a bag B and the encoding of its sentences { x 1 , x 2 , ..., x N } , DirectSup assigns an importance weight for each sentence based on the output of a binary classifier learned from an additional direct supervision data in a multi-task manner.",
"Given a sentence encoding x n , the binary classifier provides a weight n [0 , 1] indicating the likelihood that x n expresses some form of relations in the KB.",
"As a result, for a bag B i,j , we have importance weights { 1 , . . . , N } .",
"It then produces a single bag representation as follows: x = Max-pool ( { 1 x 1 , . . . , n x N } ) (1) and the prediction for relation k is given by: P ( r = k | B ) = ( x r k + b k ) (2) where r k is an embedding of relation k , b k is a bias variable and is the Sigmoid function.",
"CNNs+ATT.",
"This model uses the same sentence encoder as DirectSup but differs in the attention mechanism used to decide sentence importance.",
"Specifically, it follows Lin et al. (2016) and computes the importance weights of the sentences in bag B with encodings { x 1 , . . . , x N } as follows: k,n = exp ( x n Aq k ) (cid:80) Ni =1 exp ( x i Aq k ) (3) where q k is a learned query vector associated with relation k and A is a diagonal matrix.",
"Given { k, 1 , ..., k,N } , we compute a bag representation specific for relation k by: x k = N (cid:88) n =1 k,n x n (4) and the prediction for relation k is given by: P ( r = k | B ) = ( x k r k + b k ) (5) where r k is relation k 's embedding and b k is the bias.",
"Entity embedding.",
"Prior work has demonstrated that incorporating entity embeddings into the relation extraction model leads to improved accuracy (Ji et al., 2017; Beltagy et al., 2019).",
"Here we also consider this strategy with the baseline models.",
"Specifically, let v i and v j be the entity embedding of e i and e j , we concatenate the bag representations x with v i v j and v i v j , where is element-wise product.",
"We then apply a linear project layer with ReLU to produce a new bag representation for final prediction with Eq.",
"2 and 5.",
"For any entity e i its embedding vector v i is obtained by concatenating the average of its skip-gram (Mikolov et al., 2013) word embeddings and the embeddings produced by Zhang et al. (2019) (produced by using TransE on Wikipedia factual tuples).",
"Training objective.",
"For all the models in this work we use the binary cross entropy loss function for training: l = (cid:88) B i,j K (cid:88) k =1 1 i,j,k log P ( r = k | B i,j )+ (1 1 i,j,k ) log (1 P ( r = k | B i,j )) (6) where 1 i,j,k is an indicator function that takes value 1 if relation k exists for bag B i,j .",
"The importance weights ( 's, aka attention), generated by the models can be interpreted as explanations.",
"However, recent studies (Ghaeini et al., 2018; Jain et al., 2019; Wiegreffe and Pinter, 2019) have questioned the validity of attention as a faithful explanation of model's behavior.",
"Thus we consider the following additional explanation mechanisms: Saliency.",
"Recent works show that a model's prediction can be explained by examining the input saliency, based on the gradient of the output w.r.t. the inputs (Simonyan et al., 2012; Ross et al., 2017; Ghaeini et al., 2019).",
"We define the saliency of sentence n for relation k , denoted by S x n ,k , as the L1 norm of the gradient of relation k logit o k with respect to x n",
".(Appendix. A.1).",
"Gradient input.",
"This is a commonly used measure for input attributions (Shrikumar et al., 2016; Selvaraju et al., 2019).",
"We will refer to this measure as GI x n ,k , computed as (cid:80) i x n [ i ] o k x n [ i ] .",
"Leave One Out (loo).",
"This measures the sensitivity of o k to the removal of a sentence.",
"We refer to this measure as loo x n ,k = ( o k o k, n ) , where o k, n is the new logit of relation k after removing sentence x n from its bag.",
"We propose two different approaches for improving relation extraction.",
"The first method we propose, introduces a subtle change to the representation of the sentences, which lead to higher performance and better explanation quality.",
"We further propose to automatically generate distractor sentences and train the model to appropriately ignore them.",
"Sentence representation.",
"Each sentence in a bag contains entity mentions m i and m j for entities e i and e j respectively.",
"In prior work m i and m j are kept unchanged (Lin et al., 2016; Beltagy et al., 2019).",
"We argue that when entity mentions are used to compute the sentence representation, they provide such rich information that the model may not need to look at the rest of the sentence to deduce a relation.",
"To ensure that our predictions are supported by appropriate sentences, we need to remove this effect.",
"We propose to replace the entity mentions with their Fine-Grained Entity Types (FGET) Ling and Weld (2012) to force the model to identify the relations through the sentences.",
"Learning from distractors.",
"Prior work studied learning from human provided rationales (Lei et al., 2016; Ross et al., 2017; Bao et al., 2018; Ghaeini et al., 2019) in order to improve model explainability.",
"However, human rationales are expensive to acquire.",
"In this work we propose to learn from automatically generated distractor sentences.",
"Let B i,j be a positive training bag (contains at least one relation) with entities ( e i , e j ) of FGET ( t i , t j ) .",
"Let R ij ( | R ij | > 1) be the set of annotated relations for B i,j .",
"For each k in R ij , we sample a distractor sentence s (cid:48) k from the set of sentences in the training set such that",
"1) it belongs to a bag whose FGET is ( t i , t j ) 2) the bag is not annotated with relation label k .",
"If s (cid:48) k is not found this way, we simply choose a random sentence from a random negative bag (bag with no relation).",
"Given s (cid:48) k , we replace its entity mentions with e i and e j (or t i and t j for FGET-based sentence representation) of a sentence in B i,j and add it to the bag, resulting in an augmented bag B (cid:48) i,j for relation k .",
"To learn from the augmented bags, we feed B (cid:48) i,j into the model and the goal is to lower the contribution of the distractor sentence in relation to the original sentences in the bag.",
"Specifically, we use GI to measure the sentence-level contribution and define the distractor loss for relation k as follows: l (cid:48) d,k = max (0 , + GI x (cid:48) k ,k max x B i,j GI x,k ) + | GI x (cid:48) k ,k | (7) where x (cid:48) k is the encoding of distractor sentence s (cid:48) k and is a hyper-parameter for margin.",
"The first term ensures that the contribution of the distractor is lower than the maximum contribution of all the sentences in the original bag and the second term reduces the absolute contribution of the distractor.",
"Although we use GI in Eq.7, other explanation measures such as saliency or the positive portion of the contributions can also be applied here.",
"Moreover a more advanced mechanism for generating distractors will likely lead to a higher performance.",
"We hence update the loss in Eq.",
"6 with: l m = l + l (cid:48) d (8) where l (cid:48) d = (cid:80) k l (cid:48) d,k and tradeoffs the regular learning loss with the distractor loss.",
"Dataset.",
"Similar to our baselines and prior work, we use the modified version of FB-NYT dataset.",
"The original FB-NYT dataset was built by Riedel et al. (2010) on New York Times articles which was aligned to Freebase facts.",
"It later was modified by Lin et al. (2016).",
"There are 52 relations in this dataset where place lived, captial, neighbor-hood of, natinality and location are the most frequent relations.",
"Tab.",
"1 shows the size of the modified dataset.",
"Setup and Training.",
"All models are implemented in PyTorch, trained with a Adam optimizer with learning rate 0.001 for a maximum of 30 epochs.",
"We use 300-d skip-gram (Mikolov et al., 2013) word embeddings and FGET embeddings and 5-d position embedding.",
"During training we freeze the word and entity embeddings.",
"All reported results are averaged over three different random runs.",
"We train on 90% of the training set and keep the remaining 10% for validation.",
"We select from the set { 0 .",
"01 , 0 .",
"1 , 1 .",
"0 , 10 .",
"0 , 100 .",
"0 } and set = 1 .",
"0 based on validation AUC and the margin is fixed at = 0 .",
"00001 .",
"Ground-truth explanations.",
"There are 1950 positive bags (6444 sentences) in the test split of FB-NYT.",
"For each pair of sentence-relation in a bag we annotate whether the sentence entails the relation or not.",
"Based on the annotations, we extract a set called expl-eval (see Appendix A.2 for details) including tuples of (bag-id, relation, positive sentence in bag, negative sentence in bag).",
"Each tuple provides a desired ordering of two sentences when measuring their importance to the model.",
"expl-eval is then used to compute the Kendall Tau correlation between the annotation and the explanations, which measures how consistently the importance weights ranks the sentences compared to the ground truth.",
"Similar to prior work we use precision-recall (PR) curves to characterize the extraction performance and report the area under the PR curve (AUC) up to 0.4 recall.",
"Tab.",
"2 reports the AUCs of the baselines and different variants of our proposed models with (+E) and without (-E) incorporating entity embeddings.",
"Specifically, we consider two different ways of incorporating the FGET representations.",
"Rows 3-4 show the AUCs of the two baseline models when we replace entity mentions with their FGET (+F), whereas rows 5-6 show the AUCs when we concatenate the FGET with the entity mentions (+FE).",
"From the results we can see that both baselines see clear performance gain from incorporating FGET into the representations.",
"Combining FGET with entity mention (+FE) achieves higher performance than using only FGET (+F), but our hypothesis is that the former will lead to less explainable models, which we will examine in the next section.",
"Finally the last three rows of the table show that adding LD to different base models can further improve model loo (H) loo (L) S x n,k (H) S x n,k (L) GI x n,k (H) GI x n,k (L) x n (H) x n (L) CNNs+ATT 0.16 -0.08 0.19 -0.02 0.20 0.04 0.69 0.21 DirectSup 0.19 0.12 0.08 0.15 0.29 0.19 0.26 -0.12 CNNs+ATT +F 0.21 0.10 0.36 0.03 0.23 0.00 0.73 0.11 DirectSup +F 0.24 0.15 0.31 -0.19 0.40 -0.17 0.28 0.15 CNNs+ATT +FE 0.01 -0.11 0.21 -0.14 0.20 -0.20 0.24 0.01 DirectSup +FE 0.14 -.12 0.19 -0.10 0.29 0.06 0.17 -0.11 CNNs+ATT +LD 0.18 -0.01 0.22 0.10 0.21 0 0.67 0.11 CNNs+ATT +LD +F 0.22 -0.11 0.43 0.09 0.28 0.07 0.70 0.12 DirectSup +LD +F 0.23 0.14 0.38 0.01 0.49 0.20 0.45 0.02 H:Highconfidence P ( r ) [0 . 76 , 1 . 0] L:Lowconfidence P ( r ) [0 , 0 . 25] Table 3: Kendall correlations for top confidence and least confidence range.",
"Similar to prior work, we observe that incorporating entity embeddings(+E) to the model leads to substantial performance gain across the board.",
"We also observe very similar performance gain when adding FGET and LD to the base models both with and without entity embeddings.",
"Our best model achieved an AUC of 0.341, which improves the previous state-of-the-art by 5.7%.",
"We apply the explanation mechanisms described in Section 4 to produce sentence importance scores for the test set and compute the Kendall Tau correlations for the importance scores using expl-eval .",
"For each model, to understand its behavior when it predicts correctly versus incorrectly, we consider the subset H ( L ) of bags/relations that the model outputs high (low) probability, i.e., p [0 . 76 , 1] ( [0 , 0 . 25] ), for the correct relation.",
"We report the performance on H and L separately in Tab.",
"3.",
"Comparing correlation values for H and L in Tab.",
"3, we observe that when the models are making correct and confident predictions ( H ), the values of correlation tend to be higher.",
"In contrast, when the model fails to detect the correct relation ( L ), we see substantially lower correlation scores.",
"By replacing entity mentions with their FGET in both CNNs+ATT and DirectSup (+F), we observe substantially increased correlation scores for correct predictions (H).",
"The improvement is consistent across all methods that are used to compute the importance scores.",
"Recall that Tab.",
"2 shows that concatenating FGET with entity mention (+FE) yields improved relation extraction performance for both CNNs+ATT and DirectSup.",
"In contrast, the explanation results presented here show that this comes at the cost of explainability, as demonstrated by the substantially lower correlation scores of CNNs+ATT+FE and DirectSup+FE.",
"This confirms our conjecture that removing entity mentions from the sentence representation leads to more explainable models, possibly by forcing the model to focus on the textual evidence contained in the sentence rather than the word embedding of the mentions.",
"Finally, we note that adding LD further improves the correlation score on H for S , GI and .",
"This suggests that learning from distractors is a valuable strategy that not only produces better relation extraction performance, but also enhances the model explanability.",
"In this work we provided an annotated test set with ground-truth sentence-level explanations to evaluate the explanation quality of relation extraction models with distant supervision.",
"Our examination of two baselines show that a model with lower relation extraction accuracy could have higher explanation quality.",
"We proposed methods to improve both the accuracy and explainability.",
"Our proposed methods are based on changing the representation of the sentences and learning from distractor to teach the model to ignore irrelevant information in a bag.",
"Our evaluation on the widely used FB-NYT dataset show the effectiveness of our method in achieving state-of-the art performance in both accuracy and explanation quality."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"objective",
"objective",
"result"
] |
[
"Event forecasting is a challenging, yet important task, as humans seek to constantly plan for the future.",
"Existing automated forecasting studies rely mostly on structured data , such as time-series or event-based knowledge graphs, to help predict future events.",
"In this work, we aim to formulate a task, construct a dataset, and provide benchmarks for developing methods for event forecasting with large volumes of unstructured text data.",
"To simulate the forecasting scenario on temporal news documents, we formulate the problem as a restricted-domain, multiple-choice, question-answering (QA) task.",
"Unlike existing QA tasks, our task limits accessible information, and thus a model has to make a forecasting judgement.",
"To showcase the usefulness of this task formulation, we introduce FORECASTQA, a question-answering dataset consisting of 10,392 event forecasting questions, which have been collected and verified via crowdsourcing efforts.",
"We present our experiments on FORECASTQA using BERT-based models and find that our best model achieves 61.0% accuracy on the dataset, which still lags behind human performance by about 19%.",
"We hope FORECASTQA will support future research efforts in bridging this gap.",
"1 1 Introduction Forecasting globally significant events, such as outcomes of policy decisions, civil unrest, or the economic ramifications of global pandemics, is a consequential but arduous problem.",
"In recent years there have been significant advances in applying machine learning ( e.g. , time-series prediction methods) to generate forecasts for various types of events including conflict zones (Schutte, 2017), duration of insurgency (Pilster and Bohmelt, 2014), civil unrest (Ramakrishnan et al., 2014a) and terrorist events (Raghavan et al., 2013).",
"Current automated forecasting methods perform well on problems for which there are sufficient structured data ( e.g. , knowledge graphs), but are not well suited for events for which such data may not exist.",
"Humans, though, can often accurately forecast outcomes by leveraging their judgement, domain knowledge, and prior experience (Tetlock and Gardner, 2016), along with the vast amounts of unstructured text data available to us ( e.g. , news ar-ticles).",
"We are able to identify and retrieve salient facts from the near-endless pool of unstructured information, synthesize those facts into coherent beliefs, and generate probabilistic forecasts.",
"Unfortunately, the process does not scale well in terms of the amount of information that must be processed and the number of events one has to forecast.",
"Here we address the above problem by formalizing a forecasting task, creating a dataset, and providing benchmarks to develop methods for the task.",
"Specifically, we formulate the forecasting problem as a multiple-choice Question Answering (QA) task, where the input is a news corpus, questions, choices and timestamps associated with each question, and the output is one of the given choices per question.",
"Our approach is rooted in the observation that both forecasting and QA follow a similar process: digesting massive amounts of textual data, identifying supporting pieces of evidence from text, and chaining different pieces to generate answers/forecasts.",
"Forecast Question Answering (FORECASTQA) introduces a novel timestamp constraint per question that prohibits the model from accessing new articles published after the timestamp .",
"By doing so, FORECASTQA simulates a forecasting scenario; each question's timestamp is chosen to ensure that the question is about the outcome of a future event.",
"To illustrate this, consider the question, Will primary schools in Europe admit non-vaccinated children around September 2019? in Figure 1, and the fact that models only have access to articles before 2019-09-01.",
"With the addition of this timestamp constraint, our query becomes a question about a future event in September, 2019 based on articles from the past; the model is now being tested for its forecasting ability 2 .",
"To answer the question, the model must find pertinent events from past information, resolve the temporal and causal relations between them, and finally make a forecasting judgement based on its interpretation of past information to answer the question.",
"Our task differs from that of other works that require an understanding of temporal relationships (Ning et al., 2020) and temporal commonsense reasoning (Zhou et al., 2019), as our task forces a model to make a forecasting judgement.",
"In support of the proposed FORECASTQA formulation, we construct a dataset of 10,392 yes-no and multiple-choice questions.",
"This data is collected via crowdsourcing based on news articles, where workers are shown articles and asked to come up with yes-no and multiple-choice questions.",
"We also crowdsourced appropriate timestamps for each question.",
"Finally, we design a method based on pre-trained language models to deal with retrieved articles for our task.",
"In our experiments, the methods using retrieved articles slightly outper-2 The ability to predict the outcome of future events based on unstructured text describing past events, without access to an extracted sequence of historical event triples, nor provided a fixed set of possible relations between events; as is the case with human forecasters.",
"Q: Who will drop Japan as a trading partner in August 2019?",
"Choices: South Korea ( answer ), South Africa, Syria, Portugal.",
"Article: Why Japan and South Korea just can't get along.",
"(1/1/19)",
"Apart from the fact of being one another's closest neighbours, the people of South Korea and Japan have a remarkable amount in common.",
"Economically, they are among one another's biggest trading partners.",
"And yet, time and again, relations between Seoul and Tokyo are marked, not by mutual support and co-operation but by anger, reproach and exasperation.",
"Reasoning Process: Seoul is in South Korea, Tokyo is in Japan ( commonsense world knowledge ).",
"Seoul and Tokyo are big trading partners ( language understanding lexical variations ).",
"The relations between Seoul and Tokyo are marked by anger, reproach and exasperation and these relations might cause trading relations to cease ( forecasting skills causal relation we can infer the answer from this part ).",
"form closed-book models, suggesting that our task is still challenging in that finding relevant information for forecasting and making a judgement are not straightforward.",
"Our best attempt achieves 61.0% accuracy on our dataset, a significant performance gap from human performance by 19.3%.",
"Event Forecasting.",
"There are several types of approaches exist to do event forecasting.",
"One approach could learn from highly structured event-coded data such as ICEWS (Boschee et al., 2015) and GDELT (Leetaru and Schrodt, 2013).",
"When these datasets are used for forecasting, they are often represented as a time series (Morstatter et al., 2019; Ramakrishnan et al., 2014b), in which each data point is associated with a timestamp.",
"Another approach is script-learning, in which a model is provided with a chain of events and a subsequent event and is asked to predict the relation between the chain and the future event (Hu et al., 2017; Li et al., 2018; Lv et al., 2019).",
"They require to convert text data into event triples and translate the questions and answer choices into their format, which limits the expressiveness of natural text.",
"However, unlike these datasets and approaches, FORECASTQA does not provide any structured data to a model.",
"The model must learn how to extract, keep track of, and link pertinent events from unstructured text to solve forecasting questions.",
"QA and Temporal Reasoning on Text.",
"There are several approaches for QA using unstructured text.",
"Extractive QA approaches rely on finding answer spans from the text that best answer a question (Rajpurkar et al., 2016, 2018; Yang et al., 2018; Kwiatkowski et al., 2019; Huang et al., 2019).",
"Multiple-Choice QA requires a model to pick the best answer from a set (Talmor et al., 2019; Sap et al., 2019; Zhou et al., 2019), and generative QA prompts the machine to produce its own answer (Khashabi et al., 2020).",
"Our dataset is a type of multiple-choice QA, but it differentiates itself from other QA datasets (all formats) in that the required answer does not exist in the provided text, nor is sufficient evidence provided to be able to answer a question with 100% certainty; a forecast is required.",
"We could convert our questions into alternative query formats such as a text-to-text format, but instead we stick to multiple-choice questions as humans often weigh the benefits of multiple choices when making a forecasting judgement.",
"QA datasets often exist to test certain types of reasoning.",
"One pertinent example of a reasoning type that QA tasks test is the understanding of temporal and casual relations (Jia et al., 2018a,b; Sun et al., 2018; Ning et al., 2020).",
"However, FORECASTQA requires more than just extraction and understanding of relations; a model must be able to extract and understand the relations present in the text with the goal of making a forecasting judgement about an event whose outcome is not found in the text.",
"Another type of reasoning tested in QA tasks is commonsense reasoning (Talmor et al., 2019) and even temporal commonsense reasoning (Zhou et al., 2019).",
"While questions in FORECASTQA often require commonsense to correctly answer, not all do; event outcomes do not always follow common sense.",
"Furthermore, our questions test forecasting abilities, which often includes various types of reasoning in addition to commonsense.",
"FORECASTQA is a question answering task whose goal is to test a machine's forecasting ability .",
"We consider forecasting as the process of anticipating the outcome of future events based on past and present data (Tetlock and Gardner, 2016).",
"We focus on forecasting outcomes of news-based events coming from topics such as politics, sports, economics, etc.",
"Training a machine to make forecasting decisions is inherently difficult, as the ground-truth label of event outcome ( e.g. , whether an event will occur) so often required for model training is only obtainable in the future.",
"To make progress in our goal, we devise a way to simulate the forecasting scenario by introducing a novel time constraint , allowing us to validate the machine predic-Will the Will there What will Whatis What kind Whattype What does Who will Who is How many How much Howwill How old Where will Whichcountry W h i c h c o un t r y ' s W h i c h p a r t y Which company Whywill Whenwill Isthe D o e s t h e A r e t h e Will there be electricity in Canada despite hurricane Dorian in September 2019?",
"tions by obtaining desired ground-truth labels.",
"There is also the difficulty of ensuring the quality of question generation via crowdsourcing (neces-sary when building a dataset of scale), due to possible human errors in question formation (Tetlock et al., 2017).",
"We have taken steps to ensure our questions cannot be answered with certainty using past data given the time constraint or commonsense knowledge, but the questions are tractable to answer with an educated guess (see Sec. 4.1).",
"3 Task Definition.",
"Formally, the input of the FORECASTQA task is a forecasting question Q with a corresponding ending timestamp t Q the last possible date where Q remains a forecasting question.",
"In addition, we have a set of possible choices, C , and a corpus of news articles, A ; the output is a choice C C .",
"Our task has a novel constraint that any retrieved article A A must satisfy t A < t Q .",
"In other words, models have access only to articles that are published before t Q .",
"We have ensured that the information required to solve the question deterministically comes out in an article, gold article , published after t Q , i.e., t gold article t Q .",
"Another way to think of our setup is that we are asking Q on the day before t Q , knowing that the information required to solve Q is not available yet.",
"This for-3 This is in contrast to open-domain QA (machine reading comprehension) (Kwiatkowski et al., 2019) where answers can always be found in some given passages.",
"input of FORECASTQA creation is a news article corpus and the output is yes-no/multiple-choice questions.",
"from existing QA tasks.",
"Challenges in FORECASTQA.",
"Due to the constrained open-domain setting and forecasting properties, testing a model's forecasting ability encompasses the following challenges: information retrieval (IR) on limited sources, understanding of temporal and causal relations between events, and finally a forecasting judgement.",
"Our time constraint limits the accessible articles and also creates more challenges than in standard open-domain QA; effective IR methods are necessary to anticipate what knowledge will be useful for predictions from past information sources.",
"Once useful articles have been retrieved, models should understand these articles and reason over pertinent facts from them.",
"Finally, these models use the gleaned knowledge to infer the outcome of a future event.",
"Unlike in other reading comprehension tasks, models cannot rely on the existence of an answer within the text, but must make an educated guess as to what will happen in the future.",
"While our task does encompass reasoning abilities tested in other datasets, no other tasks investigate these reasoning abilities in the context of predicting future events.",
"More analysis on reasoning types can be found in Sec. 4.2.",
"In this section, we describe how we construct our FORECASTQA dataset and analyze it.",
"The data collection is broken down into three sections: (1) gathering a news corpus, (2) generating question-answer-timestamp triples with distractor choices, and (3) verifying the triples' quality.",
"The data generation process is summarized in Fig. 3. News Corpus Collection.",
"We started by gathering English news articles from LexisNexis 4 .",
"We then curated a list of 21 trustful news sources and filtered articles based on their publishers; we also filtered out non-English articles.",
"Finally, we selected the five-year period of 2015-2019 and filtered out articles outside this period, leaving us with 509,776 articles.",
"This corpus is also used for retrieval in our task setting ( i.e. , constrained open-domain).",
"Q-Answer-timestamp Triple Creation.",
"5 Once we assembled the news corpus, we built (ques-tion, answer, timestamp) triples to accompany the new corpus as inputs for our task.",
"To generate the needed triples we looked to crowdsourcing via Amazon Mechanical Turk.",
"Our generation task consists of the following steps: (1) we selected a random news article from 2019 from the collected news corpus (these news articles are gold articles and will be hidden for experiments); (2) workers created questions, which if posed before the respective article's publication date would be seen as a forecasting question; (3) they indicated the answer, along with supporting evidence that the question consisted of (to ensure the correctness of the true answer); (4) they were asked to make multiple-choice distractors with their own knowledge and/or access to search engines; and (5) we ensured that a temporal phrase is present in the questions, for example: After May of 2020... , ... in June of 2021? to provide a temporal context (constraint) for each question, yielding more precise and well-defined forecasting questions.",
"Completion of this task results in the desired triple of: a forecasting question, an answer to the question (with distractor choices), and a timestamp as our temporal constraint.",
"The timestamp is set as the first day of the month in which the gold article was published.",
"To diversify questions in the dataset, we created two kinds of questions: binary yes-no questions and multiple-choice questions with four choices .",
"Multiple-choice questions start with one of the six Ws ( i.e. , who, what, when, where, why, and how) and are more challenging as they require determining the correctness of each choice.",
"Question Quality Verification.",
"We performed a separate crowdsourcing data verification to test and enforce the following criteria: (1) is answering the question a tractable problem given (relevant) 4 https://risk.lexisnexis.com 5 Due to the limited space, for more details of our triple creation guidelines for human annotators, verification steps, and screenshots of our data col-lection/verification AMT interfaces, please refer to Sec.",
"A of the appendix.",
"past",
"articles?, and (2) is the question deterministically answerable given any article adhering to the question's temporal constraint?",
"If a question is too difficult, i.e. , an educated guess to the answer (when given relevant, constraint-adhering articles) is not possible, then we filter the question out.",
"On the other hand, if the questions are answerable with certainty using past articles, or commonsense/world knowledge, then they are not considered to be forecasting questions.",
"The desired response (majority vote from 3 annotators) is a yes for criterion (1) and no for (2), as that would show that the tuple of question and time constraint simulates the desired forecasting scenario.",
"With the above method, we filtered out 31% of the questions collected in the triple creation step and were left with 5,704 yes-no questions and 4,513 multi-choice questions.",
"More details about the verification step are included in Sec.",
"A of the appendix.",
"To better understand the properties of the questions in FORECASTQA, we examine: 1) a few data statistics 2) types of questions asked, and 3) the types of reasoning required to answer our questions.",
"Summary Statistics.",
"FORECASTQA dataset is composed of 10,392 questions, divided into a 80/10/10 split of train, dev, and test data.",
"Our 10k questions are roughly evenly split between multiple-choice and yes-no binary questions (Ta-ble 2).",
"Over 17K distinct words were used to construct our questions and we have 218 unique time constraints associated with them; time constraints range from 2019-01-11 to 2019-11-12.",
"We include additional statistics in Sec.",
"Types of Questions.",
"To understand the types of questions in FORECASTQA, we examined the popular beginnings of sentences and created a tree-map plot (see Fig. 2).",
"As shown, nearly half the questions start with the word will (44%), a result of over half of the questions being yes-no questions.",
"Reasoning Types.",
"To examine types of reasoning required to answer our questions we sampled 100 questions and manually annotated them with reasoning types.",
"Due to the forecasting nature of our dataset, we are particularly interested in questions containing the forecasting ability and thus spend more time looking into these questions.",
"Our condensed results can be found in Figure 4, and more results from our cataloguing effort can be found in Sec.",
"C of the appendix.",
"Note that most questions contain more than one reasoning type.",
"To evaluate the forecasting capabilities of recent multi-choice / binary QA model architectures on FORECASTQA, we provide a comprehensive benchmarking analysis in this work.",
"We run the experiments in two settings: (1) closed-book and (2) constrained open-domain setup.",
"In the closed-book scenario only Q (question) and C (answer choices) are provided to the model ( Q, C ) , while A (news articles) is provided for setting (2), ( Q, C, A ) 6 .",
"We run these settings to understand the difficulty of both the closed-book and open-domain challenges presented by the questions in FORECASTQA.",
"6 t Q is always applied to A , we left it out of the notation for simplicity.",
"For both settings, we explore several baseline models, but all follows a general architecture of a text encoder f and an optional context aggregation module g to aggregate information from a set of retrieved articles.",
"Fig. 5 shows the architectures used.",
"We model both yes-no and multiple-choice questions as a binary classification task; a model's prediction is the class with the largest probability.",
"Below we introduce the details of our baselines.",
"Text Encoder.",
"We use pre-trained language model, BERT (Devlin et al., 2019), as a text encoder ( f from above) 7 .",
"f is designed to deal with ( Q, C ) and ( Q, C, A ) inputs, where A is a set of time-stamped articles that are retrieved from A to answer Q .",
"Each input of f is transformed into [ [CLS] Q [SEP] C [SEP] A i ] (for each A i A , C C ), or [ [CLS] Q [SEP] C ] (for each C C ) if articles are not supplied.",
"The [CLS] token is the same as the one commonly used for fine-tuning PTLMs for a classification task, and [SEP] is the special separator token.",
"The embedding of [CLS] is then used for predictions with an MLP layer (the leftmost model architecture in Fig. 5), or as input into a context aggregation module (the middle architecture in Fig. 5) subsequently introduced.",
"Context Aggregation (AGG).",
"Two architectures are used when aggregating information from multiple, time-stamped articles A retrieved for a question.",
"(1) Temporal Aggregation: This aggregator utilizes temporal ordering of the retrieved articles.",
"Articles are sorted by their timestamps and their [CLS] token representation from f are aggregated by a Gated Recurrent Unit (GRU) (Cho et al., 2014) with a MLP head to make final predictions.",
"(2) Set Aggregation: Alternatively, we ignore the temporal ordering of articles and use a maxpooling operation 7 We did not include more recent pre-trained language models ( e.g. , RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2020), T5 (Raffel et al., 2020)) or pre-trained QA models like UnifiedQA (Khashabi et al., 2020), as these models are trained using text data published after the earliest timestamp in our dataset (2019-01-01), meaning information leakage could occur (and violates the forecasting setup).",
"We tested more LMs in Sec.",
"E.5 of appendix.",
"on the [CLS] token representations of each article.",
"This pooled representation is passed to an MLP layer to make a prediction.",
"Comparison between these aggregations helps understand the effect of modeling temporal order of evidence.",
"These two aggregation modules are denoted by AGG (GRU) and AGG (Maxpool), respectively.",
"Multi-document Summarization (MDS).",
"Rather than conducting context aggregation of the retrieved articles, we consider an MMR summarizer (Carbonell and Goldstein, 1998) which performs extractive, multi-document summarization of text to generate a summary A summ (rightmost architecture in Fig. 5).",
"The summary article A summ is treated as if it is an A i A and fed into a text encoder along with Q and C which then produce the [CLS] embedding for making a prediction.",
"We name this method MDS.",
"Integrated Approach.",
"To take the best of both worlds in ( Q, C ) and ( Q, C, A ) settings, we integrate two architectures (the leftmost and middle ones in Fig. 5).",
"We concatenate the last two hidden representations of each architecture before passing the concatenated representation through a shared MLP layer.",
"We use BERTLARGE as f in both architectures, AGG (GRU) for g and call this model BERT LARGE ++ (integrated) in Table 3. Other Baselines.",
"We also consider other baselines: ESIM (Chen et al., 2017b), BI DAF++ (Clark and Gardner, 2018), prepending extracted open event triples (Liu et al., 2019a) to BERT input, and a script learning approach, SAM-Net (Lv et al., 2019).",
"We modify the approaches to fit into our setup.",
"Detailed descriptions of each baseline method are included in Sec.",
"E.3 of appendix.",
"We adopt two types of settings: the closed-book setting ( Q, C ) and the constrained open-domain setting ( Q, C, A ) .",
"In the constrained open-domain setting, we use BM25 (Robertson et al., 1995; Qi et al., 2019) as our IR method 8 to obtain A , 10 retrieved articles.",
"We also explore other IR methods in the later section.",
"Note that we retrieve articles that do not violate the time constraints.",
"We feed the question Q as a query and limit our access to articles in A by t Q .",
"Additionally, we validate the 8 Details of IR methods are described in appendix Sec.",
"E.2.",
"articles instead of retrieved articles (Sec. 6.3).",
"Evaluation Metrics.",
"Because forecasting is uncertain, a system's prediction probabilities indicate its confidence answering the question.",
"In addition to accuracy, we consider Brier score (Brier, 1950), which measures the mean squared error of probabilities assigned to sets of answer choices (outcomes).",
"Formally, Brier = 1 N (cid:80) Ni =1 (cid:80) Cc =1 ( p ic y ic ) 2 , where p ic is the probability of prediction; y ic is a label indicator for class c of the instance (1 or 0), N is the number of prediction instances, and C is the number of classes (2 or 4).",
"The highest Brier score is 0 (probability 1 for the correct class, probability 0 else), while the worst possible Brier score is 2 (probability 1 for the wrong class, probability 0 else).",
"A confident model gets low Brier scores.",
"To benchmark human performance, seven annotators (computer science graduate students) who were not involved in question generation were asked to answer 150 randomly sampled questions from the test set.",
"We consider two scenarios: 1) annotators are provided with retrieved articles, A ; and 2) annotators can access any article published before the timestamp via Google Search.",
"Moreover, as annotators live in the future with respect to the timestamp of a question, they might already know the actual answer.",
"To avoid the over-estimation Methods GRU Maxpool MDS BERTBASE , TF-IDF 53.2 53.9 51.6 BERTBASE , DPR 53.7 54.6 54.3 BERTBASE , BM25 55.4 54.2 52.0 BERTLARGE , TF-IDF 56.5 55.4 55.0 BERTLARGE , DPR 56.1 59.4 54.6 BERTLARGE , BM25 59.1 58.6 54.7 Table 4: Accuracy with different retrievers: BM25, TF-IDF, and dense passage retrieval (DPR).",
"of accuracy, we asked the annotators to not use their future knowledge.",
"If they felt this is not possible, we asked them to skip the question.",
"On average, 28.3% of questions are skipped.",
"Given this setup, humans achieve 71.2% and 79.4% accuracy respectively, for the two scenarios when taking a majority vote for each question; we also observed good inter-annotator agreement.",
"The two scenarios are referred as ( ) and ( ) in Table 3. 6.3 Results and Performance Analysis Results on the Constrained Open-domain Setting.",
"Table 3 shows the results of baseline methods for comparison.",
"We compare pre-trained language models with different context aggregators and other baselines.",
"The integrated model, BERTLARGE ++ shows the best performance in terms of accuracy, while BERTLARGE (closed-book) shows the best Brier score.",
"Unlike the accuracy metric, the Brier score penalizes overand underconfident forecasts (Mellers et al., 2014) thus the best model under each metric can be different.",
"The marginal differences in performance between the two settings suggest that access to information (text evidence) alone does not solve the forecasting problem.",
"We hypothesize an inability to encode salient relations for forecasting purposes prevents the additional information from proving useful.",
"Among the aggregators in BERTBASE , the GRU aggregator outperforms other aggregators and summarizers.",
"This suggests that utilizing articles' temporal order helps the reasoning.",
"Overall, baselines fall behind human performance by over 10% points given the same retrieved articles.",
"Study of Different IR Methods.",
"We further test several retrieval methods: BM25 (Robertson et al., 1995; Qi et al., 2019), TF-IDF (Chen et al., 2017a), and a pre-trained dense passage retriever (DPR) (Karpukhin et al., 2020).",
"As in Table 4, BERTLARGE with DPR retriever and the Maxpool aggregator shows the best performance than other combinations.",
"However, DPR does not achieve the best accuracy for all methods.",
"This implies that 1) Methods / Metrics GRU Maxpool ACC ( ) Brier ( ) ACC ( ) Brier ( ) w/o timestamps 55.4 0.583 54.2 0.568 Pre-pend timestamps 54.2 0.634 54.8 0.599 Binary timestamp encoding 51.1 0.623 55.6 0.624 Char-RNN timestamp encoding 54.0 0.640 54.3 0.620 Table 5: Study on modeling article timestamps (publica-tion dates) in the constrained open-domain setting.",
"Ablation on Timestamp Modeling.",
"We conduct an ablation study on modeling time information (publication date) of the retrieved articles, as seen in Table 5.",
"We test:",
"a) pre-pending date string as BERT input,",
"b) using binary encodings of dates 9 and concatenate with article encoding before aggregation, and",
"c) using char-RNN (Goyal and Durrett, 2019) for encoding date string before aggregation 10 .",
"We find that using binary encodings of dates improves the accuracy for the maxpool aggregator.",
"However, the GRU aggregator's accuracy decreases when given date information.",
"We conjecture that our modeling for the time information of each article is not strong enough to help forecasting.",
"We leave more sophisticated modeling for future work.",
"Answerability of Questions.",
"To validate that the questions in FORECASTQA are indeed answerable, we convert our setup into a machine reading comprehension (MRC) task find an answer given an assumed appropriate context.",
"We provide the model with a gold article or the evidence sentence (Sec. 4.1).",
"Since pre-trained models have achieved high performance on MRC tasks (Rajpurkar et al., 2016), we expect adequate performance when provided the correct context.",
"As seen in Table 6, we observe that in closed-book setting, BERT is able to beat out a random baseline, but it still does not 9 https://temporenc.org 10 Details are described in appendix Sec.",
"(a) Varying amounts of data.",
"(b) Different question types.",
"Figure 6:",
"(a) Test accuracy of BERTBASE trained with varying amounts of training data, with human performance (79.1%) shown in orange, and",
"(b) development accuracy breakdown by different types of multichoice questions.",
"perform well; implying our questions are not trivial for BERT, and context is required to answer them correctly.",
"When given the gold article, BERT achieves 76.9% (+22%) and it even performs better (84.4%) given the evidence sentence.",
"This all implies that given the right information, our forecasting questions can be answered correctly.",
"Study of Data Efficiency.",
"To examine how models might perform with less/more training data, we evaluate BERTBASE (closed-book) on the test set, by training it with varying amounts of labeled data.",
"Fig. 6a shows the the resulting learning curve.",
"We observe the accuracy of the model is expected to reach 70%, assuming 100k examples which is still 9% point lower than human performance.",
"Results on Different Question Types.",
"We test BERTBASE (closed-book) on different question types of multi-choice questions from our development set (Fig. 6b).",
"We find that the accuracy of the model varies across different question types: how questions are the most difficult to predict while higher accuracy is achieved on why questions.",
"Also for yes-no questions, the method achieves 69.5% on yes questions and 62.9% no questions, indicating that there is no significant bias towards certain type of binary questions.",
"Error Analysis.",
"We observe 4 main categories of errors produced by the methods in our analysis: (1) retrieving irrelevant articles, (2) incorrect reasoning on relevant evidence, (3) lacking (temporal) common sense, and (4) lacking numerical knowledge.",
"Please refer to Sec.",
"E.7 of appendix for examples and in-depth discussions of these errors.",
"Forecasting is a difficult task that requires every possible advantage to do well.",
"It would be wise to harness this pool of unstructured data for training automatic event forecasting agents.",
"To utilize this form of data for forecasting, we proposed a question-answering task that requires forecasting skills to solve FORECASTQA, and provided the accompanying dataset.",
"Various baseline methods did not perform well, but this is not surprising given the inherent difficulty of forecasting.",
"Our benchmark dataset can benefit future research beyond natural language understanding and hope forecasting performance will be significantly improved."
] | [
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result"
] |
[
"Language is gendered if the context surrounding a mention is suggestive of a particular binary gender for that mention.",
"Detecting the different ways in which language is gendered is an important task since gendered language can bias NLP models (such as for coreference resolution).",
"This task is challenging since genderedness is often expressed in subtle ways.",
"Existing approaches need considerable annotation efforts for each language, domain, and author, and often require handcrafted lexicons and features.",
"Additionally, these approaches do not provide a quantifiable measure of how gendered the text is, nor are they applicable at the fine-grained mention level.",
"In this paper, we use existing NLP pipelines to automatically annotate gender of mentions in the text.",
"On corpora labeled using this method, we train a supervised classifier to predict the gender of any mention from its context and evaluate it on unseen text.",
"The model confidence for a mention's gender can be used as a proxy to indicate the level of genderedness of the context.",
"We test this gendered language detector on movie summaries, movie reviews, news articles, and fiction novels, achieving an AUC-ROC of up to 0 .",
"71 , and observe that the model predictions agree with human judgments collected for this task.",
"We also provide examples of detected gendered sentences from aforementioned domains.",
"Language can be extraordinarily gendered (Moul-ton et al., 1978).",
"Genderedness in language is when we use words or phrases that are stereotypical or indicative of a particular gender (we only consider male vs female in this work) (Prior, 2017).",
"It is important to detect this bias in language since not only is this bias propagated to the readers (Menegatti and Rubini, 2017), but also machine learning algorithms trained on gendered corpora tend to become biased (Zhao et al., 2018a; Rudinger et al., 2018), often aggravating the disparity (Zhao et al., 2017).",
"Bias in language and machine learning systems can lead to unfair treatment, e.g., early work by Moulton et al. (1978) shows that males have an advantage in contexts where they are referred to by a putative neutral term.",
"Recent work on coreference resolution systems (Zhao et al., 2018a) shows that bias in machine learning systems originates from training on existing corpora, resulting in male-stereotyped professions like surgeon and president incorrectly resolved to males instead of females.",
"Such biases in machine learning systems can lead to unintentional biases in downstream tasks producing effects like preferential treatment to male candidates over female candidates when selecting resumes (Dastin, 2018).",
"Detecting these biases is the first step in finding a solution.",
"Most of the current works for related problems tend to be domain-specific (Fu et al., 2016), rely on techniques such as simple counting of gender occurrences (Ali et al., 2010), or use manually constructed lexicons and features for analysis (Trix and Psenka, 2003), and thus do not generalize well and require expensive manual supervision.",
"Existing approaches also tend to either focus on the whole corpus/article being gendered (Schmader et al., 2007; Trix and Psenka, 2003) or a specific word being gendered (Caliskan et al., 2017; Bolukbasi et al., 2016; Zhao et al., 2018b), thus failing to capture the subtle occurrences of genderedness at mention-level or giving a quantifiable measure of how gendered the text is.",
"In this paper, we develop a method that eliminates the manual annotation requirement, and can generalize to words, phrases, sentences, articles, as well as whole corpora.",
"We present a framework for automated data labeling by combining existing NLP pipelines to identify sentence boundaries and mentions (using NER tagger) and using a gen-Female Their client is who has got gorgeous hair.",
"der classifier for names to get the gender of the mentions.",
"We build a classifier using this annotation to predict gender of a mention only from its context and quantitatively analyze the genderedness of various contexts using this model.",
"Figure 1 shows example inputs and outputs for our model.",
"Input to the model is the context sentence around a target mention (indicated by colored boxes), and the model prediction is the gender of this mention.",
"For the first sentence, the model uses context information coming from gorgeous hair' to predict the gender of mention to be female, indicating a more gendered sentence.",
"Similarly for the third sentence, model uses contextual information from the adjective lovely' and its proximity to the target mention to predict female.",
"For the second sentence, the target mention is subject for the verb in phrase intends to marry' and the object to be married is lovely Gauri'.",
"Our model uses this information to predict gender of target mention as male.",
"Since our data labeling pipeline is automated, we can easily annotate millions of documents and train complex classifiers that can accurately model the context.",
"These classifiers can be used to predict the gender of a mention from given context and quantify genderedness.",
"We present instantiations of this framework on four domains: news articles, novels, movie summaries, and movie reviews.",
"Since we are the first to study the task of mention-level gender detection, we evaluate the difficulty of the task and introduce the first benchmark using a user study.",
"We find that the task is challenging, and our model predictions corroborate with human predictions.",
"We present qualitative results of our model showing genderedness at different granularities word, phrase, sentence, and corpus.",
"Gender Bias in Datasets A number of approaches have considered gendered language use.",
"Blatt (2017) shows that shivered, wept, screamed are disproportionately used to describe women while muttered, grinned are used to describe men.",
"Studies on gender bias in student evaluations for instructors (Eidinger, 2017; MacNell et al., 2015; Boring et al., 2016; Centra and Gaubatz, 2000), and recommendation letters (Trix and Psenka, 2003; Schmader et al., 2007) also show similar disparities in terms of harshness of evaluations, length of letters, descriptive words, and use of standout adjectives.",
"Bias in language has also been studied for textbooks (Otlowski, 2003; Gharbavi and Mousavi, 2012; Macaulay and Brice, 1997), Wikipedia edits (Recasens et al., 2013), political text (Yano et al., 2010), media content (Ali et al., 2010; Len-Ros et al., 2005; Smith, 1997), sports journalism (East-man and Billings, 2000; Tyler Eastman, 2001; Kin-nick, 1998; Fu et al., 2016) and in movie character portrayals (Ramakrishna et al., 2017; Sap et al., 2017).",
"These approaches are domain-specific and rely on techniques like counting gender occurrences, manually annotating words or mentions, constructing list of keywords and lexicons, carrying out surveys, etc.",
"Our approach works across domains, and does not require manual annotations.",
"There has been significant amount of work in detecting author's gender (Koppel et al., 2002; Herring and Paolillo, 2006; Sarawgi et al., 2011; Mukherjee and Liu, 2010; Burger et al., 2011) for text, speaker gender for dialogues (Schofield and Mehr, 2016) in films, and to detect and reduce biases in these (Tatman, 2017; Thelwall, 2018; Koolen and van Cranenburgh, 2017).",
"While we do not focus on predicting the gender of the author, our framework can be used as a tool to compare the use of gendered language across various authors, or across various works by the same author.",
"Gender Bias in NLP Pipelines There has also been recent interest in examining the role of gender bias in existing NLP pipelines.",
"Caliskan et al. (2017) and Bolukbasi et al. (2016) show that word embeddings exhibit gender stereotypes.",
"Garg et al. (2018) build on this idea, using word embeddings to characterize the evolution of gender stereotypes during the 20 th and 21 st centuries.",
"Subsequent works attempt to mitigate this bias in embeddings (Zhao et al., 2018b).",
"Zhao et al. (2019) extend the idea to contextualized word embeddings (Peters et al., 2018), and quantify and propose ways to mitigate gender bias in them , while Gonen and Goldberg (2019) show that current approaches for debiasing embeddings are superficial.",
"Researchers have studied gender bias outside word embeddings as well.",
"Zhao et al. (2017) show that datasets for multi-label object classification and visual semantic role labeling are gender-biased and that models trained on these datasets amplify this bias, while Rudinger et al. (2017) find racial, religious and gender stereotypes in the SNLI corpus and Park et al. (2018) analyze gender bias in abusive language datasets.",
"Zhao et al. (2018a) and Rudinger et al. (2018) detect bias in existing coreference resolution systems, and Webster et al. (2018) build a gender-balanced labeled corpus of ambiguous pronoun-name pairs to understand this bias.",
"All of these either focus on whether the corpus as a whole is gendered or if a single word is gendered (in case of word embeddings).",
"Instead, we train a classifier to detect and quantify gendered language at mention-level.",
"Our framework can also be used to quantify genderedness at different levels mention, sentence, document, or corpus.",
"Gendered Language Gendered language is the use of words and phrases that discriminate 1 the gender of a subject.",
"In other words, the gender of mentioned person should be easy to predict from context if the text is gendered.",
"Examples of gendered language can be found in the use of stereotypes like linking women to homemakers and men to programmers (Bolukbasi et al., 2016) or when pronouns, adverbs, adjectives, nouns are used carelessly, e.g., when a masculine pronoun mention he' is used to refer to both sexes (Cottier, 2018; Stout and Dasgupta, 2011) or when pronoun mentions are used exclusively to define professions by gender (using she' when talking about a nurse).",
"Detecting gendered language is incredibly challenging since the ways in which gender is expressed can vary considerably across authors, domains, and time periods, making any approach that requires annotations to be corpus-specific.",
"Proposed Architecture We are interested in determining the extent to which language in context of the mention reveals the gender.",
"Humans learn to detect gendered language based on a lifetime of reading and observing society, and learning language specific to each gender.",
"We use this intuition to propose an automated framework for de-1 in the machine learning sense of the word Detecting Genderedness: He is good at sports Classifier Text is gendered if they match Context, (cid:126)x predicted gender, y true gender, y Training: Train Classifier to Predict Gender She is good at sewing.",
"tecting mention-level genderedness for any corpus (an overview is shown in Figure 2).",
"The input to the gender detector is the context (sentence) without the target mention and the output is the detector prediction for gender of that mention.",
"For a mention i , let C ib be the context before the mention, C ia be the context after the mention and f be the gender detector.",
"Then, p i = f ( C ia , C ib ) (1) is the detector's probability (confidence) that the mention i is female, i.e. p i close to 0 indicates high confidence for predicting male while close to 1 indicates high confidence for females.",
"We use the detector's probability of the true gender of the mention ( g i ) as an estimate of how gendered the text is: a high probability indicates that gender is heavily reflected in the context.",
"We define this as the gendered score, given by G im here: G im = (cid:40) p i True gender g i is female 1 p i True gender g i is male (2) We define gendered score for a document as the average of gender score for all of its mentions.",
"i.e. G doc = 1 NN (cid:88) i =1 G im (3) where N is the number of person mentions.",
"As detector, rather than relying on frequency-based linear models or Naive Bayes model, we can use simple as well as more complex classifiers that can accurately model semantics and syntax of the context.",
"For example, we use bag-of-words models (with logistic regression classifier) and recurrent neural network models as described in Section 4.2.",
"In this section, we give details about the datasets, our pipeline for automated data labeling, filtering and processing applied to contexts in order to remove obvious gender information, and the classifiers used to classify gender of a given mention.",
"we analyze text from four different domains: New York Times articles from the Annotated Gigaword corpus (Napoles et al., 2012) Novels from Gutenberg corpus 2 IMDB Movie Reviews (Maas et al., 2011)",
"Movie Summaries (Bamman et al., 2013) These domains cover a variety of writing styles.",
"While the novels represent fictional writing, news articles are non-fictional.",
"Movie summaries dataset describes the plot of the movies, i.e., how gender is represented in the plots, and the movie reviews dataset provides the ways in which people express their views on the plots, i.e., how gender is represented in user perception of the movie.",
"We train classifiers for each domain to predict the gender of mentions from the context they appear in, and use the resulting classifiers to detect gendered language.",
"Similar idea is explored in Choi et al. (2018) to detect the type of mention from the context.",
"For news, data from the first 6 months for every year is used for training, next three for validation, and last three for testing.",
"For novels and movie summaries, we divide the data randomly into 50 : 20 : 30 split for train, validation 2 Project Gutenberg, from www.gutenberg.org 1 Miss Mary Briganza will go to Korea with her parents.",
"Labeling Gender for Mentions We illustrate our processing pipeline via the example sentence in Figure 3.",
"For mention-level gender prediction, we need a dataset with identified person mentions and their genders.",
"Since we do not have labeled data, we need to identify mentions in contexts, and assign gender labels to them.",
"Along with pronouns he' and she', we use spacy 3 to tag all corpora with NER tags to identify the set of person mentions.",
"We use the SSN baby names dataset 4 from 1880 to 2016 to assign gender to each name.",
"If a name is associated with more than one sex, we exclude it if it is ambiguous (being less than 4 times more frequent for one sex), but otherwise assign it to the more frequent sex.",
"If a name is absent from our list of names, we replace the mention with a placeholder < Person > .",
"Table 1 shows the count of male and female mention-context pairs generated using this pipeline.",
"Processed sentence after this step is sentence 2 in Figure 3.",
"Filtering and Input Context Processing To remove obvious, uninteresting gender information, we discard sentences that contain any word from a gender-specific lexicon as used by Bolukbasi et al. (2016) such as gender-specific occupation words and gender-specific familial relation words, e.g., man', woman', prince', and hostess'.",
"Complete list is given in Appendix A. For contexts that contain gender-indicative pronouns (him', her', his', 3 https://spacy.io 4 https://www.ssa.gov/oact/babynames/ hers', himself', herself'), we replace them with a gender-neutral pronoun ( them', their' ).",
"All other mentions in the context (including he' and she') are replaced with a gender neutral word, and titles ( Mr',Mrs',Miss' ) are replaced with a gender-neutral title word.",
"Sentence 3 in Figure 3 is the result after this stage.",
"The input to classifier is sentence 4 in Figure 3 and the target is 1 (for female).",
"We extract such mention-context pairs from large text corpora to train classifiers that can predict the gender of individual mentions from their context using minimal manual supervision (as illustrated in Figure 2).",
"Bag-of-words and ngrams We construct bag-of-word classifiers by selecting the 50 , 000 most frequent words from the training subset, and bag-of-ngrams models by selecting the 100 , 000 most frequent n-grams (up to 3 -grams), for each dataset.",
"We explore a number of classifiers like logistic regression, support vector machines, random forest classifier, and choose logistic regression classifier since it consistently performs better than others.",
"LSTMs and CNNs We use both uniand bidirectional LSTM recurrent neural networks for the context.",
"In the 2-way LSTM model, we use two separate LSTMs: one for context before the mention, and the other for context after the mention.",
"The direction of LSTM for latter part is reversed so that the model gives more importance to words closer to the target mention.",
"This is followed by a sigmoid layer after the concatenation of the final hidden states.",
"The input layers are initialized using the Glove vectors (Pennington et al., 2014), and are updated during training.",
"We train the classifier with log-loss and Adam (Kingma and Ba, 2014) optimization algorithm, including dropout (Srivastava et al., 2014) and early stopping for regularization.",
"Hyper-parameters are tuned for different domains separately.",
"We experiment with ELMo embeddings (Peters et al., 2018), convolutional neural network (CNN)-based architectures, vanilla recurrent neural network (RNN), and gated recurrent unit (GRU) (Cho et al., 2014) models as well.",
"Performance Since our datasets are imbalanced, we use AUC-ROC as a performance metric.",
"Table 2 shows AUC-ROCs for various models for all the datasets.",
"Conventional bag-of-word/ngrams classifiers exhibit AUC-ROCs comparable to more Reviews Summaries News Novels Bag-of-ngrams 0.64 0.62 0.70 0.71 Bag-of-word 0.63 0.62 0.70 0.71 Single LSTM 0.67 0.62 0.63 0.63 2-way LSTM 0.67 0.66 0.68 0.67 2-way LSTM + ELMo 0.67 0.65 0.70 0.69 2-way RNN 0.65 0.63 0.65 0.63 2-way GRU 0.67 0.66 0.69 0.66 CNN 0.66 0.64 0.68 0.64 Table 2: AUC-ROCs for different models (evaluated on test data).",
"complex LSTM and CNN classifiers.",
"We use 2-way LSTM as the classifier for our final analysis.",
"To assess the difficulty of this task and to compare performance of our gendered language detector against a human baseline, we use Amazon Mechanical Turk to get human annotations for 500 random sentences from the test sets of each domain.",
"Task Description Turkers are shown sentences with missing mention, e.g., Sandwich maker said mojo and fresh roasted pork are key to a great Cuban sandwich' , and are asked to guess the gender of the missing mention.",
"We sample the sentences such that the true labels (male/female) are balanced.",
"For our study, we use two tasks that slightly differ from one another in the decisions turkers need to make.",
"In one task, turkers are given only two options, male and female , forcing them to make a choice.",
"In the second task, turkers are given five options on the Likert scale: extremely likely male, likely male, neutral, likely female, extremely likely female allowing for a finer scale of decision.",
"We include examples in the instructions, and a few extremely easy examples as probes to verify quality (Munro et al., 2010).",
"Each worker is shown 35 sentences from a single domain.",
"On average, we collect 7 human annotations per sentence.",
"Do humans predict gender well?",
"Sentences that do not have a clear majority are removed from our analysis.",
"As a measure of inter-rater reliability, we compute pairwise and majority agreement, in Table 3.",
"Percentage improvement over chance agreement is higher for 5-scale rating compared to 2-scale rating indicating that users tend to agree more when they are able to tag the borderline (possibly confusing) mentions as gender-neutral (chance agreement is 0 . 5 for 2-scale, and 0 . 2 for Dataset Pairwise Majority 2-Scale 5-Scale 2-Scale 5-Scale Reviews 0 . 62 0 . 32 0 . 74 0 . 52 News 0 . 65 0 . 38 0 . 77 0 . 55 Novels 0 . 60 0 . 33 0 . 73 0 . 52 Summaries 0 . 61 0 . 33 0 . 73 0 . 53 Combined 0 . 62 0 . 35 0 . 74 0 . 53 Table 3: Pairwise and majority inter-annotator agreement for instances with clear majority. 2-Scale indicates when users are asked to indicate male or female, while 5-Scale indicates gender on a scale of 5. E x t r e m e l y m a l e L i k e l y m a l e N e u t r a l L i k e l y f e m a l e E x t r e m e l y f e m a l e Human predictions 0 40 80 120 160 200 240 280 320 360 400 440 480 520 C o un t Male Female Figure 4: Comparing Human Predictions to Truth. x-axis represents the human prediction for mentions, while the green and pink bars represent counts of true male and female mentions respectively. 5-scale task).",
"To analyze the kind of mistakes humans make, we show the true distribution of male and female mentions compared against human predictions in Figure 4.",
"42% of examples are predicted Neutral' by humans showing that the task is pretty difficult for humans as they often find mention gender ambiguous.",
"Further, Likely male' and Likely female' categories have around 30% wrong predictions as well.",
"These high error rates explain low F1 of 0 .",
"52 for human annotations.",
"Extremely male' and Extremely female' have the least error rates showing that humans are more precise when they are more confident about the predictions.",
"Does our model match humans?",
"In order to compare human annotations against model predictions more concretely, we choose to use Kendall's c statistic (Berry et al., 2009), because it allows us to compare two variables when their underlying scales have different numbers of values.",
"Like correlation coeeficients, c ranges from -1 (fully negative association) to +1 (fully positive associ-0.0 0.2 0.4 0.6 0.8 1.0 Model probability for female Extremely male Likely male Neutral Likely female Extremely female Figure 5: Human and Model Predictions.",
"ation).",
"c between humans and our LSTM model predictions vary from 0 .",
"23 to 0 .",
"36 showing positive correlation.",
"We also look at classifier probability distribution for human decisions shown in box and whisker plot in Figure 5, where x-axis is the classifier probability of the mention being female.",
"The median value of classifier prediction for each category (shown in green line) shifts towards female prediction as we move from Extremely male' to Extremely female' category, corroborating the agreement between humans and model.",
"We first show aggregate word level and phrase level analysis, then show more complex and subtle sources of gendered language on sentence level.",
"Word-level Analysis Table 4 shows the top nouns and verbs extracted using bag-of-word classifier.",
"We also train separate classifiers only on nouns, on adjectives, and on verbs in order to find out which are most informative for gender.",
"Classifiers trained only on nouns performs best, indicating that nouns have most information.",
"Top male-indicative nouns stem from typically male-dominated sports, while top female-indicative nouns are related to fashion and home industry.",
"Phrase-level Analysis Table 5 shows some of the top phrases for predicting males and females for different domains extracted using bag-of-ngrams models.",
"We see phrases like clasped their hands' , fashion show' , jealous rage' , was asked to' for females, while clockwork orange' , 'action hero' , Female-specific Nouns Male-specific Nouns Summaries: cherie, elisabet, crawlers, plastics, governess, cheerleader, prostitution, overdosing, bimbo, spinner Novels: godmother, melvina, skirt, girlhood, lucile, womanly, eyebright, womanhood, shawl, dressmaker, demurely Reviews: comedienne, floriane, slut, adela, tch, topless, actress, tits, feminist, modeling, redhead, helen, vamp, bettie News: gymnasts, dietitian, lpga, hingis, feminist, dowd, soren-stam, wie, receptionist, omnimedia, quilting, homemaker Summaries: quacker, platoon, tweety, shemp, cellmate, ham, nibbles, falstaff, pup, towel, mousehole, bullies Novels: disciples, yussuf, rifle, jr, pepe, cigar, colleague, followers, erasmus, judas, opponents Reviews: seagal, inventor, panther, opponent, sellers, ratso, comedian, lawman, yossi, creators, brutus, ted News: spurs, astros, nicks, jets, sprewell, nets, vikings, clippers, lakers, holyfield, sonics, councilman, nba, bucs, pitches Female-specific Verbs Male-specific Verbs Summaries: giggles, conceive, type, spurned, distorted, strokes, railing, rehearse, gag, disowned, plaguing, forgo Novels: sobbed, sew, blushed, wailed, pouted, scream, moaned, giggled, weeping, blushing, sob, shrieked, faltered Reviews: swims, bare, willed, raped, married, pouting, pleading, glows, kisses, liberated, seduces, fled, numbed News: fax, widowed, choreograph, raped, graduates, decorating, sobbing, majoring, giggling, married, cries, decorate Summaries: commanding, barks, crack, credited, embezzled, executes, opposing, foils, relying, assassinate, engineered Novels: preached, elected, growled, states, yelled, roared, nominated, voted, grinned, slew, preach, fire, attack Reviews: direct, assuming, elected, defeat, casted, laid, mumbles, rule, directing, flicks, drinking, produce News: coach, pitching, batted, disarm, sacked, benched, fumbled, lightning, averaged, traded, sprained, vetoed Table 4: Most important nouns and verbs for predicting male/female.",
"construction worker' occur for males.",
"Similarly, the term secretary' occurs frequently with females, however phrases like defense secretary', treasury secretary' are positive features for male mentions indicating male-domination of certain fields.",
"Sentence-level Analysis Our approach is the first to find gendered sentences from a huge corpora.",
"We present examples of detected gendered language in Table 6; the first two columns show contexts for which a high level of gendering is estimated (most confident estimates), while the classifier has very low confidence for the examples in third column, indicating gender-neutral use of language.",
"We see several interesting examples, e.g., male-gendered contexts from summaries show that society attributes roles like billionaire computer moguls and FBI-agents to males.",
"The first female gendered example from novels depicts the way in which females are described and portrayed in fiction, which is in stark contrast to male descriptions.",
"enabling high-level analysis of what documents are most, or least, gendered.",
"Table 7 shows such an analysis, where the languages for movie reviews and summaries, genres for novels, and news desks for news are organized by their estimated genderedness.",
"We see that children's history books are more gendered than their literature books.",
"Books related to opera and one-act plays are among the least gendered ones, while those related to war, history, and philosophy use gendered language the most.",
"Movie reviews for movies in Vietnamese, Turkish and Polish are among the most gendered, while Greek and Japanese are the least gendered.",
"We see a similar pattern in movie summaries summaries for movies filmed in Polish and Turkish are more gendered than for movies in Korean or Romanian.",
"Sports is the most gendered category for news articles, while Cultural , Leisure , Society , and Home are among the least gendered ones.",
"The table also contains some unexpected predictions, such as the low gendering of Girls and Young women novels.",
"Sex vs Gender Since the current English language use is mostly limited to binary gender identities (both due to grammar and usage), we treat gender as a binary concept in this work.",
"Inclusion of genderqueer and non-binary identities will require data annotated by humans with sufficient domain knowledge, which was out of scope for this work.",
"We assume that mentions for which our labeling has associated the wrong gender' because of difference in sex/gender identities are sufficiently low in proportion that model is still able to learn relevant signals when trained on large corpora.",
"Facts vs Stereotypes In this work, we do not delineate between factual information (women get pregnant ) and the intentional use of stereotypes (women are sweethearts ).",
"In some domains, such as news, ignoring this difference can be misleading, and exploring approaches that are able to better separate these different biases is important.",
"Extension to New Domains There remain a number of exciting avenues for future work.",
"Although we analyze a variety of domains that differ from each other, our analysis focused on independently investigating each; it may be much more fruitful to compare and contrast the gendered language across multiple domains.",
"When extending this work to other domains like Twitter, blogs, etc., the performance of the system can be affected by various factors like accuracy of NER system for the domain (e.g., it would be lower for tweets) and names to gender mapping (which can vary for different geographies and cultures).",
"We present a concrete implementation and evaluation of our gendered language detector.",
"The main advantages of our pipeline and method are: (1) Flexibility , in application to different domains with minimal manual intervention, (2) Mention-level analysis, instead of article-level analysis in previous works, enabling more granular analysis, and (3) Quantitative measure of the extent of genderedness of context given a mention, allowing large-scale and detailed analyses and comparisons.",
"Our pipeline automatically extracts person mentions from a corpus, and by using an accurate gender predictor, trains a classifier to learn the ways in which language is gendered for that corpus .",
"This automation provides multiple benefits; not only are there no humans in the loop to inject their biases about what is, and is not, gendered language, but further, collection of a large annotated corpus allows us to train sophisticated neural models that are able to capture semantic and syntactic constructions in the language.",
"Evaluation suggests that our model is fairly accurate on this challenging task, and further, allows us to carry out analysis on multiple domains at varying levels of granularity, demonstrating potential applications of this work.",
"The code to support such endeav-ours, and to reproduce the results, is available at https://ucinlp.github.io/GenderQuant .",
"We would like to thank Dheeru Dua, Matt Gardner, Robert L. Logan IV, and the reviewers for their feedback and suggestions.",
"This work is supported in part by Allen Institute for Artificial Intelligence (AI2) and in part by NSF award #IIS-1756023."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"objective",
"other",
"other",
"other"
] |
[
"Recent work in cross-lingual semantic parsing has successfully applied machine translation to localize parsers to new languages.",
"However, these advances assume access to high-quality machine translation systems and word alignment tools.",
"We remove these assumptions and study cross-lingual semantic parsing as a zero-shot problem, without parallel data (i.e., utterance-logical form pairs) for new languages.",
"We propose a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only English-logical form paired data and in-domain natural language corpora in each new language.",
"Our model encourages language-agnostic encodings by jointly optimizing for logical-form generation with auxiliary objectives designed for cross-lingual latent representation alignment.",
"Our parser performs significantly above translation-based baselines and, in some cases, competes with the supervised upper-bound.",
"1 1 Introduction Executable semantic parsing maps a natural language utterance to a logical form (LF) for execution in some knowledge base to return a denotation .",
"The parsing task renders an utterance as a semantically identical, but machine-interpretable, expression grounded in a denotation.",
"The transduction between natural and formal languages has allowed semantic parsers to become critical infrastructure in building human-computer interfaces for question answering, (Berant et al., 2013; Liang, 2016; Kollar et al., 2018), dialog systems (Artzi and Zettlemoyer, 2011), and robotics (Dukes, 2014).",
"Recent advances in semantic parsing have improved accuracy for neural parsers (Jia and Liang, 2016; Dong and Lapata, 2016; Wang et al., 2020a) and examined their generalization capabilities with new dataset challenges (Zhong et al., 2017; Yu 1 Our code and data are available at github.com/tomsherborne/zx-parse .",
"et al., 2018), in addition to considering languages other than English (Duong et al., 2017; inter alia. ).",
"Prior work largely assumes that utterance-logical form training data is parallel in all languages (Jie and Lu, 2014), or must be created with human translation (Susanto and Lu, 2017a).",
"This entry barrier to localization for new languages has motivated the exploration of machine translation (MT) as an economical alternative (Sherborne et al., 2020; Morad-shahi et al., 2020).",
"However, MT can introduce performance-limiting artifacts and struggle to accurately model native speakers (Riley et al., 2020).",
"Additionally, high-quality machine translation is less viable for lower resource languages, further limiting the appeal of MT-based approaches.",
"In this work, we propose a new approach for zero-shot executable semantic parsing .",
"Our method maximizes the success of cross-lingual transfer for a parser, trained on English paired data ( EN LF ), to accurately generate logical forms 4134 from new languages ( X LF ).",
"Our goal is to parse utterances in a new language, l , without observing paired training data for this language, suitable machine translation, or bilingual dictionaries between l and English.",
"Our critical dependencies are a pre-trained language model and utterance-logical form paired data for a source language (i.e., English).",
"Aside from the zero-shot problem which is hard on its own (since paired data is not available for new languages), our semantic parsing challenge is further compounded with the difficulties inherent to structured prediction and the deficiency of copying strategies without gold token-level alignment (Zhu et al., 2020).",
"We conceptualize cross-lingual semantic parsing as a latent representation alignment problem.",
"As illustrated in Figure 1, we wish to encode different languages to an overlapping latent space for the decoder to have any chance at generating accurate logical forms.",
"To achieve this, we train a decoder, conditioned upon encodings from a source language (e.g., English), to generate logical forms and simultaneously train encodings of a new language (e.g., Chinese) to be maximally similar to English.",
"We hypothesize that if latent representations are aligned from a language-agnostic encoder, one can generate accurate logical forms from a new language without semantic parsing training data and thus eliminate the errors outlined in Figure 1.",
"Our approach adopts a multi-task learning paradigm and trains a parser with auxiliary objectives, optimized to converge representations of additional new languages.",
"We encourage language-agnostic representations by jointly optimizing for generating logical forms, reconstructing natural language, and promoting language invariance.",
"Our intuition is that auxiliary losses can be exploited to induce similarity in a multi-lingual latent space.",
"The effect of such alignment is that a decoder, trained only on English, can recognize an encoding from another language and generate the relevant logical form.",
"Similar multi-task approaches have been successful in spoken-language understanding (van der Goot et al., 2021), text simplification (Mallinson et al., 2020; Zhao et al., 2020b), dependency parsing (Ahmad et al., 2019b), and machine translation (Arivazhagan et al., 2019).",
"This work, to our knowledge is the first attempt to devise auxiliary objectives for executable semantic parsing as a zero-shot task.",
"Our framework and hypothesis are also sufficiently flexible for application in additional zero-shot sequence transduction tasks.",
"Our motivation is to improve parsing for non-English languages with maximal resource effi-ciency and minimal external dependencies beyond native-speaker utterances.",
"We, therefore, induce a shared multilingual space without resorting to machine translation (Sherborne et al., 2020; Morad-shahi et al., 2020) and argue that our approach is superior because it",
"(a) nullifies the introduction of translation or word alignment errors and",
"(b) scales to low-resource languages without reliable MT. Experimental results on Overnight (Wang et al., 2015; Sherborne et al., 2020) and a new executable version of MultiATIS++ show that our parser generates more accurate logical forms with a minimized cross-lingual transfer penalty from English to French (FR), Portuguese (PT), Spanish (ES), German (DE), Chinese (ZH), Hindi (HI), and Turkish (TR).",
"Cross-lingual Modeling This area has recently gained increased interest across several natural language understanding settings (Zhao et al., 2020a; Nooralahzadeh et al., 2020) with benchmarks such as XGLUE (Liang et al., 2020) and XTREME (Hu et al., 2020) allowing to study classification and generation tasks for multiple languages.",
"Crosslingual approaches have also been developed for dependency parsing (Tiedemann et al., 2014; Schuster et al., 2019), sentence simplification (Mallinson et al., 2020), and spoken-language understanding (SLU; He et al., 2013; Upadhyay et al., 2018).",
"Pre-training has shown to be widely beneficial for a wide range of cross-lingual models (Devlin et al., 2019; Conneau et al., 2020a).",
"By virtue of being trained on massive corpora, these models purportedly learn an overlapping cross-lingual latent space (Conneau et al., 2020b) but have also been identified as under-trained for some tasks (Li et al., 2021), shown poor zero-shot performance, especially for languages dissimilar to English (Pires et al., 2019), and high variance (Keung et al., 2020).",
"Semantic Parsing Most previous work (Lu, 2014; Susanto and Lu, 2017b,a) has focused on multilingual semantic parsing, i.e., learning from multiple natural languages in parallel, largely af-firming the benefit of high-resource multilingual data and multi-language ensemble training (Jie and Lu, 2014).",
"Shao et al. (2020) further improved cross-lingual similarity with adversarial language 4135 identification across such ensembled training data.",
"Code-switching in multilingual parsing has also been explored through mixed-language training datasets (Duong et al., 2017; Einolghozati et al., 2021).",
"To adapt a parser to new languages, machine translation has been used as a reasonable proxy for in-language data (Sherborne et al., 2020; Morad-shahi et al., 2020).",
"However, machine translation, in either direction can introduce limiting artifacts (Artetxe et al., 2020) with poor generalization due to how translationese training data diverges from gold test utterances (Riley et al., 2020).",
"Zero-shot parsing has primarily focused on cross-domain' challenges to improve generalization across varying query structures and lexicons (Herzig and Berant, 2018; Givoli and Reichart, 2019) or different databases (Zhong et al., 2020; Suhr et al., 2020; Yu et al., 2018).",
"The combination of zero-shot parsing with cross-lingual modeling has also been examined for the UCCA formalism (Hershcovich et al., 2019) and for task-oriented dialogue systems (see below).",
"Dialog Modeling Cross-lingual transfer has been studied in the context of goal-oriented dialog for the spoken language understanding (SLU) tasks of intent classification and slot labeling (i.e., parsing an utterance into a semantic frame identifying the user's intent and its arguments).",
"Recently released multilingual datasets like MultiATIS++ (Xu et al., 2020) and MTOP (Li et al., 2021) have facilitated the study of zero-shot transfer through the combination of pre-training, machine translation, and word alignment (to project annotations between languages).",
"Recent work in this setting (Zhu et al., 2020; Li et al., 2021; Krishnan et al., 2021; Nicosia et al., 2021) identifies a penalty for cross-lingual transfer that neither pre-training nor machine translation can fully overcome.",
"The primary challenge for cross-lingual parsing is learning parameters that can parse an utterance, x , from an unseen test language to an accurate logical form (LF).",
"Typically, a parser trained on language l , or multiple languages { l 1 , . . . , l N } , is only capable for these languages and performs poorly outside this set.",
"For a new language, prior approaches require parallel datasets and models (Jie and Lu, 2014; Haas and Riezler, 2016; Duong et al., 2017).",
"In our work, zero-shot parsing refers to parsing utterances in new languages without paired data during training , For some language, l , there exists no pairing of x l to a logical form, y , except for English.",
"2 This setting also excludes silver-standard training pairs created using machine-translation.",
"As these models have ultimately observed some form of utterance-logical form pairs for each new language, we do not consider such approaches here and refer to Sherborne et al. (2020) as an example of using MT for this task.",
"It might be tempting to approach this problem as a case of fine-tuning a pre-trained (English) decoder for LF generation.",
"Problematically, the output target is expressed in a formally defined language (e.g., SQL or DCS) which models the semantics of questions very differently to natural language (e.g., without presumption or co-operation; Kaplan 1978).",
"Formal languages (Kamp and Reyle, 1993) additionally present artifacts which render fine-tuning challenging such as unfamiliar syntax (e.g., table aliases or explicit recursion) and long output sequences.",
"In practice, we observed fine-tuning leads to poor performance (e.g., < 1% accuracy on all languages), with the model insisting on hallucinating natural language.",
"This is seemingly at odds with adjacent work in dialog modeling, which has found pre-trained decoders to be beneficial (Li et al., 2021).",
"However, SLU requires learning a lightweight label vocabulary compared to the 200+ tokens required in LFs.",
"Additionally, SLU typically maintains output sequences of similar size to natural language inputs (with tightly coupled syntactic compositionality between the two), whereas the syntactic and structural demands of LF generation are largely divorced from the input utterance.",
"In our solution, the model is trained to parse from utterance-logical forms pairs only in English.",
"Other languages are incorporated using auxiliary objectives and data detailed in Section 4.",
"We explore the hypothesis that an overlapping multi-lingual latent space can be learned through auxiliary objectives in tandem with logical form generation (see Figure 2).",
"Our intuition is that introducing these additional losses minimizes cross-lingual variance in latent encoding space by optimizing for language-agnostic representations with high similarity to the source language (i.e., English).",
"Our approach minimizes the cross-lingual transfer penalty such that the zero-shot parser predicts logical forms from test inputs regardless of utterance language.",
"By framing the cross-lingual parsing task as a latent representation alignment challenge, we explore a possible upper bound of parsing accuracy without errors from external dependencies.",
"Section 6 demonstrates that our zero-shot model, using only English paired data and a small additional corpus, can generate accurate logical forms above translation baselines to compete with fully supervised in-language training.",
"We adopt a multi-task sequence-to-sequence model (Luong et al., 2016) which combines logical form generation with two auxiliary objectives.",
"The first is a language identification discriminator and the second is a reconstruction or translation decoder.",
"An overview of our semantic parser is given in Figure 2; we describe each component below.",
"Generating Logical Forms Predicting logical forms is the primary objective for our model.",
"Given an utterance x = ( x 1 , x 2 , . . . , x T ) , we wish to generate logical form y = ( y 1 , y 2 , . . . , y M ) representing the same meaning in a machine-executable language.",
"We model this transduction task using an encoder-decoder neural network (Sutskever et al., 2014) based upon the Transformer architecture (Vaswani et al., 2017).",
"The sequence x is encoded to a latent representation z = ( z 1 , z 2 , . . . , z T ) through Equation (1) using a stacked self-attention Transformer encoder, E , with weights E .",
"z = E ( x | E ) (1) p ( y | x ) = M (cid:89) i =0 p ( y i | y <i , x ) (2) p ( y i | y <i , x ) = soft ( DLF ( y <i | z, DLF )) (3) LLF = (cid:88) ( x, y ) S LF log p ( y | x ) (4) The conditional probability of the output sequence y is expressed in Equation (2) as each token y i is autoregressively generated based upon z and prior outputs, y <i .",
"Equation (3) models distribution p ( y i | y <i , x ) using a Transformer decoder for logical forms, DLF , with associated weights DLF where soft is the softmax function.",
"We predict an output, y , for semantic parsing dataset SLF = { x n , y n } Nn =0 , through the encoder and logical form decoder, { E, DLF } .",
"Equation (4) describes the loss objective minimizing the cross-entropy between y and y .",
"Language Prediction Our first additional objective encourages language-agnostic representations by reducing the discriminability of the source language, l , from z .",
"Equation (5) defines a L anguage P rediction (LP) network to predict l from z using a linear classifier over L training languages: LP ( x ) = W i x + b i (5) where W i RL | z | and b i RL are a weight and bias respectively.",
"We follow the best model from Ahmad et al. (2019b).",
"Equation (6) describes the conditional model for the output distribution where a language label is predicted using the time-average of the input encoding z of length T : p ( l | x ) = soft (cid:32) LP (cid:32) 1 T (cid:88) t z t (cid:33)(cid:33) (6) Finally, Equation (7) describes the objective function for the LP network: LLP = (cid:88) x log p ( l | x ) (7) However, we reverse this gradient in the backward pass before the LP network, to encourage the encoder to produce language invariant representations (Ganin et al., 2016).",
"The LP network is optimized to discriminate the source language from z , but the encoder is now optimized adversarially against this 4137 objective.",
"Our intuition is that discouraging language discriminability in z encourages latent representation similarity across languages, and therefore reduces the penalty for cross-lingual transfer.",
"Generating Natural Language The final objective acts towards both regularization and crosslingual similarity.",
"Motivated by domain-adaptive pre-training (Gururangan et al., 2020), we further adapt the encoder towards question-style utterances from native speakers of each test language lacking task-specific training data.",
"We add an additional Transformer decoder optimized to reconstruct a noisy input from latent representation z , in Equation (1).",
"Utterance, x , is input to the encoder, E , and a separate decoder, DNL , then reconstructs x from z .",
"We follow the denoising objective from Lewis et al. (2020) and replace x with noised input x = N ( x ) with noising function N .",
"The output probability of reconstruction is given in Equation (9) with each token predicted through Equation (10) using decoder, DNL , with weights DNL : z = E ( x | E ) (8) p ( x | x ) = T (cid:89) i =0 p ( x i | x <i , x ) (9) p ( x i | x <i , x )=soft ( DNL ( x <i | z, DNL )) (10) The auxiliary objectives are trained using both the utterances from SLF and monolingual data, SNL = {{ x n } Nn =0 } Ll =0 , in L languages (see Section 5).",
"Submodel, { E, DNL } , predicts the reconstruction of x from x with the following objective: LNL = (cid:88) x log p ( x | x ) (11) In the form described above, this objective requires only unlabeled, monolingual utterances in each target language.",
"However, we can also augment it with a translation component to exploit natural language bi-text between the new language and English (e.g., SNL = {{ x n EN , x nl } Nn =0 } Ll =0 ) to further promote cross-lingual similarity.",
"According to some sampling factor , we randomly choose whether to reconstruct an utterance (as above) or translate to the parallel English utterance (i.e., replace x in Equation (11) with x EN ).",
"training, an English query is encoded and input to all three objectives to express output loss as LLF + LNL + LLP .",
"For new languages without ( x, y ) pairs, the utterance is encoded and input only to the auxiliary objectives for a combined loss as LNL + LLP .",
"During inference, an utterance is encoded and always input to DLF to predict a logical form, y , regardless of test language, l .",
"During the backward pass, each output loss back-propagates the gradient signal from the respective objective function.",
"For the encoder, these signals are combined as: L E = LLF E LP LLP E + NL LNL E (12) = 2 1 + e p 1 (13) where { LP , NL } are loss weightings for auxiliary objectives and is the reversed gradient scheduling parameter from Ganin et al. (2016).",
"The value increments with training progress p , scaled by , according to Equation (13), to limit the impact of noisy predictions during early training.",
"We expect that the parser will adapt and recognize an encoding from an unfamiliar language through our joint training process, and successfully connect new language representations to the logical-form decoder at test time.",
"This sequence-to-sequence approach is highly flexible and may be useful for zero-shot approaches to additional generation tasks (e.g., paraphrasing).",
"Semantic Parsing Datasets Our experiments examine whether our zero-shot approach generalizes across languages and domains.",
"We evaluate performance on a new version of the ATIS dataset of travel queries (Hemphill et al., 1990; Dahl et al., 1994).",
"We align existing English utterances and SQL logical forms from Iyer et al. (2017) to the multi-lingual utterances from the MultiATIS++ dataset for spoken language understanding (Xu et al., 2020).",
"This alignment adds executable SQL queries to utterances in Chinese (ZH), German (DE), French (FR), Spanish (ES), and Portuguese (PT).",
"We use the same 4,473/493/448 dataset split for training/validation/test as Kwiatkowski et al. (2011).",
"We also add to the test set Hindi (HI) and Turkish (TR) utterances from Upadhyay et al. 4138 (2018).",
"3 We can now predict SQL from the ATIS test questions in eight natural languages.",
"The Multi-ATIS++ Japanese set was excluded as the utterance alignment to this language was not recoverable.",
"We also examine Overnight (Wang et al., 2015), an eight-domain dataset covering Basketball , Blocks , Calendar , Housing , Publications , Recipes , Restaurants , and Social Network domains.",
"Overnight comprises 13,682 English utterances paired with DCS logical forms, executable in SEMPRE (Berant et al., 2013), split into 8,754/2,188/2,740 for training/validation/test respectively.",
"This training data exists only in EN and we use the ZH and DE test data from Sherborne et al. (2020) for multilingual evaluation.",
"Given the varying linguistic phenomena across domains (e.g. relative spatial reasoning in Blocks or temporal arithmetic in Calendar ), this dataset presents a harder challenge for cross-lingual transfer.",
"We measure performance with denotation accuracy as all inferred logical forms are executable in some knowledge base.",
"This metric compares the retrieved denotation from the prediction, y , to that from executing the gold-standard logical form.",
"Dataset sizes are outlined in Appendix A. Natural Language Data For the reconstruction objective, we used the MKQA corpus (Longpre et al., 2020), a multi-lingual translation of 10,000 samples from NaturalQuestions (Kwiatkowski et al., 2019).",
"This is suitable for our auxiliary objective as the utterances are native-speaker question surface forms, matching our test set while varying in subject.",
"MKQA is also balanced across new languages to limit overexposure bias to one new language.",
"For bi-text, we use the original English and the professionally translated question as a pair.",
"We also report experiments using a sample of crawled data from ParaCrawl 7.1 (Ban et al., 2020).",
"The sample comprises 10,000 web scraped sentences paired with equivalent English to form bi-text.",
"Note that these samples are mostly declarative sentences and as such do not match the surface form of our test inputs (i.e., questions) and are also not parallel between sampled languages.",
"We contrast this to MKQA to examine how the style of natural language data influences performance.",
"For ATIS experiments, we use 60,000 utterances from each source in languages with training data (EN, FR, PT, ES, DE, ZH).",
"For Overnight, we use 3 Misalignment between ATIS versions result in the test sets containing 442 and 381 utterances for HI and TR respectively.",
"Model Configuration The implementation of ZX-PARSE (see Section 4) largely follows parameter settings from Liu et al. (2020) for Transformer encoder and decoder layers (see Appendix A for details on model configuration).",
"ZX-PARSE requires an encoder model to generate multi-lingual latent representations for all objectives.",
"Our main results use only the encoder component of mBART50 (Tang et al., 2020) and we present experiments using other pre-trained models in Appendix B. We use all pre-trained encoder layers and append one additional learnable layer.",
"All decoders are randomly initialized six-layer stacks.",
"Early experiments found this approach superior to any pretrained decoder initialization.",
"The language predictor follows from Ahmad et al. (2019b) as a single linear classification layer mapping from 1,024 inputs to L output languages.",
"Earlier findings supported that if the LP network is larger, then the reversed gradient signal is too strong and therefore less useful as the LP network can memorize the language.",
"Comparison Models We primarily compare to a Translate-Test back-translation baseline wherein the new language test set is translated to English using Google Translate (Wu et al., 2016) and input to a reference sequence-to-sequence model trained on English.",
"We also compare to Translate-Train, where we use MT from English to generate a proxy dataset in each new language (e.g., French, Portuguese, Spanish, German, Chinese, Hindi and Turkish) to train a monolingual parser.",
"We consider improving upon these minimum effort baselines as a lower bound for justifying our approach.",
"Additionally, we compare to an upper-bound monolingual model trained on professional translations of the new languages.",
"We report results on MultiATIS++ for FR, PT, ES, DE, and ZH (profes-sional translations are not available for Overnight training data).",
"This is the maximum effort strategy that we desire to avoid.",
"Parameters for these reference systems match those outlined above e.g., mBART50 encoder to logical form decoder.",
"Our results are outlined to answer four core questions, with additional ablations in Appendix B. Our findings support the hypothesis that we can minimize the cross-lingual transfer penalty by im-4139",
"proving latent alignment with auxiliary objectives.",
"We also examine the latent space directly and find ZX-PARSE learns more similar representations between languages.",
"Our parser achieves state-of-the-art zero-shot results for all non-English languages in the MultiATIS++ and Overnight benchmarks.",
"Better than Translation?",
"We compare between ZX-PARSE and the upperand lower-bounds in Table 1.",
"Our multi-task approach significantly improves upon Translate-Test for all languages included within the auxiliary objectives ( p < 0 . 01 ).",
"For ATIS, we find that Translate-Train performs below Translate-Test for languages similar to English (FR, ES, PT) but worse for more distant languages (DE, ZH).",
"ZX-PARSE performance improves on Translate-Train for all languages included in reconstruction (EN, FR, PT, ES, DE, ZH), however, the general cross-lingual improvement in-sufficiently extends to additional languages (HI, TR) to perform above baselines.",
"Within ZX-PARSE , French and German demonstrate the strongest zero-shot accuracy +2 .",
"4% and +2 .",
"7% above the monolingual upper bound for ATIS.",
"We do not observe similar improvement for Portuguese or Spanish despite their similarity to English.",
"This may be a result of German and French dominating the pre-training corpora compared to other new languages.",
"(Tang et al., 2020, their Table 6).",
"Our model demonstrates similar significant improvement for Overnight ( p < 0 . 01 ), however, we find lesser gain compared to ATIS.",
"This may be a consequence of the compounded challenge of evaluating eight varied domains of complex linguistic constructs.",
"Here, we find that Translate-Train is a stronger approach than Translate-Test, which may be a consequence of machine-translation direction.",
"Our best approach on German still improves above Translate-Train ( +4 . 0% ), however, we find performance on Chinese to be only marginally improved by comparison ( +0 . 6% ).",
"We also observe some contrast in ZX-PARSE performance related to orthographic similarity to English.",
"Parsing accuracy on Overnight in German is +6 .",
"2% above Chinese, with a similar +9 .",
"1% gap between these same languages for ATIS.",
"Which Objective Matters?",
"Ablations to the model are shown in Table 2, identifying the contributions of different objectives.",
"Model",
"(a) shows that without auxiliary objectives, performance in new languages is generally below Translate-Test.",
"This is unsurprising, as this approach uses only pre-trained cross-lingual information without additional effort to improve similarity.",
"Such efforts 4140 are incorporated in Model",
"(b) using the additional reconstruction decoder.",
"Even without the LP loss, domain targeted adaptation (with translation) improves cross-lingual parsing by an average across new languages of +9 .",
"3% for ATIS and +2 .",
"9% for Overnight.",
"Notably, we identified an optimal ratio of translation to reconstruction of 50% (i.e., = 0 . 5 ).",
"This suggests that both monolingual utterances (for domain-adaptive tuning) and bi-text (for translation) contribute to the utility of our method beyond reliance on one technique.",
"Evaluating the LP objective within Model",
"(c) and",
"(d), we find the reversed gradient successfully reduces language discriminability.",
"For Model",
"(d), language prediction accuracy during training peaks at 93% after 2% progress and subsequently decreases to <8% beyond 10% of training.",
"Language prediction accuracy for the test set is 7.2%.",
"We observe a similar trend for Model",
"(c).",
"Comparing individual objectives, we find the addition of the language predictor alone less helpful than the reconstruction decoder.",
"Comparing Model",
"(a) and",
"(c), we observe a smaller average improvement on new languages of +4 .",
"3% for ATIS and +1 .",
"8% for Overnight.",
"This suggests adaptation towards specific surface form patterns can be more effective here than modeling languages as discrete labels.",
"Considering the combination of objectives in Model",
"(d), we identify cumulative benefit to parsing with both objectives.",
"Compared to Model",
"(a), the full model improves by an average of +16 .",
"3% for ATIS and +9 .",
"9% for Overnight across new languages.",
"Our findings support our claim that latent cross-lingual similarity can be improved using auxiliary objectives and we specifically identify that a combination of approaches yields superior parsing.",
"We suggest that this combination benefits from constructive interference, as the language prediction loss promotes invariance in tandem with multilingual generation tasks adapting the encoder to improve modeling the surface form (e.g., questions from native speakers) of the new language test data.",
"Additional objectives also improve parsing for Hindi and Turkish despite neither being included within auxiliary training data (see HI and TR columns in Table 3).",
"By adapting our latent representation to encourage similarity, we improve parsing accuracy for two typologically diverse languages without explicit guidance.",
"To further examine this, we visualize the MultiATIS++ test set in Figure 3 and observe less discriminable encodings 80 60 40 20 0 20 40 60 80 60 40 20 0 20 40 60 mBART50 80 60 40 20 0 20 40 60 80 60 40 20 0 20 40 60 ZX-Parse EN FR PT ES DE ZH HI TR Figure 3: t-SNE comparison using mBART50 and ZX-PARSE encoders (MultiATIS++ test set).",
"from ZX-PARSE compared to mBART50 .",
"Quantitatively, we find the average cosine distance between the sentence-mean of parallel utterances reduces from 0.58 to 0.47.",
"Similarly, the average token-level symmetric Hausdorff distance (Taha and Hanbury, 2015) between languages reduces from 0.72 to 0.41.",
"This further supports that we learn more similar representations and our method has wider utility beyond explicitly targeted languages.",
"Does Language Style Matter?",
"In Table 3 we examine whether our auxiliary objectives are influ-enced by the style of natural language corpora for reconstruction.",
"We find the use of questions positively improves performance compared to crawled sentences.",
"Using questions either as monolingual utterances (i.e., no translation in DNL ) or with as a bi-text sample (i.e., reconstruction and translation in DNL ) improves above the Translate-Test baseline.",
"We observe modest improvements with ParaCrawl, especially when introducing bi-text into DNL , but this is less consistent across languages.",
"Overall, our results suggest that ZX-PARSE is robust even when question-style data is unavailable but can be particularly effective when adapting towards both new languages and domains.",
"We also examined the influence of language family on performance (see Appendix B) and found that best performance utilizes a linguistically varied ensemble of languages.",
"Omitting either Romance (ES/FR/PT) or Sino-Tibetan (ZH) languages in reconstruction negatively impacts performance.",
"Where Does Improvement Come from?",
"Comparing to Translate-Test, on ATIS, our best model generates 32% fewer ill-formed SQL requests and 24% fewer extraneous queries accessing unrelated tables in the database.",
"Translation can fail when entities are mishandled and our model generates 36% fewer queries with erroneous named entities.",
"For Overnight, gains are strongly related to improved numeracy in the model.",
"Between our full 4141 ATIS Overnight Baselines EN FR PT ES DE ZH HI TR EN DE ZH Translate-Train 55.9 56.1 57.1 60.1 56.1 56.3 45.4 62.2 59.4 Translate-Test 58.2 57.3 57.9 56.9 51.4 52.6 52.7 60.1 48.1 ZX-PARSEMKQA = 0 .",
"model and simplest approach (Model",
"(a) in Table 2), we find more well-formed logical forms account for the largest improvement (32.5% fewer ill-formed SQL queries for ATIS and 35.2% fewer ill-formed -DCS queries for Overnight).",
"This supports our notion in Figure 1 that better latent alignment can minimize cross-lingual penalty.",
"However, improved structure prediction is insufficient to solve this task on its own; 58.7% of remaining errors in the best model are due to mishandled entities with the highest entity errors for Chinese (60.2%) and lowest for French (36.7%).",
"This suggests that aligning entities across languages might be necessary for further improvement.",
"We presented a multi-task model for zero-shot cross-lingual semantic parsing which combines logical form generation with auxiliary objectives that require only modest natural language corpora for localization.",
"Through aligning latent representations, ZX-PARSE minimizes the error from cross-lingual transfer and improves accuracy across languages unseen during training.",
"Although we focused exclusively on executable semantic parsing, our approach is general and potentially relevant for linguistically motivated frameworks such as Abstract Meaning Representation (Banarescu et al., 2013; Damonte and Cohen, 2018) or Discourse Representation Theory (Kamp and Reyle, 1993; Evang and Bos, 2016).",
"In the future, we will investigate a few-shot scenario and study sample efficient cross-lingual transfer by explicitly promoting generalization using techniques such as meta-learning (Finn et al., 2017).",
"A key limitation of our work is the limited coverage of eight higher-resource languages.",
"As such, we are unable to test our approach in a genuinely low-resource scenario.",
"We must also consider the risk of over-generalization to dominant dialects within each language as we lack an evaluation of additional dialects (e.g. our English dataset is representative of American English but not Indian English).",
"We hope that such issues can be addressed with additional data collection.",
"Our training requirements are detailed in Appendix A. We hope our work contributes to further usage and development of singular multilingual models as opposed to learning N monolingual models for N languages.",
"We thank the anonymous reviewers for their feedback and Bailin Wang, Kate McCurdy, and Rui Zhang for insightful discussion.",
"The authors gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1; Sherborne) and the European Research Council (award number 681760; Lapata)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"The standard training algorithm in neural machine translation (NMT) suffers from exposure bias, and alternative algorithms have been proposed to mitigate this.",
"However, the practical impact of exposure bias is under debate.",
"In this paper, we link exposure bias to another well-known problem in NMT, namely the tendency to generate hallucinations under domain shift.",
"In experiments on three datasets with multiple test domains, we show that exposure bias is partially to blame for hallucinations, and that training with Minimum Risk Training, which avoids exposure bias, can mitigate this.",
"Our analysis explains why exposure bias is more problematic under domain shift, and also links exposure bias to the beam search problem, i.e. performance deterioration with increasing beam size.",
"Our results provide a new justification for methods that reduce exposure bias: even if they do not increase performance on in-domain test sets, they can increase model robustness to domain shift.",
"Neural Machine Translation (NMT) has advanced the state of the art in MT (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), but is susceptible to domain shift.",
"Koehn and Knowles (2017) consider out-of-domain translation one of the key challenges in NMT.",
"Such translations may be fluent, but completely unrelated to the input ( hallucinations ), and their misleading nature makes them particularly problematic.",
"We hypothesise that exposure bias (Ranzato et al., 2016), a discrepancy between training and inference, makes this problem worse.",
"Specifi-cally, training with teacher forcing only exposes the model to gold history, while previous predictions during inference may be erroneous.",
"Thus, the model trained with teacher forcing may over-rely on previously predicted words, which would exacerbate error propagation.",
"Previous work has sought to reduce exposure bias in training (Bengio et al., 2015; Ranzato et al., 2016; Shen et al., 2016; Wiseman and Rush, 2016; Zhang et al., 2019).",
"However, the relevance of error propagation is under debate: Wu et al. (2018) argue that its role is overstated in literature, and that linguistic features explain some of the accuracy drop at higher time steps.",
"Previous work has established a link between domain shift and hallucination in NMT (Koehn and Knowles, 2017; Muller et al., 2019).",
"In this paper, we will aim to also establish an empirical link between hallucination and exposure bias.",
"Such a link will deepen our understanding of the hallucination problem, but also has practical relevance, e.g. to help predicting in which settings the use of sequence-level objectives is likely to be helpful.",
"We further empirically confirm the link between exposure bias and the beam search problem', i.e. the fact that translation quality does not increase consistently with beam size (Koehn and Knowles, 2017; Ott et al., 2018; Stahlberg and Byrne, 2019).",
"We base our experiments on German English IWSLT'14, and two datasets used to investigate domain robustness by Muller et al. (2019): a selection of corpora from OPUS (Lison and Tiedemann, 2016) for German English, and a low-resource German Romansh scenario.",
"We experiment with Minimum Risk Training (MRT) (Och, 2003; Shen et al., 2016), a training objective which inherently avoids exposure bias.",
"Our experiments show that MRT indeed improves quality more in out-of-domain settings, and reduces the amount of hallucination.",
"Our analysis of translation uncertainty also shows how the MLE baseline over-estimates the probability of random translations at all but the initial time steps, and how MRT mitigates this problem.",
"Finally, we show that the beam search problem is reduced by MRT.",
"The de-facto standard training objective in NMT is to minimize the negative log-likelihood L ( ) of the training data D 1 :",
"where x and y are the source and target sequence, respectively, y t is the t th token in y , and y <t denotes all previous tokens.",
"MLE is typically performed with teacher forcing, where y <t are ground-truth labels in training, which creates a mismatch to inference, where y <t are model predictions.",
"Minimum Risk Training (MRT) is a sequence-level objective that avoids this problem.",
"Specifi-cally, the objective function of MRT is the expected loss ( risk ) with respect to the posterior distribution: R ( ) = (cid:88) ( x , y ) D (cid:88) y Y ( x ) P ( y | x ; ) ( y , y ) (2) in which the loss ( y , y ) indicates the discrepancy between the gold translation y and the model prediction y .",
"Due to the intractable search space, the posterior distribution Y ( x ) is approximated by a subspace S ( x ) by sampling a certain number of candidate translations, and normalizing: P ( y | x ; , ) = P ( y | x ; ) (cid:80) y (cid:48) S ( x ) P ( y (cid:48) | x ; ) (3) where is a hyperparameter to control the sharpness of the subspace.",
"Based on preliminary results, we use random sampling to generate candidate translations, and following Edunov et al. (2018), do not add the reference translation to the subspace.",
"To verify the effectiveness of our MRT implementation on top of a strong Transformer baseline (Vaswani et al., 2017), we first conduct experiments on IWSLT'14 German English (DE EN) (Cettolo et al., 2014), which consists of 180 000 sentence pairs.",
"We follow previous work for data splits (Ranzato et al., 2016; Edunov et al., 2018).",
"For DE EN, data comes from OPUS (Lison and Tiedemann, 2016), and is comprised of five domains: medical , IT , law , koran and subtitles .",
"We use medical for training and development, and report results on an in-domain test set and the four other domains (out-of-domain; OOD).",
"German Romansh (DE RM) is a low-resource language pair where robustness to domain shift is of practical relevance.",
"The training data is from the Allegra corpus (Scherrer and Cartoni, 2012) ( law domain) with 100 000 sentence pairs.",
"The test domain are blogs , using data from Convivenza 3 .",
"We have access to 2000 sentences for development and testing, respectively, in each domain.",
"We tokenise and truecase data sets with Moses (Koehn et al., 2007), and use shared BPE with 32 000 units (Sennrich et al., 2016).",
"We implement 4 MRT in the Nematus toolkit (Sen-nrich et al., 2017).",
"All our experiments use the Transformer architecture (Vaswani et al., 2017).",
"Following Edunov et al. (2018), we use 1 -BLEU smooth (Lin and Och, 2004) as the MRT loss.",
"Models are pre-trained with the token-level objective MLE and then fine-tuned with MRT.",
"Hyperparameters mostly follow previous work (Edunov et al., 2018; Muller et al., 2019); for MRT, we conduct limited hyperparameter search on the IWSLT'14 development set, including learning rate, batch size, and the sharpness parameter .",
"We set the number of candidate translations for MRT to 4 to balance effectiveness and efficiency.",
"Detailed hyperparameters are reported in the Appendix.",
"For comparison to previous work, we report low-ercased, tokenised BLEU (Papineni et al., 2002) with multi-bleu.perl for IWSLT'14, and cased, deto-kenised BLEU with SacreBLEU (Post, 2018) 5 otherwise.",
"For settings with domain shift, we report average and standard deviation of 3 independent training runs to account for optimizer instability.",
"The manual evaluation was performed by two native speakers of German who completed bilin-3 https://www.suedostschweiz.ch/blogs/ convivenza 4 Code available at https: //github.com/zippotju/Exposure-Bias-Hallucination-Domain-Shift 5 Signature: BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.4.2 inter-annotator intra-annotator annotation P ( A ) P ( E ) K P ( A ) P ( E ) K fluency 0.66 0.38 0.44 0.87 0.42 0.77 adequacy 0.82 0.61 0.54 0.93 0.66 0.79 Table 1: Inter-annotator (N=250) and intra-annotator agreement (N=617) of manual evaluation.",
"gual (German/English) high school or University programs.",
"We collected 3600 annotations in total, spread over 12 configurations.",
"We ask annotators to evaluate translations according to fluency and adequacy.",
"For fluency, the annotator classifies a translation as fluent, partially fluent or not fluent; for adequacy, as adequate, partially adequate or inadequate.",
"We report kappa coefficient ( K ) (Car-letta, 1996) for inter-annotator and intra-annotator agreement in Table 1, and assess statistical signifi-cance with Fisher's exact test (two-tailed).",
"Table 2 shows results for IWSLT'14.",
"We compare to results by Edunov et al. (2018), who use a convolutional architecture (Gehring et al., 2017), and Wu et al. (2019), who report results with Transfomer-base and dynamic convolution.",
"With 34.7 BLEU, our baseline is competitive.",
"We observe an improvement of 0.5 BLEU from MRT, comparable to Edunov et al. (2018), although we start from a stronger baseline (+2.5 BLEU).",
"Table 3 shows results for data sets with domain shift.",
"To explore the effect of label smoothing (Szegedy et al., 2016), we train baselines with and without label smoothing.",
"MLE with label smoothing performs better by itself, and we also found MRT to be more effective on top of the initial model with label smoothing.",
"For DE EN, MRT increases average OOD BLEU by 0.8 compared to the MLE baseline with label smoothing; for DE RM the improvement is 0.7 BLEU.",
"We note that MRT does not consistently improve in-domain performance, which is a first indicator that exposure bias may be more problematic under domain shift.",
"Our OOD results lag slightly behind those of Muller et al. (2019), but note that the techniques employed by them, namely reconstruction (Tu et al., 2017; Niu et al., 2019), subword regularization (Kudo, 2018), and noisy channel modelling (Li and Jurafsky, 2016) are orthogonal to MRT.",
"We leave the combination of these approaches to future work.",
"BLEU results indicate that MRT can improve domain robustness.",
"In this section, we report on additional experiments to establish more direct links between exposure bias and domain robustness, hallucination, and the beam search problem.",
"Experiments are performed on DE EN OPUS data.",
"We manually evaluate the proportion of hallucinated translations on out-of-domain and in-domain test sets.",
"We follow the definition and evaluation by Muller et al. (2019), considering a translation a hallucination if it is (partially) fluent , but unrelated in content to the source text ( inadequate ).",
"We report the proportion of such hallucinations for each system.",
"Results in Table 4 confirm that hallucinations are much more pronounced in out-of-domain test sets (3335%) than in in-domain test sets (12%).",
"MRT reduces the proportion of hallucinations on out-of-domain test sets (N=500 for each system; reductions statistically significant at p < 0 . 05 ) and improves BLEU.",
"Note that the two metrics do not correlate perfectly: MLE with label smoothing has higher BLEU (+1) than MRT based on MLE without label smoothing, but a similar proportion of hallucinations.",
"This indicates that label smoothing increases translation quality in other aspects, while MRT has a clear effect on the number of hallucinations, reducing it by up to 21% (relative).",
"A closer inspection of segments where the MLE system was found to hallucinate shows that some segments were scored higher in adequacy with MRT, others lower in fluency.",
"One example for each case is shown in Table 5.",
"Even the example where MRT was considered disfluent and inadequate actually shows an attempt to cover the source sentence: the source word Ableugner' (denier) is DE EN DE RM system in-domain average OOD in-domain average OOD SMT (Muller et al., 2019) 58.4 11.8 45.2 15.5 NMT (Muller et al., 2019) 61.5 11.7 52.5 18.9 NMT+RC+SR+NC (Muller et al., 2019) 60.8 13.1 52.4 20.7 MLE w/o LS 58.3 ( 0.53) 9.7 ( 0.25) 52.2 ( 0.19) 15.8 ( 0.39) +MRT 58.4 ( 0.39) 10.2 ( 0.26) 52.1 ( 0.08) 15.9 ( 0.28) MLE w/ LS 58.9 ( 0.45) 11.2 ( 0.16) 53.9 ( 0.16) 18.0 ( 0.17) +MRT 58.8 ( 0.36) 12.0 ( 0.29) 53.9 ( 0.12) 18.7 ( 0.09) Table 3: Average BLEU and standard deviation on in-domain and out-of-domain test sets for models trained on OPUS (DE EN) and Allegra (DE RM).",
"mistranslated into dleugner'.",
"We consider this preferable to producing a complete hallucination.",
"Inspired by Ott et al. (2018), we analyse the model's uncertainty by computing the average probability at each time step across a set of sentences.",
"Besides the reference translations, we also consider a set of distractor' translations, which are random sentences from the in-domain test set which match the corresponding reference translation in length.",
"In Figure 1, we show out-of-domain results for an MLE model and multiple checkpoints of MRT fine-tuning.",
"The left two graphs show probabilities for references and distractors, respectively.",
"The right-most graph shows a direct comparison of probabilities for references and distractors for the MLE baseline and the final MRT model.",
"The MLE baseline assigns similar probabilities to tokens in the references and the distractors.",
"Only for the first time steps is there a clear preference for the references over the (mostly random!) distractors.",
"This shows that error propagation is a big risk: should the model make a wrong prediction initially, this is unlikely to be penalised in later time steps.",
"MRT tends to increase the model's certainty at later time steps 6 , but importantly, the increase is sharper for the reference translations than for the distractors.",
"The direct comparison shows a widening gap in certainty between the reference and distractor sentences.",
"7 In other words, producing a hallucination will incur a small penalty at each time step (compared to producing the reference), presumably due to a higher reliance on the source signal, lessening the risk of error propagation and hallucinations.",
"Our analysis shows similar trends on in-domain references.",
"However, much higher probabilities are assigned to the first few tokens of the references than to the distractors.",
"Hence, it is much less likely that a hallucination is kept in the beam, or will overtake a good translation in overall probability, reducing the practical impact of the model's overreliance on its history.",
"8 4.3 Beam Size Analysis Figure 1 shows that with MLE, distractor sentences are assigned lower probabilities than the references at the first few time steps, but are assigned similar, potentially even higher probabilities at later time steps.",
"This establishes a connection between exposure bias and the beam search problem, i.e. the problem that increasing the search space can lead 6 The uncertainty of the baseline is due to label smoothing.",
"to worse model performance.",
"9 With larger beam size, it is more likely that hallucinations survive pruning at the first few time steps, and with high probabilities assigned to them at later time steps, there is a chance that they become the top-scoring translation.",
"We investigate whether the beam search problem is mitigated by MRT.",
"In Table 6, we report OOD BLEU and the proportion of hallucinations with beam sizes of 1, 4 and 50.",
"While MRT does not eliminate the beam search problem, performance drops less steeply as beam size increases.",
"With beam size 4, our MRT models outperform the MLE baseline by 0.5-0.8 BLEU; with beam size 50, this difference grows to 0.6-1.5 BLEU.",
"Our manual evaluation (N=200 for each system for beam size 1 and 50) shows that the proportion of hallucinations increases with beam size, and that MRT consistently reduces the proportion by 11-21% (relative).",
"For the system with label smoothing, the relative increase in hallucinations with increasing beam size is also smaller with MRT (+33%) than with MLE (+44%).",
"9 The beam search problem has previously been linked to length bias (Yang et al., 2018; Murray and Chiang, 2018) and the copy mode (Ott et al., 2018).",
"We consider hallucinations another result of using large search spaces with MLE models.",
"Our results and analysis show a connection between the exposure bias due to MLE training with teacher forcing and several well-known problems in neural machine translation, namely poor performance under domain shift, hallucinated translations, and deteriorating performance with increasing beam size.",
"We find that Minimum Risk Training, which does not suffer from exposure bias, can be useful even when it does not increase performance on an in-domain test set: it increases performance under domain shift, reduces the number of hallucinations substantially, and makes beam search with large beams more stable.",
"Our findings are pertinent to the academic debate how big of a problem exposure bias is in practice we find that this can vary substantially depending on the dataset , and they provide a new justification for sequence-level training objectives that reduce or eliminate exposure bias.",
"Furthermore, we believe that a better understanding of the links between exposure bias and well-known translation problems will help practitioners decide when sequence-level training objectives are especially promising, for example in settings where the test domain is unknown, or where hallucinations are a common problem.",
"Chaojun Wang was supported by the UK Engineering and Physical Sciences Research Council (EP-SRC) fellowship grant EP/S001271/1 (MTStretch).",
"Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727).",
"This project has received support from Samsung Electronics Polska sp.",
"z o.o. Samsung R&D Institute Poland."
] | [
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Online abuse can inflict harm on users and communities, making online spaces unsafe and toxic.",
"Progress in automatically detecting and classifying abusive content is often held back by the lack of high quality and detailed datasets.",
"We introduce a new dataset of primarily English Reddit entries which addresses several limitations of prior work.",
"It (1) contains six conceptually distinct primary categories as well as secondary categories, (2) has labels annotated in the context of the conversation thread, (3) contains rationales and (4) uses an expert-driven group-adjudication process for high quality annotations.",
"We report several baseline models to benchmark the work of future researchers.",
"The annotated dataset, annotation guidelines, models and code are freely available.",
"Social media platforms have enabled unprecedented connectivity, communication and interaction for their users.",
"However, they often harbour harmful content such as abuse and hate, inflicting myriad harms on online users (Waseem and Hovy, 2016; Schmidt and Wiegand, 2017a; Fortuna and Nunes, 2018; Vidgen et al., 2019c).",
"Automated techniques for detecting and classifying such content increasingly play an important role in moderating online spaces.",
"Detecting and classifying online abuse is a complex and nuanced task which, despite many advances in the power and availability of computational tools, has proven remarkably difficult (Vid-gen et al., 2019a; Wiegand et al., 2019; Schmidt and Wiegand, 2017b; Waseem et al., 2017).",
"As Jurgens et al. (2019) argued in a recent review, research has struggled to move beyond the most obvious tasks in abuse detection.' One of the biggest barriers to creating higher performing, more robust, nuanced and generalisable classification systems is the lack of clearly annotated, large and detailed training datasets.",
"However, creating such datasets is time-consuming, complicated and expensive, and requires a mix of both social and computational expertise.",
"We present a new annotated dataset of 25,000 Reddit entries.",
"It contains four innovations that address limitations of previous labelled abuse datasets.",
"First, we present a taxonomy with six conceptually distinct primary categories (Identity-directed, Person-directed, Affiliation-directed, Counter Speech, Non-hateful Slurs and Neutral).",
"We also provide salient subcategories, such as whether personal abuse is directed at a person in the conversation thread or to someone outside it.",
"This taxonomy offers greater coverage and granularity of abuse than previous work.",
"Each entry can be assigned to multiple primary and/or secondary categories (Section 3).",
"Second, we annotate content in context, by which we mean that each entry is annotated in the context of the conversational thread it is part of.",
"Every annotation has a label for whether contextual information was needed to make the annotation.",
"To our knowledge, this is the first work on online abuse to incorporate a deep level of context.",
"Third, annotators provided rationales.",
"For each entry they highlighted the part of the text which contains the abuse (and the relevant parts for Counter Speech and Non-hateful Slurs).",
"Fourth, we provide high quality annotations by using a team of trained annotators and a time-intensive discussion-based process, facilitated by experts, for adjudicating disagreements (Section 4).",
"This work addresses the need for granular and nuanced abusive content datasets, advancing efforts to create accurate, robust, and generalisable classification systems.",
"We report several baseline models to benchmark the work of future researchers (Section 5).",
"The annotated dataset, annotation codebook and code have been made available.",
"1 A full description of the dataset is given in our data statement in the Appendix (Bender and Friedman, 2018).",
"Taxonomies of abuse Taxonomies vary in terms of the scope of abusive behaviours they cover.",
"Some offer categories for abuse against both individuals and groups (Zampieri et al., 2020), others cover only abuse against identities (Davidson et al., 2017; Fortuna and Nunes, 2018; Kiela et al., 2020), against only a single identity, such as misogyny (Anzovino et al., 2018) or Islamophobia (Vidgen and Yasseri, 2019), or only abuse against individuals (Wulczyn et al., 2017).",
"Some research distinguishes between content in different languages or taken from different platforms (Kumar et al., 2018).",
"Waseem et al. (2017) outline two dimensions for characterising online abuse.",
"First, whether it is directed against individuals or groups.",
"Second, whether it is implicit or explicit (also referred to as covert' or overt' (Kumar et al., 2018) and weak' or strong' (Vidgen and Yasseri, 2019)).",
"These two dimensions (strength and target) have been further developed in other studies.",
"Zampieri et al. (2019) use a hierarchical three-level approach to annotation, separating",
"(a) offensive from not-offensive tweets,",
"(b) offensive into targeted and untargeted statements and",
"(c) for targeted statements, identification of what is attacked (group, individual or other).",
"Vidgen et al. (2019a) propose a tripartite distinction, also separating concept-directed' abuse from group-directed and person-directed abuse.",
"However, this is problematic as concept-directed content may be better understood as legitimate critique.",
"Many taxonomies include fine-grained labels for complex subcategories of abuse.",
"Palmer et al. (2020) label implicit varieties of hate, including adjectival nominalization', distancing' and Oth-ering' language.",
"Anzovino et al. (2018) label content for six subtypes of misogyny: discrediting, using stereotypes, objectifying, sexually harassing, threatening violence, dominating or derailing.",
"Sanguinetti et al. (2018) provide annotations for which group is targeted and the linguistic action (i.e., dehumanizing, delegitimizing or aiming to inflict harm).",
"They provide flags for aggressive-1 https://github.com/dongpng/cad_ naacl2021 ness, offensiveness, irony and stereotypes.",
"Sap et al. (2020) provide annotations for social frames' (i.e., biases and stereotypes) about groups.",
"They provide labels for",
"(a) offence (yes/no),",
"(b) whether a group is targeted, and",
"(c) whether the abuse is intentional.",
"Wulczyn et al. (2017) identify different interpersonal abuse, including toxicity, aggression and attacks.",
"Some taxonomies explicitly separate abuse from closely-related but non-abusive forms of online expression.",
"This reflects social scientific insights which emphasize the importance, but also difficulty, of making such distinctions (Rossini, 2019, 2020).",
"Vidgen et al. (2020) distinguish hostility against East Asia from criticism of East Asia, as well as counter speech and discussion of prejudice.",
"Procter et al. (2019) distinguish cyber hate from counter speech, as do Qian et al. (2019) and Mathew et al. (2019), amongst others.",
"Annotation and Data The quality of annotations for abusive datasets has been widely critiqued, and inter-rater agreement scores are often remarkably low.",
"Wulczyn et al. (2017) report an Alpha of 0.45, Sanguinetti et al. (2018) Kappas from k=0.37 for offence to k=0.54 for hate, Gomez et al. (2020) report Kappa of 0.15 in the MMH150 dataset of hateful memes, and Fortuna and Nunes (2018) report a Kappa of 0.17 for a text-only task.",
"In a classification study of prejudice against East Asia, Vidgen et al. (2020) find that 27% of classification errors are due to annotation mistakes.",
"Low agreement is partly because abuse is inherently ambiguous and subjective, and individuals can perceive the same content very differently (Salminen et al., 2019, 2018).",
"Many abusive content datasets use crowdsourced annotations (Zampieri et al., 2019; Fortuna and Nunes, 2018; Davidson et al., 2017).",
"They are cheap and scalable but can be low quality and are often ill-suited to complicated tasks (Sabou et al., 2014).",
"Trained experts with clear guidelines are often preferable for ensuring consistency (Vid-gen and Derczynski, 2020).",
"Whether expertsor crowdsourced annotators are used, a diverse pool is needed as annotators encode their biases, backgrounds and assumptions into their annotations (Sap et al., 2019; Waseem et al., 2017).",
"Most datasets use a simple majority vote over annotations to determine the final labels.",
"However, majority agreement does not guarantee that content is correctly labelled, especially for complex edge-cases.",
"One option is to use a method that adjusts annotators' impact based on their quality, such as MACE (Hovy et al., 2013).",
"However, this may not work well on the most ambiguous content.",
"Group-decision making processes present a promising way of improving annotation quality.",
"Breitfeller et al. (2019) use a collaborative multi-stage process to label micro-aggression and Card et al. (2015) use a similar process for labelling news articles.",
"This ensures more oversight from experts and reflection by annotators on the difficult content.",
"It also provides a feedback loop for annotators to learn from mistakes and improve.",
"A well-established problem with abusive content datasets is that each bit of content is marked up individually, without taking into account any content that came before (Gao and Huang, 2017; Mubarak et al., 2017).",
"This can lead to poor quality annotations when content is ambiguous or unclear without knowing the context.",
"Detection systems which do not account for context are likely to be less applicable in the real-world, where nearly all content appears in a certain context (Seaver, 2015).",
"Pavlopoulos et al. (2020) systematically investigate the role of context in a dataset of Wikipedia comments by providing annotators the parent' before showing them the child' entry.",
"In one experiment at least 5% of the data was affected.",
"In a study of Twitter conversations Procter et al. (2019) label replies to tweets based on whether they agree' or disagree' with the original message.",
"Notwithstanding these studies, further work is needed to better understand the role of context and how abuse emerges within threads, as well as the challenges of detecting deeply contextual content.",
"We present a hierarchical taxonomy of abusive content, which comprises six primary categories and additional secondary categories.",
"It builds on critical social scientific research (Marwick and Miller, 2014; Citron and Norton, 2011; Lenhart et al., 2016), and addresses issues in previous taxonomies, including those provided by Zampieri et al. (2020), Waseem et al. (2017), Founta et al. (2018) and Vidgen et al. (2019a).",
"It offers greater coverage by including three conceptually distinct types of abusive content (Identity-directed abuse, Affiliation-directed abuse and Person-directed abuse) as well as three types of non-abusive content (Neutral, Counter Speech and Non-hateful Slurs).",
"The tax-Entry Abusive Identity-directed abuse Derogation Animosity Threatening Glorification Dehumanization Affiliation-directed abuse Derogation Animosity Threatening Glorification Dehumanization Person-directed abuse Abuse to them Abuse about them Non-abusive Non-hateful Slurs Counter speech Against Identity-directed abuse Against Affiliation-directed abuse Against Person-directed abuse Neutral Figure 1: Primary and Secondary categories.",
"Content which contains a negative statement made against an identity.",
"An identity' is a social category that relates to a fundamental aspect of individ-uals' community, socio-demographics, position or self-representation (Jetten et al., 2004).",
"It includes but is not limited to Religion, Race, Ethnicity, Gender, Sexuality, Nationality, Disability/Ableness and Class.",
"The secondary category comprises five subtypes of identity-directed abuse: Derogation, Animosity, Threatening language, Glorification and Dehumanization.",
"Derogation Language which explicitly attacks, demonizes, demeans or insults a group.",
"Derogation includes representing or describing a group in extremely negative terms and expressing negative emotions about them.",
"Derogation is the basis of most explicit' forms of abuse in existing hateful content taxonomies, although it is often referred to Primary Secondary Example Identity-directed Derogation Muslims cant speak English, they're savages Identity-directed Animosity I dont think black people face any discrimination Identity-directed Threatening Gotta kick those immigrants out... now!",
"with different terms.",
"For instance, Davidson et al. (2017) define hate as content that is derogatory', Waseem and Hovy (2016) include attacks' in their account of hate and Zampieri et al. (2019) insults'.",
"Animosity Language which expresses abuse against a group in an implicit or subtle manner.",
"The lynchpin of this category is that negativity is directed at the group (i.e., there must be some aspect which is discernibly abusive or demeaning about the group in question) but this is not expressed explicitly.",
"Animosity includes undermining the experiences and treatment of groups, ridiculing them, and accusing them of receiving special treatment'.",
"Animosity is similar to the implicit' category used in other taxonomies (Waseem et al., 2017; Vidgen and Yasseri, 2019; Kumar et al., 2018).",
"Threatening language Language which either expresses an intent/desire to inflict harm on a group, or expresses support for, encourages or incites such harm.",
"Harm includes physical violence, emotional abuse, social exclusion and harassment.",
"This is one of the most harmful forms of hateful language (Marwick and Miller, 2014; Citron and Norton, 2011) yet usually it is part of an explicit' hate category (Zampieri et al., 2019; Wulczyn et al., 2017; Waseem and Hovy, 2016) and few datasets have treated it as a separate category, see Golbeck et al. (2017), Anzovino et al. (2018), and Hammer (2014) for exceptions.",
"Dehumanization Language which maliciously describes groups as insects, animals and non-humans (e.g., leeches, cockroaches, insects, germs, rats) or makes explicit comparisons.",
"Dehumanization has been linked with real-world violence and is a particularly important focus for computational work (Leader Maynard and Benesch, 2016; Matsuda et al., 1993), yet is often combined into a broader explicit' category (Palmer et al., 2020; Vidgen et al., 2020; Kiela et al., 2020) and has been insufficiently studied on its own, apart from Mendelsohn et al. (2020).",
"Glorification of hateful entities Language which explicitly glorifies, justifies or supports hateful actions, events, organizations, tropes and individuals (which, collectively, we call entities').",
"It includes denying that identity-based atrocities took place (e.g., Genocide).",
"Glorification is one of the least studied forms of hate computationally, likely because it is more ambiguous, particularly when individuals only express interest in the entities (de Gibert et al., 2018).",
"Content which express negativity against an affiliation.",
"We define affiliation' as a (more or less) voluntary association with a collective.",
"Affiliations include but are not limited to: memberships (e.g. Trade unions), party memberships (e.g. Re-publicans), political affiliations (e.g. Right-wing people) and occupations (e.g. Doctors).",
"The same secondary categories for Identity-directed abuse apply to Affiliation-directed.",
"In some previous taxonomies, affiliations have been mixed in with identities (Founta et al., 2018; Zampieri et al., 2019), although in general they have been excluded as out of scope (e.g. Waseem and Hovy (2016)).",
"Content which directs negativity against an identifiable person, who is either part of the conversation thread or is named.",
"Person-directed abuse includes serious character based attacks, such as accusing the person of lying, as well as aggression, insults and menacing language.",
"Personand Identitydirected forms of abuse are often addressed in separate taxonomies, although in some studies they have been merged into a more general toxic' category (Wulczyn et al., 2017; Golbeck et al., 2017).",
"Recent work have addressed both types of content, recognising that they are conceptually different but often co-occur in the real-world and share syntactical and lexical similarities (Zampieri et al., 2019; Mandl et al., 2019).",
"We provide two secondary categories for person-directed abuse: Abuse at a person who is part of the conversation thread and Abuse about a person who is not part of the conversation thread.",
"The person must be clearly identified, either by their actual name, username or status (e.g. the president of America').",
"To our knowledge, this distinction has not been used previously.",
"Content which challenges, condemns or calls out the abusive language of others.",
"Counter Speech can take several forms, including directly attack-ing/condemning abusive language in unambiguous terms, challenging the original content and calling out' the speaker for being abusive.",
"We use a similar approach to Qian et al. (2019) and Mathew et al. (2019) who also treat counter speech as a relational act that responds to, and challenges, actual abuse.",
"A slur is a collective noun, or term closely derived from a collective noun, which is pejorative.",
"Slurs include terms which are explicitly insulting (e.g. n*gga' or kebabi') as well as terms which implicitly express animosity against a group (e.g. Rainy' or Chad').",
"A slur by itself does not indicate identity-directed abuse because in many cases slurs are not used in a derogatory way but, rather, to comment on, counter or undermine genuine prejudice (Jeshion, 2013) or they have been reclaimed by the targeted group, such as use of n*gga' by black communities (Davidson et al., 2017; Davidson and Weber, 2019).",
"In this category we mark up only the non-hateful use of slurs.",
"Hateful uses of slurs would fall under Identity-directed abuse.",
"Content which does not contain any abuse, Non-hateful Slurs or Counter Speech and as such would not fall into any of the other categories.",
"The low prevalence of online abuse in the wild' (likely as little as 0.1% in English language social media (Vidgen et al., 2019b)) means that most training datasets have used some form of purposive (or directed') sampling to ensure enough entries are in the positive class (Fortuna et al., 2020).",
"However, this can lead to biases in the dataset (Ousid-houm et al., 2020) which, in turn may impact the performance, robustness and fairness of detection systems trained on them (Sap et al., 2019).",
"Notably, the widely-used practice of keyword sampling can introduce topic and author biases, particularly for datasets with a high proportion of implicit abuse (Wiegand et al., 2019).",
"Accordingly, like Qian et al. (2019), we use community-based sampling, selecting subreddits which are likely to contain higher-than-average levels of abuse and a diverse range of abuse.",
"This should lead to a more realistic dataset where the abusive and non-abusive content share similarities in terms of topic, grammar and style.",
"We identified 117 subreddits likely to contain abusive content, which we we filtered to just 16, removing subreddits which (1) had a clear political ideology, (2) directed abuse against just one group and (3) did not have recent activity.",
"187,806 conversation threads were collected over 6 months from 1st February 2019 to 31st July 2019, using the PushShift API (Gaffney and Matias, 2018).",
"We then used stratified sampling to reduce this to 1,394 posts and 23,762 comments (25,156 in total) for annotation.",
"See Data Statement in the Appendix for more information on how the initial 117 subreddits were identified.",
"All posts and comments were annotated.",
"The titles main body of posts were treated separately, resulting in 1,394 post titles, 1,394 post bodies and 23,762 comments being annotated (26,550 entries in total).",
"All entries were assigned to at least one of the six primary categories.",
"Entries could be assigned to several primary categories and/or several secondary categories.",
"The dataset contains 27,494 distinct labels.",
"All entries were first independently annotated by two annotators.",
"Annotators underwent 4 weeks training and were either native English speakers or fluent.",
"See Data Statement in the Appendix for more information.",
"Annotators worked through entire Reddit conversations, making annotations for each entry with full knowledge of the previous content in the thread.",
"All disagreements were surfaced for adjudication.",
"We used a consensus-based approach in which every disagreement was discussed by the annotators, facilitated by an expert with reference to the annotation codebook.",
"This is a time-consuming process which helps to improve annotators' understanding, and identify areas that guidelines need to be clarified and improved.",
"Once all entries were annotated through group consensus they were then reviewed in one-go by the expert to ensure consistency in how labels were applied.",
"This helped to address any issues that emerged as annotators' experience and the codebook evolved throughout the annotation process.",
"In some cases the labels may appear counter-intuitive.",
"For instance, one entry starts ITT: Bernie Sanders is imperfect and therefore is a garbage human being.",
"This might appear like an insult, however the remainder of the statement shows that it is intended ironically.",
"Similarly, use of orange man bad may appear to be an attack against Donald Trump.",
"However, in reality it is supporting Trump by mocking left-wing people who are opposed to him.",
"Nuances such as these only become apparent after multiple reviews of the dataset and through group-based discussions.",
"Targets of abuse For Identity-directed, Affiliation-directed and Non-hateful Slurs, annotators inductively identified targets.",
"Initially, 1,500 targets were identified (including spelling variations), which was reduced to 185 through review and cleaning.",
"All important distinctions, including intersectional identities and specific subgroups and outlooks (e.g., non-gender dysphoric transgender people') were retained.",
"The identities were then grouped into 8 top level categories.",
"The top level categories for Identity-directed abuse include Gender, Ableness/disability and Race.",
"Context For every annotation a flag for context' was given to capture how the annotation was made.",
"If the primary/secondary label was based on just the entry by itself then Current' was selected.",
"If knowledge of the previous content in the conversation thread was required then Previous' was selected.",
"Context was primarily relevant in two ways.",
"First, for understanding who a generic pronoun referred to (e.g., they').",
"Second, to express support for another users' abuse (e.g., Person 1 writes I want to shoot some X' and person 2 responds Go do it!').",
"If this context is not taken into account then the abuse would be missed.",
"In some cases, only the context of a single previous statement was needed to understand an entry (as with the example just given), whereas in other cases several previous statements were required.",
"For Neutral, no label is given for context.",
"For Non-hateful Slurs, only Current' could be selected.",
"Our definition of Counter Speech is relational, and so all Counter Speech require Previous' context.",
"For Affiliation-, Identity-, and Persondirected approximately 25-32% of content were labelled with Previous' context.",
"Rationales For all categories other than Neutral, annotators highlighted the part of the entry related to the category.",
"This is important for Reddit data where some comments are very long; the longest entry in our dataset has over 10k characters.",
"As part of the adjudication process, just one rationale was selected for each entry, giving a single gold standard'.",
"Inter annotator agreement Inter annotator agreement for the primary categories was measured using Fleiss' Kappa.",
"It was moderate' overall (0.583) (Mchugh, 2012).",
"This compares favourably with other abusive content datasets (Gomez et al., 2020; Fortuna and Nunes, 2018; Wulczyn et al., 2017), especially given that our taxonomy contains six primary categories.",
"Agreement was highest for Non-hateful slurs (0.754).",
"It was consistently moderate' for Neutral (0.579), Person (0.513), Affiliation (0.453) and Identity (0.419) but was lower for Counter Speech (0.267).",
"This reflects Counter Speech's low prevalence (meaning annotators were less experienced at identifying it) and the subjective nature of judging whether content counters abuse or is implicitly supportive.",
"One challenge is that if annotators missed a category early on in a thread then they would also miss all subsequent context-dependent entries.",
"hateful Slurs and Neutral entries do not have secondary categories and so only the total is shown.",
"Neutral entries dominate, accounting for 79.8% of the data, followed by Identity-directed abuse which accounts for 9.9%, Affiliation-directed abuse (5.0%), Person-directed abuse (4.0%), Counter Speech (0.8%) and Non-hateful use of slurs (0.5%).",
"Animosity and Derogation are the most frequent secondary categories in Identity-directed and Affiliation-directed abuse, with Threatening language, Dehumanization and Glorification accounting for less than 5% combined.",
"This is unsurprising given the severity of such language.",
"Other training datasets for online abuse generally report similar or slightly higher levels of non-neutral content, e.g., in Gomez et al. (2020) 82% is neutral, in Waseem and Hovy (2016) 68% is not hateful, in both Zampieri et al. (2019) and Vidgen et al. (2020) 67%, and in Founta et al. (2018) 58% is neutral.",
"Data splits For our classification experiments, we exclude entries that are [removed], [deleted] or empty because they were either a blank entry associated with a post title or a entry that only contained an image.",
"We also exclude entries written by two prolific bots (SnapshillBot and AutoModera-tor) and non-English entries, which were identified by langid.py (Lui and Baldwin, 2012) and then manually verified.",
"Entries with an image were included but the image was not used for classification.",
"The dataset used for experiments contains 23,417 entries and is split into a train (13,584; 58%), development (4,526; 19.3%) and test set (5,307; 22.7%).",
"All entries belonging to the same thread are assigned to the same split.",
"A small set of subreddits only occur in either the development or the test set; this allows us to test performance on entries in subreddits that were not included in training.",
"Classification task We automatically classify the primary categories.",
"Due to the low prevalence of Non-hateful Slurs, these are not used as a separate category in the classification experiments.",
"Instead, for the experiments, we re-assign entries with only a Non-hateful Slur label to Neutral.",
"For entries that have a Non-hateful Slur label and at least one other label, we simply ignore the Non-hateful Slur label 2 .",
"1.94% of entries in the training set have more than one primary category.",
"When we exclude Neutral entries (because these entries cannot have another category), this increases to 10.5%.",
"The training data has a label cardinality of 1.02 (Tsoumakas and Katakis, 2007).",
"We thus formulate the task as a multilabel classification problem.",
"It is challenging given the highly skewed label distributions, the influence of context, and the multilabel setup.",
"We compare several popular baseline models.",
"We only use the texts of entries as input.",
"The context of entries (e.g., previous entries in a thread) are 2 This is in-line with our taxonomy, whereby entries assigned to Neutral cannot be assigned to any of the other categories.",
"Logistic Regression (LR) We use Logistic Regression with L2 regularization, implemented using scikit-learn (Pedregosa et al., 2011).",
"There are different approaches to multilabel classification (Boutell et al., 2004; Tsoumakas and Katakis, 2007).",
"One common approach is the Label Powerset method, where a new label is created for each unique label combination.",
"However, this approach is not suitable for our data; many label combinations only have a few instances.",
"Furthermore, classifiers would not be able to recognise unseen label combinations.",
"We therefore use a binary relevance setup, where binary classifiers are trained for each label separately.",
"Because the class distribution is heavily skewed, classes are weighted inversely proportional to their frequencies in the training data.",
"BERT and DistilBERT We finetune the BERT base uncased model (Devlin et al., 2019) with commonly used hyperparameters (see the Appendix).",
"Given BERT's sensitivity to random seeds (Dodge et al., 2020), each setting was run with five different random seeds.",
"Our implementation uses the Hugging Face's Transformers library (Wolf et al., 2019).",
"We use a binary cross entropy loss and encode the labels as multi-hot vectors.",
"Classes are weighted by their ratio of negative over positive examples in the training data.",
"We also finetune DistilBERT (Sanh et al., 2019), a lighter version of BERT trained with knowledge distillation.",
"Evaluation metrics The precision, recall and F1 score for each primary category are reported in Table",
"4. In Table 5, we report micro and macro average F1 scores.",
"Because of the highly skewed class distribution, we favor macro F1 scores.",
"We also report the exact match accuracy (the fraction of entries for which the full set of labels matches).",
"Classifier comparison BERT performs best and achieves a substantial performance improvement over Logistic Regression (Macro F1 of 0.455 vs. 0.343).",
"The performance of DistilBERT is slightly lower, but very close to BERT's performance.",
"With both BERT and DistilBERT there is still much room for improvement on most categories.",
"Note that a majority class classifier which labels everything as Neutral would achieve a high accuracy (0.818) but a low F1 macro score (0.180).",
"There were no clear performance differences between entries from subreddits that were or were not included in the training data.",
"Primary categories Performance differs substantially between the different categories (Ta-ble 4).",
"All classifiers attain high F1 scores on Neutral entries (LR: 0.859, BERT: 0.902); this is expected as the class distribution is highly skewed towards Neutral.",
"Performance is lowest on Counter Speech (LR: 0.042, BERT: 0.091), possibly due to a combination of factors.",
"First, this category has the lowest number of training instances.",
"Second, inter-annotator agreement was lowest on Counter Speech.",
"And third, all Counter Speech annotations are based on previous content in the thread.",
"Error analysis Qualitative analysis shows that the BERT model often misclassifies neutral content which mention identities (e.g., non-misogynistic discussions of women) or contains profanities and aggressive language.",
"It tends to classify Affiliation-and Identity-directed abuse which uses less aggressive language and contains fewer abusive keywords as Neutral.",
"Surprisingly, many of the Person-directed entries which are misclassified as Neutral contain clear signals of abuse, such as profanities and overt aggression.",
"No discernible pattern was observed with Counter Speech which was misclassified as a different category.",
"For this category, the low performance may be attributed mostly to its low frequency in the training data.",
"Context Our benchmark models do not explicitly take into account context for prediction.",
"As expected, all our models are worse at predicting the primary categories of entries where context was required for the annotation.",
"For example, with logistic regression, the recall for Identity-directed abuse is 21.1% for entries where the annotation was based on previous content compared with 46.3% for entries where the annotation is based only on the current content.",
"Similarly, with BERT the recall for Identity-directed abuse increases from 25.3% (Previous') to 60.1% (Current').",
"Secondary categories We compare recall between the secondary categories.",
"For Person-directed abuse, the recall with LR for abuse targeting a person who is not in the thread is substantially lower than for entries that are directed to a person in the thread with (25.2% vs. 35.6%).",
"For BERT and DistilBERT, the performance difference LR DistilBERT BERT P R F1 P R F1 P R F1 Neutral 0.872 0.845 0.859 0.880 0.917 0.898 0.883 0.922 0.902 Identity-directed 0.281 0.398 0.330 0.414 0.473 0.441 0.411 0.510 0.455 Affiliation-directed 0.229 0.395 0.290 0.368 0.450 0.405 0.368 0.481 0.416 Person-directed 0.145 0.304 0.196 0.359 0.404 0.380 0.356 0.488 0.411 Counter Speech 0.032 0.061 0.042 0.083 0.073 0.076 0.107 0.088 0.091 Table 4: Scores per category on the test set.",
"between these two secondary categories is small (e.g., BERT: 48.6% vs. 49.0%).",
"Furthermore, for Identity-directed abuse the recall for animosity (LR: 36.2%, BERT: 45.3%) tends to be lower than the recall for derogation (LR: 49.0%, BERT: 65.9%), which is expected as animosity expresses abuse in an implicit manner and is often more nuanced.",
"The larger difference for BERT vs. logistic regression shows the promise of more advanced models in distinguishing subcategories.",
"For Affiliation-directed abuse, the differences are smaller.",
"Here, the recall for animosity is (unexpectedly) slightly higher (LR: 43.3%, BERT: 49.5%) than for derogation (LR: 36.1%, BERT: 48.0%).",
"Label dependence The multilabel setup of this classification task makes this a challenging problem.",
"All models tend to assign too many labels.",
"For example, DistilBERT predicts only too few labels in 1.17% of the cases, the remainder predicting the right number (91.88%) or too many (6.96%).",
"For BERT, the difference is even higher (1.06% too few; 9.21% too many labels).",
"Dependencies between labels are sometimes violated.",
"In our taxonomy, entries which are Neutral cannot have another label, but our models violate this constraint in many cases.",
"With DistilBERT 3.8% of the entries are classified as Neutral and at least one other class, this is even more so for BERT (5.4%) and (LR: 10.7%).",
"Future work could therefore explore modeling relationships between labels.",
"We have presented a detailed dataset for training abusive content classification systems.",
"It incorporates relevant social scientific concepts, providing a more nuanced and robust way of characterising and therefore detecting abuse.",
"We have also presented benchmark experiments, which show much room for improvement.",
"Our analyses indicate numerous areas to explore further, including creating systems which explicitly model the conversation threads to account for context.",
"Predictive methods could be applied to understand and forecast when a conversation is turning toxic, potentially enabling real-time moderation interventions.",
"More powerful models could also be applied to better distinguish the primary categories and to begin classification of the secondary categories.",
"This could be achieved by also using the images to classify the content, which we did not do.",
"Finally, we would also expect the rationales to be of considerable use in future experiments, both for classification and to understand the annotation process.",
"The current work has several limitations.",
"First, the class distribution is heavily skewed towards the Neutral class and some abusive categories have low frequencies.",
"This better reflects real-world prevalence of abuse but can limit the signals available for classification.",
"Second, inter-annotator agreement was in-line with other research in this domain but could still be improved further, especially with edge case' content.",
"We follow the ACM's Code of Ethics and Professional conduct 3 , as well as academic guidelines for ethically researching activity on social media (Townsend and Wallace, 2017; Williams, 2019).",
"Online abuse poses substantial risk of harm to online users and their communities, and there is a 3 https://www.acm.org/code-of-ethics strong social justification for conducting this work.",
"Dataset collection We used the Pushshift API to collect data from Reddit 4 , which we accessed through the data dumps on Google's BigQuery using R 5 .",
"The Pushshift API is a wrapper which allows large quantities of Reddit data to be accessed reliably and easily (Baumgartner et al., 2020; Gaffney and Matias, 2018).",
"Our collection is consistent with Reddit's Terms of Service.",
"Ethical approval This project was given ethical approval on 18th March 2019, before any research had started, by The Alan Turing Institute (sub-mission C1903-053).",
"Reddit can be considered a public space in that discussion are open and posts are aimed at a large audience.",
"In this way, it differs from a one-to-one or private' messaging ser-vice.",
"When users sign up to Reddit, they consent to have their data made available to third parties, such as academics.",
"Many users are aware of this and choose to use non-identifiable pseudonyms.",
"Existing ethical guidance indicates that in this situation explicit consent is not required from each user (which is often infeasible), provided that harm to users is minimized at all times (Williams, 2019) and no real' quotes are attributed to them in the paper.",
"We follow this guidance and do not provide any direct quotes.",
"The examples given in Table 1 are synthetic.",
"We also minimized how many entries we collected from each user so that each one comprises only a small part of the total dataset.",
"At no point did any of the research team contact any Reddit users, minimizing the risk that any harm could be caused to them.",
"Further, we decided not to review any profile information about the users, substantially minimizing the risk that any personally identifiable information is included in the dataset.",
"Treatment of annotators We used trained annotators that were carefully recruited through the host institution (in line with their HR procedures).",
"Crowdsourced workers were not used.",
"Annotators were carefully supervised with weekly meetings and regular one-to-one discussions.",
"We followed the guidelines provided by Vidgen et al. (2019a) for ensuring annotator welfare during the work.",
"We provided annotators with access to support services throughout the project, including counselling support, although they were not used.",
"Annotators were 4 https://pushshift.io/api-parameters/ 5 https://pushshift.io/ using-bigquery-with-reddit-data/ paid substantially above the living wage.",
"They were paid holiday and all meetings and training time was paid.",
"Research team wellbeing To protect the wellbeing of the research team, we had regular catchup discussions, and made sure that the lead researchers were not exposed excessively to harmful content.",
"We did not post anything about the project whilst it was conducted (to minimize the risk of attracting the attention of malicious online actors) and did not engage with any of the Reddit users or communities being studied.",
"Dataset information and quality We provide a Data Statement in the Appendix, following Bender and Friedman (2018), with full information about the dataset.",
"Baseline models We present baseline classification models in the paper.",
"We have carefully considered how these models could be deployed and believe that this is highly unlikely given their performance.",
"There is a risk of bias in any dataset, and associated models, and we have sought to provide as much information as possible in our dataset, documentation and other artefacts to enable future researchers to investigate these issues.",
"We do not use demographic or identity characteristics in the formation of the dataset.",
"We also do not provide information about individual annotators, only giving the overall profile of the annotation team.",
"The computational time/power involved in creating the baselines was minimal.",
"This work was supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1, particularly the Criminal Justice System theme within that grant, and The Alan Turing Institute.",
"The authors would like to thank Zeerak Waseem and Paul Rttger for their helpful advice, as well as feedback from attendees at the Hertie School's 2020 workshop on online hate regulation."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"other"
] |
[
"Multiple-choice question answering (MCQA) is one of the most challenging tasks in machine reading comprehension since it requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations.",
"Unfortunately, most existing MCQA datasets are small in size, which increases the difficulty of model learning and generalization.",
"To address this challenge, we propose a multi-source meta transfer (MMT) for low-resource MCQA.",
"In this framework, we first extend meta learning by incorporating multiple training sources to learn a generalized feature representation across domains.",
"To bridge the distribution gap between training sources and the target, we further introduce the meta transfer that can be integrated into the multi-source meta training.",
"More importantly, the proposed MMT is independent of backbone language models.",
"Extensive experiments demonstrate the superiority of MMT over state-of-the-arts, and continuous improvements can be achieved on different backbone networks on both supervised and unsupervised domain adaptation settings.",
"Recently, there has been a growing interest in making machines to understand human languages, and a great progress has been made in machine reading comprehension (MRC).",
"There are two main types of MRC task: 1) extractive/abstractive question answering (QA) such as SQuAD (Rajpurkar et al., 2018) and DROP (Dua et al., 2019); 2) multiple-choice QA (MCQA) such as MultiRC (Khashabi et al., 2018) and DREAM (Sun et al., 2019a).",
"Different from extractive/abstractive QA whose answers are usually limited to the text spans exist in the passage, the answers of MCQA may not appear in the text passage and may involve complex Corresponding author.",
"Task in source 2 Task in source 3 Task in source 4 Task in source 1 Source representation MMT representation MMLMTL",
"language inference.",
"Thus, MCQA usually requires more advanced reading comprehension abilities, including arithmetic operation, summarization, logic reasoning and commonsense reasoning (Richard-son et al., 2013; Sun et al., 2019a), and etc.",
"In addition, the size of most existing MCQA datasets is much smaller than that of the extractive/abstractive QA datasets.",
"For instance, all the span-based QA datasets, except CQ (Bao et al., 2016), contain more than 100 k samples.",
"In contrast, the data size of most existing MCQA datasets are far less than 100 k (see Table 1), and the smallest one only contains 660 samples.",
"The above two major challenges make MCQA much more difficult to optimize and generalize, especially for the low resource issue.",
"In order to achieve better performance on downstream NLP tasks, it is inevitable to fine-tune the pre-trained deep language models (Devlin et al., 2019; Raffel et al., 2019; Dai et al., 2019; Liu et al., 2019; Yang et al., 2019) with a large number of supervised target data for reducing the discrepancy between the training source and target data.",
"Due to the low resource nature, the performance of most existing MCQA methods is far from satisfactory.",
"To alleviate such issue in MCQA, one straightforward solution is to merge all available data resources for training (Palmero Aprosio et al., 2019).",
"However, the data heterogeneity of datasets ( e.g., resource domains, answer types and varies diversity of choice size across different MCQA datasets.) hinders the practical use of this strategy.",
"To better discover the hidden knowledge across multiple data sources, we propose a novel framework termed Multi-source Meta Transfer (MMT) .",
"In this framework, we first propose a module named multi-source meta learning (MML) that extends traditional meta learning to multiple sources where a series of meta-tasks on different data resources is constructed to simulate low-resource target task.",
"In this way, a more generalized representation could be obtained by considering multiple source datasets.",
"On the top of it, the meta transfer learning (MTL) is integrated into multi-source meta training to further reduce the distribution gap between training sources and the target one.",
"Different from traditional meta learning that assumes tasks generated from the similar dis-tribution/same dataset, MMT is able to discover the knowledge across different datasets and transfer it into the target task.",
"More importantly, MMT is agnostic to the upstream framework, i.e., it can be seamlessly incorporated into any existing backbone language models to improve performance.",
"Figure 1 briefly illustrates both meta learning and the proposed MMT.",
"Meta learning, a.k.a learning to learn, intends to design models that can learn general data representation and adapt to new tasks with a few training samples (Finn et al., 2017; Nichol et al., 2018).",
"Early works have demonstrated that meta learning is capable of boosting the performance of natural language processing (NLP) tasks, such as named entity recognition (Munro et al., 2003) and grammatical error correction (Seo et al., 2012).",
"Recently, meta learning gains more and more attention.",
"Many works explore to adopt meta learning to address low resource issues in various NLP tasks, such as machine translation (Gu et al., 2018; Sennrich and Zhang, 2019), semantic parsing (Guo et al., 2019), query generation (Huang et al., 2018), emotion distribution learning (Zhao and Ma, 2019), relation classification (Wu et al., 2019; Obamuyide and Vlachos, 2019) and etc.",
"These methods have all achieved good performance due to their powerful data representation ability.",
"Meanwhile, the strong learning capability of meta learning also provides deep models with a better initialization, and boosts deep models fast adaptation to new tasks under both supervised (Qian and Yu, 2019; Obamuyide and Vlachos, 2019) and unsupervised (Sri-vastava et al., 2018) scenarios.",
"Unfortunately, meta learning is seldom studied in multiple-choice question answering in existing methods.",
"To our best knowledge, it is also the first time to extend meta learning into multi-source scenarios.",
"Multiple-choice question answering (MCQA) is a challenging task, which requires understanding the relationships and handle the interactions between passages, questions and choices to select the correct answer (Chen and Durrett, 2019).",
"As one of the hot track of question answering tasks, MCQA has seen a great surge of challenging datasets and novel architectures recently.",
"These datasets are built through considering different contexts and scenes.",
"For instance, Guo et al. (2017) present an open-domain comprehension dataset; Lai et al. (2017) build a QA dataset from examinations, which requires more complex reasoning on questions; and Zellers et al. (2018) introduce a QA dataset that requires both natural language inference and commonsense reasoning.",
"Meanwhile, various approaches have been proposed to address the MCQA task using different neural network architectures.",
"Some works propose to compute the similarity between question and each of the choices through an attention mechanism (Chaturvedi et al., 2018; Wang et al., 2018).",
"Kumar et al. (2016) construct the context embedding for semantic representation.",
"Liu et al. (2018) and Yu et al. (2019) apply the recurrent memory network for question reasoning.",
"Chung et al. (2018) and Jin et al. (2019) further incorporate an attention mechanism into recurrent memory networks for multi-step reasoning.",
"Most existing works only strive to increase the reasoning capability by constructing complex models, but ignore the low resource nature of those available MCQA datasets.",
"Many existing MCQA tasks suffer from the low-resource issue, which requires a special training strategy to tackle it.",
"Recent advance of meta learning shows its advantages in solving the few-shot learning problem.",
"Typically, it can rely on only a very small number of training samples to train a model with good generalization ability (Finn et al., 2017; Nichol et al., 2018).",
"Unfortunately, the existing meta learning algorithms are unable to be applied in our problem setting directly, since they are based on the assumption that the meta tasks are generated from the same data distribution (Fallah et al., 2019).",
"For example, one of the most popular benchmarks is the Mini-ImageNet dataset that was proposed by Lake et al. (2011), and it consists of 100 sub-classes from ImageNet dataset.",
"All the meta tasks generated from the same training dataset have similar properties.",
"In contrast, in our studied problem MCQA, data properties such as answer, question type, and commonsense are greatly vary across the MCQA datasets.",
"Specifically, the passages and questions come from different scenarios (such as exams, dialogues, and stories), and the answering choice contains more complex semantic information than the fixed categories in Mini-ImageNet.",
"Therefore, simply combining all the data resources into one and feeding it into existing meta learning algorithms is not an optimal solution (the experimental results in Figure 5 also support this point).",
"To address the data heterogeneity challenge and cater to the MCQA task, we extend the traditional meta learning method to multiple training sources scenarios, where we fully exploit multiple inter-domain sources to learn more generalized representations.",
"Specifically, multi-source meta learning performs meta learning among multiple sources in sequence, thereby completing one iteration.",
"However, multi-source meta learning alone cannot guarantee the desirable performance due to the data distribution gap between multiple sources and target data.",
"Therefore, transfer learning from multi-sources to target is required.",
"Here we introduce meta transfer learning into each meta learning iteration, which aims at reducing the discrepancy between the learned meta representation from multi-source and target.",
"The proposed multi-source meta transfer (MMT) method consists of two modules: multi-source meta learning (MML) and meta transfer learning (MTL).",
"As shown in Figure 2, the MML contains fast adaptation, meta-model update and target fine-tuning steps; and the MTL performs to transfer the knowledge initialized by MML to the target task.",
"Note that MMT is agnostic to backbone models, i.e., it can be seamlessly incorporated into any stronger backbone to boost performance.",
"In this work, we select pre-trained BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) as the backbone for MMT.",
"Generally, MMT first learns meta features from multiple sources of inputs such that those features could be mapped into a latent representation space.",
"Then, the fine-tuning step performs to reduce the representation gap between different sources and the meta representation.",
"Finally, MTL is applied to transfer the well-initialized meta representations to the target task.",
"The details of MMT are summarized in Algorithm 1, where the procedures of MML and MTL are presented in lines 2 16 and lines 17 21 , respectively.",
"In MML, we sequentially sample data to construct the tasks in meta learning from multiple source distributions { p s ( ); s S } , where S denotes the sources index set.",
"Note that the support-tasks and query-tasks, in one iteration of MML, should be sampled simultaneously to satisfy the same distribution requirement.",
"The learning rates for each of the learning modules are different, where denotes the learning rate for fast adaptation module, is utilized for both meta-model updating and target fine-tuning, and represents the learning rate for MTL.",
"Moreover, the parameter of MMT is initialized from the backbone language model, i.e., BERT, RoBERTa.",
"In the sequence, we introduce each step in multi-source meta learning (MML) module.",
"The first step is fast adaptation (lines 4 8 ), which aims to learn the meta information from support-tasks si .",
"The task-specific parameter (cid:48) is updated by (cid:48) = L si ( f ( )) , (1) where the gradient L si ( f ( )) is computed by the cost function L si ( f ( )) with respect to model parameter .",
"The second step is meta-model update (line 9 ), where its cost function, (cid:80) si p s ( ) L si ( f ( (cid:48) )) , is calculated with respect to (cid:48) , and it is adopted to evaluate the performance of fast adaptation on the corresponding newly sampled query-tasks ( si ).",
"It is worth noting that f ( (cid:48) ) is an implicit function of (see Equation 1), and the second-order Hessian gradient matrix is required for the gradient computation (Nichol et al., 2018).",
"However, the use of second derivatives is computationally expensive, so we employ a first-order approximation (Obamuyide and Vlachos, 2019) to update the meta-model gradient by = (cid:88) si p s ( ) L si ( f ( (cid:48) )) .",
"The last step of MML is target fine-tuning (lines 10 14 ).",
"Although the learnt meta representations carry sufficient semantic knowledge and are well generalized, the data distribution discrepancy between meta representation and target still exists.",
"This fine-tuning step is utilized to reduce the distance between the meta representation and target task on the latent representation space.",
"Generally, all the steps in MML are sequentially conducted until the meta-model converges.",
"After performing MML, the meta transfer learning (MTL) module will be applied upon the learnt meta representations for the final transfer learning on target data.",
"In this section, we extend MMT to the unsupervised domain adaptation setting, where no labeled data from the target domain will be given.",
"In this Algorithm 1: The procedure of MMT.",
"setting, the difficulty of unsupervised domain adaptation arises due to the different number of choices between source and target datasets.",
"This issue hinders the pre-trained model to be applied to the target task whose choices differ from the source task, i.e., only the knowledge of feature encoders are transferable.",
"To address this issue, unsupervised MMT constructs the support/query-tasks by sampling, which makes the choice number of tasks in the source equal to the target task.",
"With this manner, the unsupervised MMT is able to transfer the knowledge of both feature encoders and classifier to the target task.",
"Some prior works (Chung et al., 2018) also investigated on the unsupervised transfer learning in QA, but they did not well solve the category difference issue exists in multi-sources learning.",
"To the best of our knowledge, we are the first to apply meta learning to address knowledge transfer issue between tasks with different choices in the unsupervised domain adaptation setting.",
"Next, we term our proposed method as unsupervised MMT in short.",
"The framework of unsupervised MMT is shown in Figure",
"3. A specific source is pre-trained, as an initial state of meta model, to reduce the optimization cost of MMT learning without prior information.",
"With this initial state, unsupervised MMT conducts meta learning by the steps of fast adaption and meta-model update iteratively.",
"Correspondingly, the training of unsupervised MMT is implemented by removing the fine-tuning procedures (lines 10-14 and lines 17-21) in Algorithm",
"1. By this manner, unsupervised MMT shortened the target representation discrepancy from the specific transferred representation to a generalized meta representation.",
"Moreover, unsupervised MMT fast adapts to category variable tasks without supervised fine-tuning, which relaxes the fixed-category constraint in transfer learning.",
"Source selection is a prerequisite step for MMT.",
"Due to the data heterogeneity of different sources, the performance of meta learning may drop if we consider some undesirable data sources in training.",
"In other words, these undesirable or called dis-similar data sources will cause negative transfer when their distribution is far away from the target one.",
"To eliminate such drawback, we may consider those similar datasets from all the available data sources.",
"In the experiments, we also evaluate the transfer performance of the all source datasets on the target task.",
"The more similar of source to target data, the better improvements can be achieved through MMT on the target tasks.",
"Therefore, we use the transfer performance as a guidance for the sequential multi-source meta transfer training, i.e., learns from dissimilar sources to a similar one.",
"We conduct experiments to evaluate the performance of MMT on the following MCQA benchmark datasets.",
"DREAM (Sun et al., 2019a) is a dialogue-based dataset designed by education experts to evaluate the English level of nonnative English speakers.",
"It focuses on multi-tune multi-party dialogue understanding, which contains various types of questions, like summary, logic, arithmetic, commonsense, etc.",
"MCTEST (Richardson et al., 2013) is a fictional stories dataset which aims to evaluate open-domain machine comprehension.",
"The stories contain open domain topics, and the questions and choices are created by crowd-sourcing with strict grammar, quality guarantee.",
"RACE (Lai et al., 2017) is a dataset about passage reading comprehension, which collected from middle/high school English examinations.",
"Human experts design the questions, and the passages cover various categories of human articles: news, stories, advertisements, biography, philosophy, etc.",
"SemEval-2018-Task11 (Ostermann et al., 2018) consists of scenario-related narrative text and various types of questions.",
"The goal is to evaluate the machine comprehension for commonsense knowledge.",
"SWAG (Zellers et al., 2018) is a dataset about rich grounded situations, which is constructed de-biased with adversarial filtering and explores the gap between machine comprehension and human.",
"The statistics of DREAM, MCTEST, RACE, SemEval-2018-Task11 (SemEval) and SWAG are summarized in Table",
"1. Name DREAM RACE MCTEST SemEval SWAG Type Dialogue Exam Story Narrative Text Scenario Text Ages 15+ 12-18 7+ -Generator Expert Expert Crowd.",
"To demonstrating the versatility of MMT, we adopt both BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) as the backbone.",
"Due to the resource limitation, the maximal sequence input lengths of BERT and RoBERTa can only be set as 512 and 256, respectively.",
"For all datasets, the model optimization is performed by Adam (Kingma and Ba, 2014), the initial learning rate of fast adaptation is set to 1 e 3 , and the rest ones are set to 1 e 5 .",
"The results of MCQA under supervised setting are summarized in Table",
"2. Note that we reproduce the results of BERT-Base and RoBERTa-Large on the benchmark datasets in our experiment setting for fair comparison.",
"From the results, we can see that MMT(RoBERTa) achieves the best performances overall benchmark datasets and outperforms current SOTAs with significant margins (i.e., from 5% to 13% ).",
"Second, MMT is able to boost up performance over different pre-trained language models.",
"While, the weaker backbone network is, the better improvement MMT can achieve.",
"For example, the MMT(BERT-Base) improves BERT-Base over 14% on MCTEST.",
"In contrast, MMT(RoBERTa) only achieves 1 .",
"54% on MCTEST.",
"The performance difference between MMT(RoBERTa) and MMT(BERT-Base) is mainly related to the performance of backbone itself and the scale of backbone parameter in MMT optimization.",
"We also want to point out that one of the advantages for MMT is backbone-free, which indicates that its performance can be improved progressively with the advance of language models.",
"In this experiment, we further evaluate the performance of MMT under the unsupervised domain adaptation, where no labeled data from the target domain will be available.",
"We use BERT-Base as the backbone, and the model is trained on SWAG and RACE training sources, which is termed as unsupervised MMT(S+R).",
"We also compare it with other SOTAs as well as some transfer learning baselines TL( ).",
"For example, TL(R-S) denotes that BERT-Base is first fine-tuned in sequence on RACE and SWAG, and then test on MCTEST.",
"The results of MCTEST are summarized in Table",
"3. From the results, we observe that the unsupervised MMT significantly outperforms other unsupervised domain adaptation methods, e.g., MemN2N (Chung et al., 2018) and QACNN (Chung et al., 2018) by a large margin.",
"Moreover, unsupervised MMT can beat some supervised methods, such as BERT-Base, IMC (Yu et al., 2019), even without any labeled data from Method Sup.",
"the target domain.",
"For a more fair comparison, we also create several transfer learning baselines that can utilize multiple training sources such as TL(R-S) and TL(S-R).",
"From the results, we can conclude that unsupervised MMT is a better solution to make full use of multiple training sources than sequential transfer learning.",
"Similar observations hold on SWAG dataset.",
"Reported in Table 4, unsupervised MMT outperforms other methods significantly.",
"Note we follow the same setting in KagNet (Lin et al., 2019) that only the development set of SWAG is evaluated.",
"We conduct ablative experiments to analyze the two modules of MMT, i.e., multi-source meta learning (MML) and meta transfer learning (MTL).",
"The MTL is the transfer learning module specifically designed for MML, and TL denotes the traditional transfer learning without MML.",
"The experiments are based on BERT-Base model, and all the results are reported in Table",
"5. Dream Dev Test BERT-Base 60.05 61.58 +MML(M) 49.85 52.87 +MML(R) 49.56 51.69 +MML(M R) 29.60 29.20 +TL(M) 60.31 60.14 +TL(R) 68.72 67.72 +TL(R-M) 68.97 67.38 +TL(M+R) 68.61 68.15 +MMT(M) 67.99 68.54 +MMT(R) 68.04 68.69 +MMT(M R) 61.72 60.12 MMT(M+R) 68.38 68.89 Table 5: Ablation study of MMT on DREAM.",
"In the first experiments, we present the results of the MML module.",
"When the input source for MML is a single source, MML downgrades to the traditional meta learning.",
"From the results, we observe that MML fine-tuned on MCTEST (MML(M)) is better than that on RACE (MML(R)), which is caused by the large difference between the RACE and DREAM datasets.",
"We also compare the baseline that simply combines RACE and MCTEST datasets to be one large training source, denoted by MML(M R), dramatically drops the performance and only achieves 29 .",
"20% on DREAM dataset, which is 23 .",
"67% lower than that of MML(M).",
"This suggests that a simple combination of the two different training datasets for meta training is not a good choice.",
"For the transfer learning (TL) module, we can observe that the performance improvement is more significant by transferring knowledge from RACE to DREAM, compared to that from MCTEST.",
"In addition, TL(R-M) also benefits from fine-tuning on RACE and MCTEST sequentially, and achieves better results.",
"With the help of MTL, MMT further boosts the performance on DREAM and outperforms both MML and TL baselines.",
"For instance, MMT(M) outperforms MML(M) and TL(M) with 15 .",
"67% and 8 .",
"40% , respectively.",
"Moreover, MMT is also helpful in alleviating the overfitting issue that exists in TL baselines.",
"The results of development set for TL( ) are higher than the test set, which indicates the poor generalization ability of TL( ).",
"Fortunately, MMT( ) is able to address this issue.",
"The MMT(R+M) that is trained on both RACE and MCTEST in meta learning manner, achieves the best results in all evaluated methods.",
"Source selection is a prerequisite step for MMT.",
"In previous experiments, we assume that training resources are given without selection.",
"Due to the data heterogeneity of different sources, the performance of meta learning may drop if we incorporate some undesirable data sources in training.",
"In this experiment, we evaluate the transferability between different datasets and further give the suggestion on the source selection for MMT.",
"The results are summarized in Figure",
"4. In Figure, the X-axis denotes the source, and Y-axis denotes the target.",
"The values in the boxes indicate transferability from source to the target data in terms of accuracy.",
"For example, 14 denotes transferring RACE to the target MCTEST will obtain 14% accuracy improvement over that only trained on the MCTEST.",
"The negative value in the transferability matrix suggests the negative transfer.",
"There is no source that can be used to improve the performance of SWAG effectively.",
"In MMT, we employ this transferability matrix to guide the source selection for MML training.",
"Specifically, in supervised MMT, we only choose those training sources with the significant positive transfer.",
"In unsupervised MMT, the source with the highest score is selected to be the initial state.",
"To verify the impact of different dataset to MMT, we further study the improvement on target SemEval by training with different sources.",
"The results is shown in Figure",
"5. The performance of SemEval drops when we incorporate DREAM and SWAG into training.",
"Recall the transferability matrix in Figure 4, the DREAM and SWAG datasets show little help in improving the performance on SemEval compared to RACE and MCTEST.",
"In summary, more source data do not guarantee better performance.",
"Only the similar source data will be beneficial for multi-source meta learning.",
"In this work, we propose a novel method named multi-source meta transfer for multiple-choice question answering on low resource setting.",
"Our method considers multiple sources meta learning and target fine-tuning into a unified framework, which is able to learn a general representation from multiple sources and alleviate the discrepancy between source and target.",
"We demonstrate the superiority of our methods on both supervised setting and unsupervised domain adaptation settings over the state-of-the-arts.",
"In future work, we explore to extend this approach for other low resource tasks in NLP.",
"The paper is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"other"
] |
[
"A neural machine translation (NMT) system is expensive to train, especially with high-resource settings.",
"As the NMT architectures become deeper and wider, this issue gets worse and worse.",
"In this paper, we aim to improve the efficiency of training an NMT by introducing a novel norm-based curriculum learning method.",
"We use the norm (aka length or module) of a word embedding as a measure of 1) the difficulty of the sentence, 2) the competence of the model, and 3) the weight of the sentence.",
"The norm-based sentence difficulty takes the advantages of both linguistically motivated and model-based sentence difficulties.",
"It is easy to determine and contains learning-dependent features.",
"The norm-based model competence makes NMT learn the curriculum in a fully automated way, while the norm-based sentence weight further enhances the learning of the vector representation of the NMT.",
"Experimental results for the WMT'14 English German and WMT'17 ChineseEnglish translation tasks demonstrate that the proposed method outperforms strong baselines in terms of BLEU score (+1.17/+1.56) and training speedup (2.22x/3.33x).",
"The past several years have witnessed the rapid development of neural machine translation (NMT) based on an encoderdecoder framework to translate natural languages (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015).",
"Since NMT benefits from a massive amount of training data and works in a cross-lingual setting, it becomes much hungrier for training time than other natural language processing (NLP) tasks.",
"Based on self-attention networks (Parikh et al., 2016; Lin et al., 2017), Transformer (Vaswani et al., 2017) has become the most widely used architecture for NMT.",
"Recent studies on improving Transformer, e.g. deep models equipped with up to 30-layer encoders (Bapna et al., 2018; Wu et al., 2019; Wang et al., 2019; Zhang et al., 2019a), and scaling NMTs which use a huge batch size to train with 128 GPUs (Ott et al., 2018; Edunov et al., 2018), face a challenge to the efficiency of their training.",
"Curriculum learning (CL), which aims to train machine learning models better and faster (Bengio et al., 2009), is gaining an intuitive appeal to both academic and industrial NMT systems.",
"The basic idea of CL is to train a model using examples ranging from easy to difficult in different learning stages, and thus the criterion of difficulty is vital to the selection of examples.",
"Zhang et al. (2018) summarize two kinds of difficulty criteria in CL for NMT: 1) linguistically motivated sentence difficulty, e.g. sentence length, word frequency, and the number of coordinating conjunctions, which is easier to obtain (Kocmi and Bojar, 2017; Platanios et al., 2019); 2) model-based sentence difficulty, e.g. sentence uncertainties derived from independent language models or the models trained in previous time steps or epochs, which tends to be intuitively effective but costly (Zhang et al., 2017; Kumar et al., 2019; Zhang et al., 2019b; Zhou et al., 2020).",
"In this paper, we propose a novel norm-based criterion for the difficulty of a sentence, which takes advantage of both model-based and linguistically motivated difficulty features.",
"We observe that the norms of the word vectors trained on simple neural networks are expressive enough to model the two features, which are easy to obtain while possessing learning-dependent features.",
"For example, most of the frequent words and context-insensitive rare words will have vectors with small norms.",
"Unlike existing CL methods for NMT, relying on a hand-crafted curriculum arrangement (Zhang et al., 2018) or a task-dependent hyperparameter (Platanios et al., 2019), the proposed norm-based model competence enables the model to arrange the curriculum itself according to its ability, which is beneficial to practical NMT systems.",
"We also introduce a novel paradigm to assign levels of difficulty to sentences, as sentence weights, into the objective function for better arrangements of the curricula, enhancing both existing CL systems and the proposed method.",
"Empirical results for the two widely-used benchmarks show that the proposed method provides a significant performance boost over strong baselines, while also significantly speeding up the training.",
"The proposed method requires slightly changing the data sampling pipeline and the objective function without modifying the overall architecture of NMT, thus no extra parameters are employed.",
"NMT uses a single large neural network to construct a translation model that translates a source sentence x into a target sentence y .",
"During training, given a parallel corpus D = {(cid:104) x n , y n (cid:105)} Nn =1 , NMT aims to maximize its log-likelihood: = L ( D ; 0 ) = arg max 0 N (cid:88) n =1 log P ( y n | x n ; 0 ) (1) where 0 are the parameters to be optimized during the training of the NMT models.",
"Due to the intractability of N , the training of NMT employs mini-batch gradient descent rather than batch gradient descent or stochastic gradient descent , as follows: B 1 , , B t , , BT = sample( D ) (2) = L ( BT ; L ( BT 1 ; L ( B 1 , 0 ))) (3) where T denotes the number of training steps and B t denotes the t th training batch.",
"In the training of the t th mini-batch, NMT optimizes the parameters t 1 updated by the previous mini-batch.",
"CL supposes that if mini-batches are bucketed in a particular way (e.g. with examples from easy to difficult), this would boost the performance of NMT and speed up the training process as well.",
"That is, upgrading the sample( ) to B 1 , , B t , , B T = sample ( D ) (4) where the order from easy to difficult (i.e. B 1 B T ) can be: 1) sentences with lengths from short to long; 2) sentences with words whose frequency goes from high to low (i.e. word rarity); and 3) uncertainty of sentences (from low to high uncertainties) measured by models trained in previous epochs or pre-trained language models.",
"Table 1 shows the sentences of the training curricula provided by vanilla Transformer and the proposed method.",
"Most NLP systems have been taking advantage of distributed word embeddings to capture the syntactic and semantic features of a word (Turian et al., 2010; Mikolov et al., 2013).",
"A word embedding (vector) can be divided into two parts: the norm and the direction: w = || w || (cid:124)(cid:123)(cid:122)(cid:125) norm w || w || (cid:124) (cid:123)(cid:122) (cid:125) direction (5) In practice, the word embedding, represented by w , is the key component of a neural model (Liu et al., 2019a,b), and the direction w || w || can also be used to carry out simple word/sentence similarity and relation tasks.",
"However, the norm || w || is rarely considered and explored in the computation.",
"Surprisingly, the norm which is simply derived from a single model parameter, can also capture delicate features during the optimization of a model.",
"Schakel and Wilson (2015) observe that in the word embedding model (Mikolov et al., 2013), the word vector norm increases with a decrease of the word frequency, while polysemous words, such as May, tend to have an average norm weighted over its various contexts.",
"Wilson and Schakel (2015) further conduct controlled experiments on word vector norm and find that besides the word frequency, the diversities of the context of the word are also a core factor to determine its norm.",
"The vector of a context-insensitive word is assigned a higher norm.",
"In other words, if a word is usually found in specific contexts, it should be regarded as a significant word (Luhn, 1958).",
"The word embedding model can exactly assign these significant words higher norms, even if some of them are frequent.",
"The sentences consisting of significant words share fewer commonalities with other sentences, and thus they can also be regarded as difficult-to-learn examples.",
"Figure 1 shows the relationship between the word vector norm and the word frequency in the English data of the WMT'14 EnglishGerman translation task.",
"The results stay consistent with prior works (Wilson and Schakel, 2015), showing that the rare words and significant words obtain a high norm from the word embedding model.",
"Motivated by these works and our preliminary experimental results, we propose to use the word vector norm as a criterion to determine the difficulty of a sentence.",
"Specifically, we first train a simple word embedding model on the training corpus, and then obtain an embedding matrix E w2v .",
"Given a source sentence x = x 1 , , x i , , x I , it can be mapped into distributed representations x 1 , , x i , , x I through E w2v .",
"The norm-based sentence difficulty is calculated as d ( x ) = I (cid:88) i =1 || x i || (6) Long sentences and sentences consisting of rare words or significant words tend to have a high sentence difficulty for CL.",
"The proposed norm-based difficulty criterion has the following advantages: 1) It is easy to compute since the training of a simple word embedding model just need a little time and CPU resources; 2) Linguistically motivated features, such as word frequency and sentence length, can be effectively modeled; 3) Model-based features, such as learning-dependent word significance, can also be efficiently captured.",
"Besides finding an optimal sentence difficulty criterion, arranging the curriculum in a reasonable order is equally important.",
"As summarized by Zhang et al. (2019b), there are two kinds of CL strategies: deterministic and probabilistic.",
"From their observations, probabilistic strategies are superior to deterministic ones in the field of NMT, benefiting from the randomization during mini-batch training.",
"Without loss of generality, we evaluate our proposed norm-based sentence difficulty with a typical probabilistic CL framework, that is, competence-based CL (Platanios et al., 2019).",
"In this framework, a notion of model competence is defined which is a function that takes the training step t as input and outputs a competence value from 0 to 1: 1 c ( t ) (0 , 1] = min(1 , (cid:115) t 1 c 20 t + c 20 ) (7) where c 0 = 0 .",
"01 is the initial competence at the beginning of training and t is a hyperparameter determining the length of the curriculum.",
"For the sentence difficulty, they use cumulative density function (CDF) to transfer the distribution of sentence difficulties into (0 , 1] : d ( x n ) (0 , 1] = CDF( { d ( x n ) } Nn =1 ) n (8) 1 We introduce the square root competence model since it has the best performance in Platanios et al. (2019).",
"The score of difficult sentences tends to be 1, while that of easy sentences tends to be 0.",
"The model uniformly samples curricula whose difficulty is lower than the model competence at each training step, thus making the model learn the curriculum in a probabilistic way.",
"One limitation of competence-based CL is that the hyperparameter t is task-dependent.",
"In detail, for each system, it needs to first train a vanilla baseline model and then use the step reaching 90% of its final performance (BLEU score) as the value of the length hyperparameter.",
"As we know, training an NMT baseline is costly, and arbitrarily initializing the value might lead to an unstable training process.",
"To alleviate this limitation and enable NMT to learn curricula automatically without human interference in setting the hyperparameter, it is necessary to find a way for the model to determine the length of a curriculum by itself, according to its competence, which should be independent of the specific task.",
"To this aim, we further introduce a norm-based model competence criterion.",
"Different from the norm-based difficulty using the word vector norm, the norm-based model competence uses the norm of the source embedding of the NMT model E nmt : m t = || E nmt t || (9) where m t denotes the norm of E nmt at the t th training step, and we write m 0 for the initial value of the norm of E nmt .",
"This proposal is motivated by the empirical results shown in Figure 2, where we show the BLEU scores and the norms of the source embedding matrix at each checkpoint of a vanilla Transformer model on the WMT'14 EnglishGerman translation task.",
"We found the trend of the growth of the norm m t to be very similar to that of the BLEU scores.",
"When m t stays between 15K to 20K, which is about from twice to three times larger than the initial norm m 0 , both the growth of the norm and that of the BLEU score have slowed down.",
"It shows strong clues that m t is a functional metric to evaluate the competence of the model, and thus we can avoid the intractability of t in Equation 7: c ( t ) = min(1 , (cid:115) ( m t m 0 )1 c 20 m m 0 + c 20 ) (10) where m is a task-independent hyperparameter to control the length of the curriculum.",
"With this criterion, the models can, by themselves, fully automatically design a curriculum based on the feature (norm).",
"At the beginning of the training, there is a lower m t , so the models tend to learn with an easy curriculum.",
"But with an increase of the norm m t , more difficult curricula will be continually added into the learning.",
"In competence-based CL, the model uniformly samples sentences whose difficulty level is under the model competence, and then learns with the samples equally.",
"As a result, those simple sentences with low difficulty (e.g. d ( x ) < 0 .",
"1 ) are likely to be repeatedly used in the model learning.",
"This is somewhat counterintuitive and a waste of computational resources.",
"For example, when students are able to learn linear algebra, they no longer need to review simple addition and subtraction, but can keep the competence during the learning of hard courses.",
"On the other hand, a difficult (long) sentence is usually made up of several easy (short) sentences.",
"Thus, the representations of easy sentences can also benefit from the learning of difficult sentences.",
"To alleviate this limitation of competence-based CL and further enhance the learning from the curriculum of different levels of difficulty, we propose a simple yet effective norm-based sentence weight : w ( x , t ) = ( d ( x ) c ( t ) ) w (11) Algorithm 1 Norm-based Curriculum Learning Strategy Require: Parallel corpus D = {(cid:104) x n , y n (cid:105)} Nn =1 ; Translation system ; 1: Train the word2vec Embedding E w2v on { x n } Nn =1 .",
"where w is the scaling hyperparameter smoothing the weight, d ( x ) is the norm-based sentence difficulty, and c ( t ) is the model competence.",
"For each training step t , or each model competence c ( t ) , the weight of a training example w ( x , t ) is included in its objective function: l ( (cid:104) x , y (cid:105) , t ) = log P ( y | x ) w ( x , t ) (12) where l ( (cid:104) x , y (cid:105) , t ) is the training loss of an example (cid:104) x , y (cid:105) at the t th training step.",
"With the use of sentence weights, the models, at each training step, tend to learn more from those curricula whose difficulty is close to the current model competence.",
"Moreover, the models still benefit from the randomization of the mini-batches since the length weight does not change the curriculum sampling pipeline.",
"Algorithm 1 illustrates the overall training flow of the proposed method.",
"Besides the component and training flow of vanilla NMT models, only some low-cost operations, such as matrix multiplication, have been included in the data sampling and objective function, allowing an easy implementation as a practical NMT system.",
"We have also found, empirically, that the training speed of each step is not influenced by the introduction of the proposed method.",
"We conducted experiments on the widely used benchmarks, i.e. the medium-scale WMT'14 EnglishGerman (En-De) and the large-scale WMT'17 ChineseEnglish (Zh-En) translation tasks.",
"For En-De, the training set consists of 4.5M sentence pairs with 107M English words and 113M German words.",
"The development is newstest13 and the test set is newstest14.",
"For the Zh-En, the training set contains roughly 20M sentence pairs.",
"The development is newsdev2017 and the test set is newstest2017.",
"The Chinese data were segmented by jieba , 2 while the others were tokenized by the tokenize.perl script from Moses.",
"3 We filtered the sentence pairs with a source or target length over 200 tokens.",
"Rare words in each data set were split into sub-word units (Sennrich et al., 2016).",
"The BPE models were trained on each language separately with 32K merge operations.",
"All of the compared and implemented systems are the base Transformer (Vaswani et al., 2017) using the open-source toolkit Marian (Junczys-Dowmunt et al., 2018).",
"4 We tie the target input embedding and target output embedding (Press and Wolf, 2017).",
"The Adam (Kingma and Ba, 2015) optimizer has been used to update the model parameters with hyperparameters 1 = 0.9, 2 = 0.98, = 10 9 .",
"We use the variable learning rate proposed by Vaswani et al. (2017) with 16K warm up steps and a peak learning rate 0 .",
"0003 .",
"We employed FastText (Bojanowski et al., 2017) 5 with its default settings to train the word embedding model for calculating the norm-based sentence difficulty; an example is given in Figure 1.",
"The hyperparameters m and w controlling the norm-based model competence and norm-based sentence weight were tuned on the development set of En-De, with the value of 2.5 and 0.5, respectively.",
"To test the adaptability of these two hyperparameters, we use them directly for the Zh-En translation task without any tuning.",
"We compare the proposed methods with the re-implemented 2 https://github.com/fxsjy/jieba 3 http://www.statmt.org/moses/ 4 https://marian-nmt.github.io/ 5 https://github.com/facebookresearch/ fastText ID Model Dev.",
"competence-based CL (Platanios et al., 2019).",
"6 During training, the mini-batch contains nearly 32K source tokens and 32K target tokens.",
"We evaluated the models every 2.5K steps, and chose the best performing model for decoding.",
"The maximum training step was set to 100K for En-De and 150K for Zh-En.",
"During testing, we tuned the beam size and length penalty (Wu et al., 2016) on the development data, using a beam size of 6 and a length penalty of 0.6 for En-De, and a beam size of 12 and a length penalty of 1.0 for Zh-En.",
"We report the 4-gram BLEU (Papineni et al., 2002) score given by the multi-bleu.perl script.",
"The codes and scripts of the proposed norm-based CL and our re-implemented competence-based CL are freely available at https://github.com/NLP2CT/ norm-nmt .",
"Table 2 shows the results of the En-De translation task in terms of BLEU scores and training speedup.",
"Models (1) to (4) are the existing baselines of this translation benchmark.",
"Model (5) is our implemented base Transformer with 100K training steps, obtaining 27.64 BLEU scores on the test set.",
"By applying the competence-based CL with its proposed sentence rarity and square root competence function, i.e. model (6), it reaches the performance of model (5) using 60K training steps and also gets a better BLEU score.",
"For the proposed method, we first show the performance of each sub-module, that is: model (7), which uses the norm-based model competence instead of the square root competence of model (6); model (8), which uses the proposed norm-based sentence complexity instead of the sentence rarity of model (6); and model (9), which adds the norm-based sentence weight to model (6).",
"The results show that after applying each sub-module individually, both the BLEU scores and the learning efficiency are further enhanced.",
"Model (10) shows the results combining the three proposed norm-based methods for CL, i.e. the norm-based sentence difficulty, model competence, and sentence weight.",
"We call the combination of the proposed method norm-based CL.",
"It shows its superiority in the BLEU score, which has an increase of 1.17 BLEU scores compared to the Trans-ID Model Dev.",
"One can note that all of our implemented systems have the same number of model parameters; besides, the training step of each model involves essentially the same execution time, resulting in a deployment-friendly system.",
"Table 3 shows the effects of the two hyperparameters used in the proposed method.",
"For each experiment, we kept the other parameters unchanged and only adjusted the hyperparameter.",
"For m , controlling curriculum length, the higher the value, the longer the curriculum length.",
"When setting m to 2.5 with the curriculum length of nearly 29K steps, it achieves the best performance.",
"For w , the scaling sentence weight of the objective function, one achieves satisfactory results with a value of 0.5, which maintains the right balance between the learning of simple and hard examples.",
"Although the hyperparameters m and w have been sufficiently validated on the En-De translation, the generalizability of the model trained using these two hyperparameters is still doubtful.",
"To clear up any doubts, we further conducted the experiments on the large-scale Zh-En translation without tuning these two hyperparameters, that is, directly using m = 2 .",
"5 and w = 0 .",
"5 .",
"Specifically, the only difference is the use of a large number of training steps in Zh-En, namely, 150K, for the purpose of better model fitting.",
"We first confirm the effectiveness of competence-based CL in large-scale NMT, that is model (14), which shows both a performance boost and a training speedup.",
"Model (15), which trains NMT with the proposed norm-based CL, significantly improves the BLEU score to 25.25 (+1.56) and speeds up the training by a factor of 3.33, showing the generalizability of the proposed method.",
"The results Source Last year a team from the University of Lincoln found that dogs turn their heads to the left when looking at an aggressive dog and to the right when looking at a happy dog .",
"show that large-scale NMT obtains a greater advantage from an orderly curriculum with enhanced representation learning.",
"The proposed norm-based CL enables better and faster training of large-scale NMT systems.",
"As discussed in Section 3.3, competence-based CL over-trains on the simple curriculum, which might lead to a bias in the final translation.",
"To verify this, we quantitatively analysed the translations generated by different systems.",
"Figure 3 presents the performance of the vanilla Transformer, and of the NMTs trained by competence-based CL and norm-based CL.",
"By dividing the En-De test set (3,003 sentences) into three subsets (1001 sentences) according to the length-based sentence difficulty, the frequency-based sentence difficulty, and the norm-based sentence difficulty, we calculated the BLEU scores of each system on each subset.",
"The results confirm our above assumption, although competence-based CL performs much better in translating simple sentences due to its overtraining, the translation of sentences of medium difficulty worsens.",
"However, the norm-based CL benefits from the norm-based sentence weight, successfully alleviating this issue by applying a scale factor to the loss of simple curricula in the objective function, leading to a consistently better translation performance over the vanilla Transformer.",
"To further prove the effectiveness of the proposed norm-based sentence weight, we explore the model integrating norm-based sentence weight with competence-based CL, and find that it can also strike the right balance between translating simple and medium-difficulty sentences.",
"Table 5 shows an example of a translation of a difficult sentence consisting of several similar clauses in the norm-based difficulty bucket.",
"We observe that the translation by the vanilla model omits translating the last clause, but NMT with norm-based CL translates the entire sentence.",
"The proposed method enhances the representation learning of NMT, leading to better understandings of difficult sentences, thus yielding better translations.",
"The norm of a word embedding has been sufficiently validated to be highly correlated with word frequency.",
"Schakel and Wilson (2015) and Wilson and Schakel (2015) train a simple word embedding model (Mikolov et al., 2013) on a monolingual corpus, and find that the norm of a word vector is relevant to the frequency of the word and its context sensitivity: frequent words and words that are insensitive to context will have word vectors of low norm values.",
"For language generation tasks, especially NMT, there is still a correlation between word embedding and word frequency.",
"Gong et al. (2018) observe that the word embedding of NMT contains too much frequency information, considering two frequent and rare words that have a similar lexical meaning to be far from each other in terms of vector distance.",
"Gao et al. (2019) regard this issue as a representation degeneration issue that it is hard to learn expressive representations of rare words due to the bias in the objective function.",
"Nguyen and Chiang (2019) observe a similar issue during NMT decoding: given two word candidates with similar lexical meanings, NMT chooses the more frequent one as the final translation.",
"They attribute this to the norm of word vector, and find that target words with different frequencies have different norms, which affects the NMT score function.",
"In the present paper, for the sake of obtaining an easy and simple word vector norm requirement, we use the norm derived from a simple word embedding model.",
"In the future, we would like to test norms of various sorts.",
"There are two main avenues for future research regarding CL for NMT: sentence difficulty criteria and curriculum training strategies.",
"Regarding sentence difficulty, there are linguistically motivated features (Kocmi and Bojar, 2017; Platanios et al., 2019) and model-based features (Zhang et al., 2017; Kumar et al., 2019; Zhang et al., 2019b; Zhou et al., 2020).",
"Both types of difficulty criteria have their pros and cons, while the proposed norm-based sentence difficulty takes the best of both worlds by considering simplicity and effectiveness at the same time.",
"Regarding the training strategy, both deterministic (Zhang et al., 2017; Kocmi and Bojar, 2017) and probabilistic strategies (Platanios et al., 2019; Zhang et al., 2019b; Kumar et al., 2019) can be better than the other, depending on the specific scenario.",
"The former is easier to control and explain, while the latter enables NMT to benefit from the randomization of mini-batch training.",
"However, both kinds of strategy need to carefully tune the CL-related hyperparameters, thus making the training process somewhat costly.",
"In the present paper, we have designed a fully automated training strategy for NMT with the help of vector norms, removing the need for manual setting.",
"We have proposed a novel norm-based curriculum learning method for NMT by: 1) a novel sentence difficulty criterion, consisting of linguistically motivated features and learning-dependent features; 2) a novel model competence criterion enabling a fully automatic learning framework without the need for a task-dependent setting of a feature; and 3) a novel sentence weight, alleviating any bias in the objective function and further improving the representation learning.",
"Empirical results on the mediumand large-scale benchmarks confirm the generalizability and usability of the proposed method, which provides a significant performance boost and training speedup for NMT.",
"This work was supported in part by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of the Science and Technology Development Fund, Macau SAR and National Natural Science Foundation of China (Grant No. 045/2017/AFJ), the Science and Technology Development Fund, Macau SAR (Grant No. 0101/2019/A2), and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST).",
"We thank the anonymous reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"objective",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other"
] |
[
"Exploiting label hierarchies has become a promising approach to tackling the zero-shot multi-label text classification (ZS-MTC) problem.",
"Conventional methods aim to learn a matching model between text and labels, using a graph encoder to incorporate label hierarchies to obtain effective label representations (Rios and Kavuluru, 2018).",
"More recently, pretrained models like BERT (Devlin et al., 2018) have been used to convert classification tasks into a textual entailment task (Yin et al., 2019).",
"This approach is naturally suitable for the ZS-MTC task.",
"However, pretrained models are un-derexplored in the existing work because they do not generate individual vector representations for text or labels, making it unintuitive to combine them with conventional graph encoding methods.",
"In this paper, we explore to improve pretrained models with label hierarchies on the ZS-MTC task.",
"We propose a Reinforced Label Hierarchy Reasoning (RLHR) approach to encourage interdependence among labels in the hierarchies during training.",
"Meanwhile, to overcome the weakness of flat predictions, we design a rollback algorithm that can remove logical errors from predictions during inference.",
"Experimental results on three real-life datasets show that our approach achieves better performance and outperforms previous non-pretrained methods on the ZS-MTC task.",
"Multi-label text classification (MTC) is a basic NLP problem that underlies many real-life applications like product categorization (Partalas et al., 2015) and medical records coding (Du et al., 2019).",
"The labels in the output space are often interdependent and in many applications organized in a hierarchy, as shown in the example in Figure",
"1. A significant challenge for real-life development of MTC applications is severe deficiencies of annotated data for each label in the hierarchy, which demands better solutions for zero-shot learning.",
"The existing zero-shot learning for multi-label text classification (ZS-MTC) mostly learns a matching model between the feature space of text and the label space (Ye et al., 2020).",
"In order to learn effective representations for labels, a majority of existing work incorporates label hierarchies via a label encoder designed as Graph Neural Networks (GNNs) that can aggregate the neighboring information for labels (Chalkidis et al., 2020; Lu et al., 2020).",
"Recently, pretrained models like BERT (Devlin et al., 2018) have been widely used as strong matching models due to their superior representation ability (Qiao et al., 2019).",
"They have been applied to convert a classification task to a textual entailment task, by treating the text to be classified as the premise, and its label as the hypothesis, which is naturally suitable for the ZS-MTC study (Yin et al., 2019).",
"However, the problem of this approach is that pretrained models cannot generate individual vector representations for labelsa label is coupled with the corresponding text in learning joint representationthus conventional methods, like GNNs which utilize the label hierarchy to obtain better label representations, cannot be directly applied to pretrained models, making them underex-plored in the existing research.",
"Although pretrained models have shown potential on ZS-MTC, as discussed above, it is not intuitive to introduce structural information of label hierarchies to the learning procedure.",
"Flattening all the labels without considering their hierarchical structures, however, will result in predictions that contain logical errors, which are known as the class-membership inconsistency (Silla and Freitas, 2011).",
"The problem will be even more salient for pretrained models because they only take the literal tokens of the labels as input.",
"An example with logical errors is shown in Figure",
"1. Without label hierarchy information, the model correctly predicts Bikes as a true label, but fails to predict its parent label, Sporting Goods .",
"Meanwhile, the model does not choose the label Local Services while predicting its child label Bike Repair due to the fact that Bike Repair has tokens similar to those in the input text.",
"To overcome the forementioned weakness, we propose a Reinforced Label Hierarchy Reasoning (RLHR) approach to introduce label structure information to pretrained models.",
"Instead of regarding labels to be independent, we cast ZS-MTC as a deterministic Markov Decision Process (MDP) over the label hierarchy.",
"An agent starts from the root label and learns to navigate to the potential labels by hierarchical deduction in the label hierarchy.",
"The reward is based on the correctness of the deduction paths, not simply on the correctness of each label.",
"Thus the reward received by one predicted label will be determined by both the label itself and other labels on the same path, which can help to strengthen the interconnections among labels.",
"Meanwhile, we find that the hierarchical inference method (Huang et al., 2019) will broadcast the errors arising at the higher levels of label hierarchies.",
"Thus we further design a rollback algorithm based on the predicted matching scores of labels to reduce the logical errors in the flat prediction mode during inference.",
"We apply our approach to different pretrained models and conduct experiments on three real-life datasets.",
"Results demonstrate that pretrained models outperform conventional non-pretrained methods by a substantial margin.",
"After being combined with our approach, pretrained models can attain further improvement on both the classification metrics and logical error metrics 1 .",
"We summarize our contributions as follows: We demonstrate that pretrained models outperform conventional methods on ZS-MTC.",
"We design a novel Reinforced Label Hierarchy Reasoning (RLHR) approach and a 1 Code and data available at https://github.com/ layneins/Zero-shot-RLHR matching-score-based rollback algorithm to introduce the structural information of label hierarchies to pretrained models in both the training and inference stage.",
"Experiments with different pretrained models are performed on three real-life datasets.",
"We show the effectiveness of our proposed approach and provide detailed analyses.",
"Exploiting the prior distribution of the label space has proven to be an effective method to tackle the multi-label text classification problem because it can provide the model with information about the label structure.",
"Mao et al. (2019); Huang et al. (2019) took the explicitly represented label hierarchy as the structural information, while Wu et al. (2019) assumed the prior distribution to be implicit and trained their model to learn the distribution during learning.",
"Leveraging the label hierarchy to tackle ZS-MTC has shown to be promising in previous work, which mostly aimed to learn a matching model between texts and labels.",
"Chalkidis et al. (2020, 2019); Xie et al. (2019) adopted Label-Wise Attention Networks to encourage interactions between text and labels.",
"Rios and Kavuluru (2018); Lu et al. (2020) used Graph Neural Networks to capture the structural information in the label hierarchy.",
"However, few existing works investigate the effectiveness of pretrained models on the ZS-MTC task, despite pretrained models being effective as matching models for many natural language processing tasks (Ma et al., 2019; Qiao et al., 2019; Nogueira et al., 2019).",
"The logical error problem in flat predictions has been widely discussed in previous MTC work (Silla and Freitas, 2011; Wehrmann et al., 2018; Mao et al., 2019), which is mostly solved through a hierarchical procedure during inference.",
"In our work, we will investigate such a method and see that the hierarchical inference method is not optimal for pretrained models on the ZS-MTC task because it broadcasts errors top-down in the label hierarchy.",
"Path reasoning is effective for exploiting explicit relationships in structured data, which can be combined with reinforcement learning, e.g., knowledge graph reasoning (Wan et al., 2020; Xian et al., 2019; Xiong et al., 2017).",
"We propose to introduce the label hierarchy to pretrained models through path reasoning, with the aim to strengthen the interconnections between labels.",
"To the best of our knowledge, our work is the first to improve pretrained models through label hierarchies for ZS-MTC.",
"In general, a label hierarchy is defined as (cid:71) = ( (cid:76) , (cid:69) ) , where (cid:76) and (cid:69) are a set of labels and relations, respectively.",
"The latter represent parent-child relations between labels.",
"The root of (cid:71) is a special label R .",
"A data instance (cid:120) is defined as a tuple ( T, P ) with T as the input text and P = { p 1 , p 2 , , p N } as deduction paths, and a path p i = { R , l 1 i , , l K 1 i , l Ki } where l ki (cid:76) is at the k th layer of (cid:71) and l k 1 i is the parent of l ki .",
"A deduction path must be contiguous, starting with R , and is not required to terminate at a leaf label.",
"Let (cid:76) s and (cid:76) u denote the seen and unseen labels, respectively, where (cid:76) s (cid:76) u = (cid:76) .",
"Given a training set (cid:68) s = { (cid:120) si } N 1 i =1 where the labels of (cid:120) si are all seen labels, we aim to learn a matching model f ( (cid:68) s ; ) and make prediction on (cid:68) u = { (cid:120) ui } N 2 i =1 .",
"Some deduction paths of (cid:120) ui consist of seen labels while some contain both seen and unseen labels.",
"Notice that the children of an unseen label are also unseen labels.",
"Evaluations on (cid:68) u will be conducted in two settings: (1) evaluate the performance on (cid:76) u , which is known as the zero-shot (ZS) setting, and (2) evaluate the performance on (cid:76) s (cid:76) u , which is the generalized zero-shot (GZS) setting (Huynh and Elhamifar, 2020).",
"The goal of our RLHR approach is to learn a policy (cid:80) that can make more consistent predictions by traversing the label hierarchy (cid:71) to generate deduction paths.",
"Given a training instance (cid:120) , an agent will start from the root R and follow (cid:80) at each time step to extend the deduction paths by navigating to the children labels at the next level.",
"By measuring the correctness of the generated deduction paths with reinforcement learning (RL), the label hierarchy is introduced to the model during the training time and the interconnections of labels will hence be strengthened, which can help to reduce logical errors in prediction.",
"As we will show in our experiments, hierarchical inference, which is used in previous work (Mao et al., 2019), will propagate the errors occurring at the high levels of hierarchies during inference, resulting in inferior performance.",
"Thus we still adopt the flat prediction during inference, but further design a rollback algorithm based on the structure of (cid:71) and the predicted matching scores.",
"We will introduce the details of our proposed RLHR and the rollback algorithm in the following subsections.",
"Our base model adopts pretrained models (cid:77) , e.g., BERT (Devlin et al., 2018), which have proven to be effective in matching modelling.",
"Given the input text T and the label l , we follow Yin et al. (2019) by transforming the text-label pair into textual entailment representation as [CLS] T [SEP] hypothesis of l .",
"The hidden vector v cls of [CLS] is regarded as the aggregate representation and will be used in the classification layer to calculate the matching score ms .",
"The overall calculation process of ms is abbreviated as: ms = (cid:77) ( T, l ) (1) If ms where is a threshold, we then say T belongs to label l .",
"Different from vanilla pretrained models that rely on flat prediction during training, we propose to formulate the ZS-MTC task as a deterministic Markov Decision Process (MDP) over label hierarchies.",
"For the input text, the agent trained by RLHR will predict M deduction paths from the root label R .",
"When all deduction paths are generated, the rewards will be received, which are determined by the correctness of the paths.",
"An overall illustration of the RLHR approach is shown in Figure",
"2. We introduce the details of the RL modules in this subsection.",
"Maintaining just one deduction path for one data instance will result in an inefficient learning process.",
"However, the number of potential deduction paths will increase exponentially as the model goes deeper into the lower levels of the hierarchies.",
"To maintain a good trade-off between computational resources and time efficiency, we keep the beam of deduction paths to be M .",
"Thus for a data instance (cid:120) , the global state S k at step k is composed of the sub-states of M deduction paths: S k = { s ki } Mi =1 (2) p 1 p 2 p 3 p 4 Start Step 1 Step 2 Step 3 Path : p 1 : p 2 : p 3 : p 4 Correct?",
"where C ( l ki ) denotes the child labels of l ki .",
"For the deduction path p i at the time step k , an action a ki is to select one label l k +1 i from A ki .",
"Notice that the agent may not select any labels from A ki , which means path p i ends before it arrives at a leaf label and a stop action is taken.",
"By adding this early stop mechanism, we can make the agent automatically learn when to stop assigning new labels to the deduction paths.",
"We parameterize the action a ki by a policy network ( | s, A ; ) where is parameters.",
"For deduction path p i at time step k , the policy network takes as input the state s ki and the corresponding action space A ki , emitting the matching score of each action in A k i , which is calculated by the base pretrained model (cid:77) .",
"Finally an action a k is sampled based on the matching score distribution of the actions in A ki .",
"The calculation is formulated as follows: ( a ki | s ki , A ki ; ) = { (cid:77) ( T, l ) | l A ki } (4) a ki ( a ki | s ki , A ki ; ) (5) 4.2.4 Reward In our approach, the reward is based on the correctness of a complete deduction path.",
"Instead of treating all labels to be flat, our approach encourages the interdependence among the labels.",
"The reward received by a label l ki is not only decided by the correctness of itself but also the correctness of other labels on the same deduction path p i .",
"Given the golden deduction paths P = { p 1 , p 2 , , p N } , p i will obtain a positive reward if p i is in P or p i is a sub-path of a path in P .",
"Formally the reward of path p i is defined as: r i = (cid:40) 1 , if p i p j where p j P 1 , otherwise, (6) where is a hyper-parameter for scaling.",
"Under most circumstances, the number of wrong deduction paths will be greater than the correct ones.",
"The problem will be even more severe for the MTC tasks because the distribution of positive labels and negative labels is usually imbalanced given a data instance (cid:120) .",
"A larger can encourage the model to focus more on the correct paths.",
"Notice that our approach differs from existing methods which adopt hierarchical classification (Sun and Lim, 2001; Peng et al., 2018).",
"A hierarchical classification method based on the label hierarchy can only cast the influence from parent label to child label, while in our approach the influence is mutual between parent label and child label, which can hence strengthen the reasoning ability of the models.",
"Our goal is to learn a stochastic policy that maximize the expected total reward J ( ) of the M sampled deduction paths, which can be formulated as: M",
"where is the parameter of policy network.",
"We adopt policy gradient (Sutton et al., 2000) as the optimization algorithm which updates as: + J ( ) (8) where is the discount learning rate.",
"Since there are multiple deduction paths for one data instance, the gradient can be approximated by J ( ) = 1 MM (cid:88) i =1 (cid:88) k log ( a ki | s ki ; ) ( r i r b ) (9) r b is a constant for the stabilization of the training procedure, for which we use the average reward of the last training epoch in our experiments.",
"Existing methods mostly adopt the hierarchical inference method (Mao et al., 2019), which will avoid logical errors, i.e., class-membership inconsistency (Silla and Freitas, 2011), but bring a serious problem: the prediction errors made at the high levels of a hierarchy are often severely propagated to the lower levels.",
"For instance, if a correct label at the first layer is missing, then all the descendant labels will not be considered during inference.",
"This will no doubt harm the performance.",
"On the contrary, if the model still makes flat prediction, all labels will be visited during inference, while more logical errors will probably arise.",
"To overcome the forementioned weaknesses, we propose a rollback algorithm during the inference stage based on the predicted matching scores of all labels.",
"For a data instance (cid:120) , we obtain the predicted labels in flat prediction mode as P , which consists of two parts: (1) labels that can form complete deduction paths, and (2) labels with logical errors, which we denote as P e = { l k 1 1 , l k 2 2 , , l k NN } .",
"For a label l k i i P e , we extract its deduction path from (cid:71) as p i = { R , l 1 i , , l k i 1 i , l k i i } and their corresponding predicted matching scores { 1 , ms 1 i , , ms k i 1 i , ms k i i } 2 .",
"Meanwhile we set a rollback threshold k for the labels in the k th layer of (cid:71) , where { k } are hyper-parameters tuned on the development set.",
"As long as the matching scores meet the requirements { ms ji j } k i 1 j =1 , we add the labels in p i back to P .",
"i",
"The motivation behind this matching-score-based rollback algorithm is that for a label hierarchy (cid:71) , the labels at higher-level hierarchy contain more training instances but their meaning are more 2 Root label R always has a matching score",
"abstract, while the labels at lower levels are more specific such as the labels Active Life and Bike Rentals in Figure",
"1. Pretrained models just take as input the literal tokens of a label and thus are possible to obtain a better performance on certain labels at the lower levels than those at higher levels.",
"We conduct experiments on three real-life datasets from different domains; the details are provided in Table",
"1. Yelp 3 is a customer review dataset, in which we need to classify customer reviews into correct business categories.",
"WOS (Kowsari et al., 2017) is a scientific paper dataset which provides the abstracts of published papers and the corresponding topics.",
"QCD is a query classification dataset we create for the ZS-MTC task.",
"It is composed of search queries and target product types, which is collected from e-commerce web-sites.",
"The layer numbers of the label hierarchies in Yelp, WOS and QCD are 4, 2, and 3, respectively.",
"For examples of the three datasets, please refer to Appendix A.1.",
"We test our proposed approach with two pretrained models, BERT (Devlin et al., 2018) and DistilBERT (Sanh et al., 2019).",
"For BERT, we use the uncased base version, which is of 12-layer transformer blocks, 768-dimension hidden state, 12 attention heads and 110M parameters in total.",
"For DistilBERT, it contains 6-layers transformer blocks, 768-dimension hidden state and 12 attention heads, totally 66M parameters.",
"For training, we use Adam (Kingma and Ba, 2014) for optimization and learning rate is set to 1e-6.",
"Meanwhile we adopt early stopping to avoid overfitting on the training data.",
"is set to 30 on Yelp, 20 on QCD, and 5 on WOS, 3 https://www.yelp.com/dataset Method Setting Yelp WOS QCD Ma-F Mi-F EBF Err Ma-F Mi-F EBF Err Ma-F Mi-F EBF Err CNN ZS 0.33 2.02 16.35 0.3211 0.36 4.43 28.22 0.2977 5.02 6.58 26.94 2.9386 GZS 1.31 14.97 7.00 29.58 9.66 26.22 CNN ZS 4.24 7.15 19.38 0.9303 0.54 4.53 26.88 0.3079 5.02 7.09 28.24 4.3923 +LWAN GZS 4.67 19.26 6.81 29.00 10.03 28.86 ZAGCNN ZS 17.94 18.75 28.24 1.3136 12.02 17.17 24.72 2.5827 5.22 10.01 40.65 2.0212 GZS 16.30 25.97 19.59 36.37 23.85 42.52 DistilBERT ZS 41.42 40.33 30.44 0.4039 70.69 65.19 55.18 0.5178 23.68 24.95 33.57 1.0854 GZS 21.29 28.18 68.03 63.64 24.43 34.29 +RLHR ZS 42.16 43.87 40.85 0.3347 74.56 72.44 61.06 0.4732 24.58 27.79 37.46 0.8389 GZS 26.95 40.43 71.65 68.05 26.10 38.37 BERT ZS 44.49 42.61 34.59 0.3755 77.87 77.27 56.69 0.1983 28.18 27.45 36.88 1.2497 GZS 23.38 31.53 74.69 70.56 27.04 37.20 +RLHR ZS 45.46 48.26 49.52 0.2952 78.46 79.19 64.43 0.2488 28.32 28.80 39.99 1.1984 GZS 32.09 49.75 75.51 72.62 28.67 41.08 Table 2: Results of different methods on the three datasets under two settings.",
"which we will discuss more in Section 5.3.4.",
"We set M to 5 with DistilBERT and 3 with BERT by trading off between training time and GPU memory usage.",
"The RL training procedure is unstable and slow if the agent is trained from scratch (Silver et al., 2016).",
"So with both BERT and DistilBERT, we pretrain the policy network in flat prediction mode on the training data with the learning rate of 1e-5.",
"In our experiments, we use standard metrics Micro-F1 and Macro-F1 to evaluate the classification performance for both the zero-shot and generalized zero-shot setting.",
"Meanwhile, we also adopt Example-based F1 (Peng et al., 2016) to measure the performance from the instance level, which is different from Micro/Macro-F1 measuring from the label level.",
"Though some previous works adopted ranking based metrics (Rios and Kavuluru, 2018) for large-scale MTC, they are not appropriate in our settings because the datasets used in this work contain smaller label space.",
"For logical errors, we report the logical error rate , which is defined as the average number of logical errors in one data instance.",
"We take the number of logical errors in one data instance as the number of labels that cannot form a complete deduction path.",
"Evaluation is conducted in two settings: (1) evaluate the performance on unseen labels only, which is the zero-shot (ZS) setting, and (2) evaluate the performance on both seen labels and unseen labels, i.e., the generalized zero-shot (GZS) setting (Huynh and Elhamifar, 2020).",
"We use two different types of baselines.",
"(1) The type of models where label hierarchy is not utilized, and we use CNN and CNN with Label-Wise Attention Networks ( CNN+LWAN ) (Chalkidis et al., 2019) in our experiments.",
"(2) The type of models where GNNs are utilized to encode the label hierarchy to capture the label structure information.",
"Specifically we use ZAGCNN proposed by Rios and Kavuluru (2018).",
"Table 2 shows the experimental results of the baseline models and our proposed RLHR approach on three real-life datasets in both the zero-shot and generalized zero-shot setting.",
"setting while the performance under GZS setting is better, which suggests CNN and CNN+LWAN cannot provide accurate predictions for unseen labels due to the lack of label structure information.",
"In contrast, ZAGCNN, which utilizes the label hierarchy, performs better, particularly on unseen labels, which demonstrates the importance of label hierarchy for ZS-MTC.",
"On the other hand, pretrained models, including DistilBERT and BERT, both outperform conventional non-pretrained methods with substantial improvements on three datasets, though ZAGCNN shows slight advantages on Micro-F1 and Example-based F1 on the QCD dataset under the GZS setting.",
"When incorporated with RLHR, the performance of pretrained models can be further improved by a relatively large margin.",
"We notice that the improvement under GZS setting is more significant than in the ZS setting, suggesting that seen labels benefit more from our RLHR than unseen labels.",
"As shown in Table 2, utilizing label hierarchies does not necessarily reduce the logical error rate for conventional methods, though it can improve the classification performance.",
"For example, the logical error rate of ZAGCNN is higher than CNN and CNN+LWAN on Yelp and WOS.",
"The logical error rate of pretrained models is generally lower than the conventional methods.",
"However, pretrained models still face the logical error problem though they perform well on the classification metrics.",
"We can also see that our RLHR can help reduce the logical error rate for DistilBERT and BERT under most circumstances.",
"Note that better classification performance does not necessarily lead to a lower logical error rate.",
"From Table 2, we can see although CNN and CNN+LWAN perform poorly on classification metrics, they achieve a better logical error rate than ZAGCNN and DistilBERT on the WOS dataset.",
"Similarly, the logical error rate of BERT is higher than DistilBERT on QCD even though BERT has a better classification performance.",
"Our proposed RLHR approach can improve both the classification performance and logical error performance, which demonstrates the effectiveness of RLHR.",
"Due to the limit of space, we only report the results of our proposed rollback algorithm based on BERT and put the results on DistilBERT in Appendix A.2.",
"As shown in Table 3, we can see that when combined with our proposed rollback algorithm, the performance of BERT+RLHR can be further improved, raising Example-based F1 on Yelp, WOS, and QCD from 49.52%, 64.43%, and 39.99% to 50.01%, 69.32%, and 40.13%, respectively.",
"Our proposed rollback algorithm can also be combined with BERT only, while the gain is relatively marginal.",
"We further investigate this and observe that at the same level of the label hierarchy, the matching scores obtained with RLHR are more polarized than those obtained with BERT, suggesting RLHR is more confident about the predictions when the label hierarchy is provided.",
"This yields a better prediction performance of RLHR when the rollback algorithm is adopted.",
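The rollback algorithm itself is not spelled out in this section; as a hedged sketch, one simple realization consistent with "removing logical errors from flat predictions" is to drop any predicted label whose parent was not predicted, repeating until every surviving label lies on a complete deduction path from the root:

```python
def rollback(predicted, parent):
    """Assumed rollback behavior: iteratively discard orphaned labels."""
    kept = set(predicted)
    changed = True
    while changed:
        changed = False
        for label in list(kept):
            p = parent.get(label)        # None for top-level labels
            if p is not None and p not in kept:
                kept.discard(label)      # orphaned label: roll it back
                changed = True
    return kept

parent = {"espresso": "coffee", "coffee": "drinks"}
print(rollback({"espresso", "drinks"}, parent))  # {'drinks'}
```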
"Meanwhile, we compare the hierarchical inference method (Huang et al., 2019) with our rollback algorithm.",
"Both methods can completely remove logical errors from the predicted results.",
"However, as we can see in the table, the performance of the hierarchical inference method is not consistent on the three datasets, with either BERT or BERT+RLHR.",
"When conducting hierarchical inference, BERT+RLHR achieves the best Micro-F1 and Example-based F1 on the QCD dataset, while performance drops by a significant margin on the WOS dataset.",
"Similarly, hierarchical inference with BERT achieves minor improvement on the QCD dataset, while on WOS and Yelp the performance is sometimes marginally improved and sometimes worse.",
"The effectiveness of the hierarchical inference method depends mainly on the classification difficulty of labels at the higher levels of label hierarchies.",
"As we know, such labels are usually more abstract and general, thus making the performance of hierarchical inference susceptible to errors at those levels.",
"We discuss the influence of the parameter on logical error rates and unseen label classification in this section.",
"Due to space limitations, we only present the results with BERT and put the results based on DistilBERT in Appendix A.3.",
"As shown in Figure 3, for datasets with large hierarchies, like Yelp and QCD, a larger value helps achieve better classification performance on unseen labels, while bringing more logical errors.",
"On the contrary, a relatively small value yields better classification performance and lower logical error rates on datasets with small hierarchies like WOS, as shown in Figure 3b.",
"The reason is that for a large hierarchy, the number of sampled correct deduction paths will be much smaller than that of wrong paths (common in the ZS-MTC task, because positive labels are usually far fewer than negative labels), while for a small label hierarchy, the number of sampled correct paths is close to that of the false ones.",
"A large value encourages a model to focus more on sampled correct paths, which hence improves classification performance.",
"Meanwhile, if the value is too large, it introduces a bias towards the dominant labels which appear more often in the datasets.",
"Thus it reduces the generalization ability of the model, which harms performance.",
"We propose a Reinforced Label Hierarchy Reasoning approach to incorporate label hierarchies into pretrained models in order to better solve the zero-shot multi-label text classification tasks.",
"We train an agent that starts from the root label, navigates to potential labels in the label hierarchies and generates multiple deduction paths.",
"By rewarding based on the sampled deduction paths, our approach can strengthen the interconnections among the labels during the training stage.",
"To overcome the weakness of hierarchical inference methods, we further design a rollback algorithm that can remove the logical errors in flat predictions.",
"Experiments on the three datasets demonstrate that our proposed approach improves the performance of pretrained models and enables them to make more consistent predictions."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"method",
"objective",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"objective"
] |
[
"We explore learning web-based tasks from a human teacher through natural language explanations and a single demonstration.",
"Our approach investigates a new direction for semantic parsing that models explaining a demonstration in a context, rather than mapping explanations to demonstrations.",
"By leveraging the idea of inverse semantics from program synthesis to reason backwards from observed demonstrations, we ensure that all considered interpretations are consistent with executable actions in any context, thus simplifying the problem of search over logical forms.",
"We present a dataset of explanations paired with demonstrations for web-based tasks.",
"Our methods show better task completion rates than a supervised semantic parsing baseline (40% relative improvement on average), and are competitive with simple exploration-and-demonstration based methods, while requiring no exploration of the environment.",
"In learning to align explanations with demonstrations, basic properties of natural language syntax emerge as learned behavior.",
"This is an interesting example of pragmatic language acquisition without any linguistic annotation.",
"People routinely perform repetitive web-based tasks, involving sequences of clicking and typing actions.",
"These include activities such as forwarding emails, booking flight tickets, ordering pizza, etc.",
"These activities largely consist of small sequences of actions in an environment with restricted semantics, and are potentially amenable to automation.",
"In this work, we explore whether an AI agent can be taught such tasks through natural language explanations and a single demonstration by a user (as one might teach such a task to a human assistant).",
"Figure 1 : AI assistants that can be taught web-based procedures by their users can have diverse practical applications.",
"Here, we explore learning very simple tasks from the Mini World-of-Bits framework using natural language explanations and a single demonstration of the task. From the perspective of language understanding, this involves challenges such as converting instructional language to actions, resolving ambiguities through pragmatics, and learning script-like behavior.",
"The web domain is rich in textual, structural and spatial features, allowing for exploration of multiple types of grounding behavior including spatial and visual language understanding, as well as reasoning over semi-structured data.",
"Also, despite its richness, the tasks involved usually do not require much background knowledge.",
"From a practical perspective, teachable AI assistants can change the way people interact with computers.",
"Today's conversational assistants such as Alexa or Cortana act on a small number of preprogrammed language commands (e.g., What is the weather going to be like?).",
"However, they cannot be taught new functionalities important to a user (as in Figure 1).",
"Enabling users to teach computers personalized procedures through explained demonstrations can make conversational AI systems fundamentally more useful.",
"In Section 2, we situate our work in the broader body of work on grounded semantic parsing and learning from language.",
"Section 3 summarizes our framework and dataset.",
"In Section 4, we describe our approach in detail.",
"Here, we investigate a new paradigm for interpreting language in grounded contexts.",
"Instead of mapping statements to logical forms that then execute in a context as in traditional semantic parsing, the method considers the set of possible typing and clicking actions in a context, identifies features of corresponding web elements and their relationships with other elements on the webpage, and aligns these to natural language explanations through a generative model.",
"Section 5 describes the empirical evaluation.",
"Our contributions are: An approach towards learning web-based tasks from a single explained demonstration.",
"A dataset of explanations and demonstrations for tasks from the MiniWoB framework.",
"Empirical results showing that explained demonstrations can be an effective mode of supervision for learning such tasks.",
"Language can significantly reduce the number of samples needed compared to learning from demonstrations alone.",
"Semantic Parsing: Supervised models for converting statements to logical forms have long been studied in a wide range of settings (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Kwiatkowksi et al., 2010; Yin and Neubig, 2017).",
"More recent approaches focused on using weaker forms of supervision such as denotations or observations of world state (Berant et al., 2013; Clarke et al., 2010; Krishnamurthy and Mitchell, 2012) and semi-supervised methods aimed at efficient prototyping (Pasupat and Liang, 2015; Wang et al., 2015).",
"These methods require more readily available supervision, such as ques-tion/answer pairs for model training, rather than annotations of logical forms.",
"Artzi and Zettlemoyer (2013) learn to follow instructions in the context of robot navigation by conditioning parsing on environmental context.",
"Artzi and Zettlemoyer (2011) use conversational feedback as a signal to induce logical forms for individual utterances from transcripts of conversations in a dialog-based setup.",
"Some other recent approaches (Long et al., 2016; Guu et al., 2017) explore learning language from sequences of utterances and interactions in simple environments, which is conceptually similar to our work.",
"Muhlgay et al. (2019) and Guu et al. (2017) explore better strategies to search the space of logical forms.",
"While all of these methods are related to multiple facets of work, our method diverges from them in that the space of candidate logical forms is driven by the constraints of possible actions in an environment rather than the natural language utterance.",
"This guarantees that all of the considered logical forms during search are consistent with executable actions in any novel context.",
"Finally, some recent methods (Andreas et al., 2016) marginalize over latent interpretations of language in context of downstream tasks.",
"We use a similar Bayesian approach, where actions are chosen by marginal-izing logical forms (rather than choosing a single interpretation of an explanation).",
"Interactive Learning from Language: Several frameworks have leveraged natural language supervision to learn new tasks, starting with early work on the SHRLDU system (Winograd, 1972) and Interactive Task Learning (Laird et al., 2017).",
"In particular, several reinforcement learning approaches have been explored in text-based environments for learning strategies, following instruction manuals, game playing, etc. (Branavan et al., 2009; Goldwasser and Roth, 2014; Misra et al., 2018; Narasimhan et al., 2015).",
"These approaches leverage the ability to explore and interact with the environment to learning policies that lead to favourable outcomes.",
"This is different from our goal here, where the agent needs to learn from a single explained demonstration of a task, and no interactivity with the environment is assumed.",
"Some recent approaches have shown language explanations to be effective for learning realistic tasks including relation extraction, concept learning and question answering (Hancock et al., 2018; Srivastava et al., 2017, 2018; Andreas et al., 2018).",
"In terms of the goal and problem formulation, our approach extends multiple lines of previous work.",
"Quirk et al. (2015)'s work is similar to ours in motivation in learning user-specified recipes, but has no aspects of grounding or demonstrations.",
"Wang et al. (2016) explore interactive parser training through language games in the context of block-world environments.",
"Pasupat et al. (2018) explore mapping natural language to specific elements on complex and realistic web-pages, although not in context of learning from demonstrations.",
"Our framework directly extends previous work on learning web-based tasks from the Mini World-of-Bits framework using multiple demonstrations and exploration of the environment (Shi et al., 2017; Liu et al., 2018).",
"Figure 2: Crowd-worker interface used for collecting natural language explanations and demonstrations.",
"In particular, our DSL extends the constraint language defined in Liu et al. (2018) to explore learning from explained demonstrations instead.",
"We build on the Mini World-of-Bits (MiniWoB) framework (Shi et al., 2017), a collection of web-based tasks initially proposed as a testbed for reinforcement learning agents.",
"The tasks vary in difficulty in terms of the number of actions required, variability between instances of the task, and types of reasoning involved (including clicking specified buttons, forwarding emails and playing tic-tac-toe).",
"See the top half of Figure 2 for an example of a task.",
"Each task consists of a task description (yellow box), and an interactive web interface.",
"While previous methods have focused on learning sequential decision making to complete these tasks through a mixture of exploration (the framework provides simulators, where correctly completing a task yields a reward) and behavior cloning (by observing multiple demonstrations from human users); our focus is on learning to complete these tasks in a one-shot sense (without any exploration).",
"This is because the one-shot case is a much more realistic scenario for learning web-based procedures from a teacher.",
"In practical situations (where there are no simulators), it would not be feasible for an AI agent to learn to book flights by booking multiple incorrect tickets, or manage a user's email by sending multiple incorrect emails.",
"On the other hand, a paradigm where the agent attempts to generalize from a single demonstration and explanations can be feasible for many more of such scenarios.",
"Figure 3 : Examples of collected explanations",
"We created a dataset of natural language explanations paired with demonstrations by human users for tasks from the MiniWoB framework.",
"For this, crowdsourced workers on Amazon Mechanical Turk were asked to demonstrate how to complete these tasks and provide stepwise explanations to an AI assistant on how to complete the task.",
"Since users would be unfamiliar with most of the tasks, for each task they were allowed to experiment with the interface as many times as they liked, and only the final demonstration was logged.",
"In all, we collect 520 demonstrations (each consisting of a sequence of click/type actions in the context of a MiniWoB task) paired with stepwise explanation sequences.",
"Figure 3 shows samples of collected explanations.",
"On average, each explanation sequence contains 3.3 explanations.",
"The dataset contains 1719 explanations in total (indi-vidual steps), averaging 8.4 words per explanation.",
"The size of the vocabulary of the explanations is 995.",
"In general, workers found the teaching process to be engaging, with an average rating of 8.3 on a 1-10 scale on how they enjoyed the HIT in a post-completion survey.",
"The dataset is available at https://aka.ms/Web-D-E .",
"Data characteristics: From a manual analysis of 100 randomly selected explanation sequences and task demonstrations, we find that in almost all cases (97%), the sequence of actions described in the explanations corresponds to the sequence of actions in the demonstration.",
"More than 85% of explanations mention a clicking or typing action, while around 10% identify an entity/string on the webpage that is used in an action in the next step (e.g., the first explanation for the second task in Figure 3).",
"Around 3% of the explanations correspond to conditionals and hypotheticals, which go beyond the scope of our approach.",
"Roughly 15% of the explanations mention multiple entities on the webpage usually specifying one element in relation to the other (e.g., the radio button to the right of the text-box ).",
"Table 1: Major operators in the DSL for learning web-based procedures.",
"DSL for semantic parsing: We define a domain specific language (DSL) for describing web-based procedures in terms of DOM elements by expanding on the constraint language in Liu et al. (2018).",
"The DSL operators correspond to actions on DOM elements, element features and relations between them.",
"The DSL defines the vocabulary of logical forms for parsing of user explanations, and grounds sensors and effectors in the web environment.",
"Table 1 summarizes the DSL.",
"There are three types of operations: (1) click and type actions on specified web elements (with a specified string, in case of a type action), (2) operations that filter elements on a page that satisfy a criterion, and (3) operations that filter strings based on a criterion.",
"We include a special operator FindMatchingContext to accommodate cases in which the users provide explanations for an instance of a task with specific arguments mentioned in the task description (e.g., see the last row in Table 1).",
"In this case, the operator can pick out the corresponding argument for the new instance by looking at the surrounding context in the new task description.",
"The evaluation of logical forms in the DSL in the context of a webpage consists of set operations over all DOM elements on the webpage (and text-spans of up to two tokens for string operators).",
"For example, the logical form HasTag(type=button) will evaluate to the set of elements on a page that have a HTML tag type with value button .",
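A small sketch of this set-based evaluation, representing DOM elements as Python dicts (the dict encoding is illustrative, not the paper's representation):

```python
def has_tag(elements, **feature_values):
    """Return the elements whose features match all given values,
    mirroring how HasTag(type=button) evaluates to a set of elements."""
    return [e for e in elements
            if all(e.get(feat) == val for feat, val in feature_values.items())]

dom = [
    {"id": "elem1", "type": "button", "text": "submit"},
    {"id": "elem2", "type": "checkbox", "text": ""},
    {"id": "elem3", "type": "button", "text": "cancel"},
]
print([e["id"] for e in has_tag(dom, type="button")])  # ['elem1', 'elem3']
```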
"Figure 4: Modeling principle for Learning from Explained Demonstrations (LED).",
"We prefer logical forms ($l$) that are both consistent with the user demonstration ($d$) in the context ($c$), and relevant to the user's explanations ($x$).",
"Our approach for learning web-based tasks, which we call LED for Learning from Explained Demonstrations, models the process of explaining a demonstration of a task in a grounded context.",
"We assume that the reasoning behind each action in a demonstration can be described by a logical form, l , in the DSL.",
"LED's essential idea is that preferred logical forms are both (1) consistent with the user demonstration, $d$, in the observed context, $c$, and (2) relevant to the user's language explanations, $x$.",
"Figure 4 illustrates this for a toy-example, where the context consists of a web-page with three elements, the demonstration consists of a single action, and a corresponding explanation is provided.",
"Based on the observed demonstration (that elem3 was clicked), it is hard to infer the reason behind clicking it.",
"Multiple logical forms in the DSL can be consistent with clicking elem3 in this context.",
"e.g., it is at the top of the page, its color is blue, etc.",
"However, these interpretations would not justify the provided explanation as those logical forms are not relevant to the explanation.",
"Modeling relevance between logical forms and explanations can help identify the reasoning behind user demonstrations.",
"In traditional semantic parsing, explanations $x$ are mapped to logical forms $l$ (e.g., database queries), which are then executed against a context $c$ (e.g., a knowledge base) to get a denotation (corresponding here to a demonstration) $d$, i.e., $d = [\![ l(x) ]\!]_c$.",
"In this model-theoretic view of semantics, parsed logical forms are not informed by the environmental context until execution.",
"In comparison, LED roots logical forms in the observed context, and thus pragmatic consistency is ensured by design.",
"We maximize the log-likelihood of observing the explanations given the demonstration in a grounded context: $\mathcal{L}(\theta) = \log p(x \mid d, c) = \log \sum_{l} \underbrace{p(x \mid l)}_{\text{relevance}} \, \underbrace{p(l \mid d, c)}_{\text{consistency}}$ (1). Here, the first term corresponds to scoring relevance between logical forms and explanations (modeled using a semantic parsing model).",
"Grounded logical forms as latent variables: Eqn 1 marginalizes over latent logical forms.",
"The second term enforces consistency between candidate logical forms and the demonstration in the context, and can be deterministically evaluated.",
"As we see in Section 4.2, consistency is enforced by temperature-based annealing during training.",
"To make this tractable, we represent a logical form in a grounded context as an assignment of a tuple of discrete variables, $l := (e_0, f_0, r, e_1, f_1, a, t, f_t)$.",
"These variables indicate things such as which DOM element is acted upon ( e 0 ), if its relation ( r ) with another element on the page ( e 1 ) is relevant, and so on.",
"These are defined below.",
"$e_0 \in domElements(c)$ denotes the DOM element on which an action is performed.",
"(e.g., $e_0 = elem3$ in Fig 4)",
"This is observed from the demonstration, thus $p(e_0) = \mathbb{I}[e_0 = e_{observed}]$.",
"$f_0 = (f_{01} \ldots f_{0 n_F})$ is a set of selector variables, where $f_{0i}$ denotes if feature $i$ of element $e_0$ is relevant for choosing it.",
"Its domain is $\{\bot\} \cup F_i$, where $F_i$ is the range of values feature $i$ can take.",
"$f_{0i} = \bot$ denotes that the feature was not relevant for choosing $e_0$ (e.g., $f_{0\,color} = \bot$ in Fig 4).",
"If $f_{0i} \neq \bot$, it can only take the observed value of the feature for $e_0$ in the context (e.g., $f_{0\,tag} = square$ in Fig 4).",
"In Table 1, these correspond to operators that return web-elements and have names with prefix Has.",
"(For example, in Figure 4, click(tag=triangle & rightOf(square)) won't be considered for the provided utterance, as it is inconsistent with the context.)",
"($\mathbb{I}_{condition}$ denotes an indicator function for condition.)",
"$r$ denotes if a relation between $e_0$ and another element on the webpage is relevant for choosing it.",
"Its domain is $\{\bot\} \cup \mathcal{R}$, where $\mathcal{R}$ is the set of (binary) relations between elements in the DSL.",
"In Table 1, these are operators that have names with prefix Reln.",
"$r = \bot$ denotes that no relation was relevant for choosing $e_0$.",
"If $r \neq \bot$, it can only take the value of a relation that exists between $e_0$ and another element.",
"(e.g., in Fig 4, $r$ can't take the value LeftOf, since $elem3$ is the rightmost element in the context).",
"Our choice of having a single variable for r disallows logical forms with multiple or nested relations.",
"This was guided by an analysis of our dataset, where none of the collected explanations show such behavior.",
"$e_1$ denotes that relation $r$ between elements $e_0$ and $e_1$ is relevant for choosing $e_0$.",
"Its domain is $\{\bot\} \cup domElements(c)$.",
"$e_1 = \bot$ if and only if $r = \bot$, i.e., if no relation is relevant for choosing $e_0$.",
"If $r = reln$, $e_1$ can only take values of elements such that $reln(e_0, e_1)$ is true in the context.",
"$f_1 = (f_{11} \ldots f_{1 n_F})$ is a set of selector variables, where $f_{1i}$ denotes if feature $i$ of element $e_1$ is relevant.",
"(e.g., for 'click the checkbox next to the button that says submit', the HasText feature of the button is relevant).",
"$f_{1i} = \bot$ denotes that feature $i$ was not relevant.",
"If $f_{1i} \neq \bot$, it can only take the observed value of the feature for $e_1$.",
"$a$ denotes the action performed on $e_0$ (click or type).",
"This is observed from the demonstration.",
"$t$ denotes the string to type, if $a = type$.",
"This is observed from the demonstration (and is a substring of the task description text).",
"$f_t = (f_{t1} \ldots f_{t n_T})$ is a set of selector variables, where $f_{tj}$ denotes if the text feature $j$ of $t$ is relevant for choosing it (in Table 1, operators with a string return type correspond to text features).",
"Inverse Semantics: Assignments of values to these variables represent a search in the DSL space, since given any context, there is a mapping from logical forms to an assignment of these variables.",
"A key idea here is that, borrowing from program synthesis, we can leverage the inverse semantics of operators in the DSL (Polozov and Gulwani, 2015) to guarantee consistency of logical forms with the grounded context.",
"i.e., at any step, the space of candidate logical forms we consider is consistent with the observed demonstration.",
"This is possible because in our case, computing the inverse semantics for all operators in the DSL is feasible.",
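A hedged sketch of reasoning backwards from an observed action: rather than parsing language into arbitrary logical forms, enumerate only the feature-based selectors that are consistent with the clicked element. The HasTag/HasColor-style names echo Table 1; the dict encoding of elements is assumed for illustration:

```python
def consistent_feature_candidates(clicked, elements):
    """Yield feature-value selectors consistent with the clicked element,
    along with the set of elements each selector would pick out."""
    for feat, val in clicked.items():
        matching = [e for e in elements if e.get(feat) == val]
        yield feat, val, matching  # candidate logical form Has<feat>(val)

dom = [
    {"tag": "square", "color": "blue"},
    {"tag": "circle", "color": "red"},
    {"tag": "triangle", "color": "blue"},
]
for feat, val, matching in consistent_feature_candidates(dom[2], dom):
    print(f"Has{feat.capitalize()}({val}) -> {len(matching)} element(s)")
# HasTag(triangle) -> 1 element(s); HasColor(blue) -> 2 element(s)
```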
"As just described, our approach will use the context of the webpage and leverage DSL inverse semantics to maintain an implicit set of candidate logical forms that are consistent with the observed demonstration.",
"(This is feasible since there is only a relatively small number of candidate DOM elements or strings on the webpage to search over; compare this with an operation in arithmetic, e.g., add(int, int), which might require a search over infinite co-domains.)",
"We will use variational inference to infer the logical forms that are most relevant to the seen explanations, and choose the action to take based on the inferred distribution over logical forms.",
"In Eqn 1, the second term corresponds to a prior probability over logical forms given a demonstration and context (webpage).",
"Our representation of logical forms as latent variable assignments (from Section 4.1) enables us to decompose this probability into local factor distributions.",
"We choose these local priors to correspond to distributions that are uniform over assignments that are consistent, and have zero support otherwise, similar to previous work on pragmatic reasoning (Frank and Goodman, 2012; Monroe et al., 2017).",
"In other words, these distributions are proportional to indicator function over valid assignments of variables in each factor.",
"As seen below, these define a prior over l that is also proportional to a simple indicator function over values of l that are consistent with the observed demonstration and context.",
"$p(l \mid d, c) = p(e_0, f_0, r, e_1, f_1, a, t, f_t \mid d, c) = p(e_0 \mid d)\, p(f_0 \mid e_0, c)\, p(e_1, r \mid e_0, c)\, p(f_1 \mid e_1, c)\, p(a, t \mid d)\, p(f_t \mid t, c) \propto \mathbb{I}_{Valid(e_0,d)}\, \mathbb{I}_{Valid(f_0,e_0,c)}\, \mathbb{I}_{Valid(e_1,r,e_0,c)}\, \mathbb{I}_{Valid(f_1,e_1,c)}\, \mathbb{I}_{Valid(a,t,d)}\, \mathbb{I}_{Valid(f_t,t,c)} = \mathbb{I}_{Valid(l,d,c)}$ (2). Substituting this in Eqn 1 and using Jensen's inequality, any distribution $q$ over logical forms provides a lower bound on the log-likelihood: $\mathcal{L}(\theta) \geq \sum_l q(l) \log \frac{p(x \mid l)\, \mathbb{I}_{Valid(l)}}{q(l)} = \sum_l q(l) \big( \log p(x \mid l) + \log \mathbb{I}_{Valid(l)} \big) + H_q$ (3), where $H_q$ is the entropy of distribution $q$.",
"In Sec 4.1, we represent l as a tuple of variables.",
"Next, we make a mean field approximation by assuming the distribution $q(l)$ decomposes as a product of per-variable factors, $q(l) = q_{f_0}\, q_{e_1}\, q_r\, q_{f_1}\, q_{f_t}$.",
"Parsing model: We assume that the probability of an explanation decomposes into the probability of individual words as $\log p(x \mid l) = \sum_{w \in x} \log p(w \mid f_0, r, f_1, f_t, a)$.",
"Further, we assume that individual words are generated from features, relations and actions in the logical form as: $\log p(w \mid f_0, r, f_1, f_t, a) = \log \frac{1}{C} \sum_{k \in \{f_0, r, f_1, f_t, a\}} p(w \mid k)^{z_{kw}}\, p(z_{kw}) \geq \sum_{k \in \{f_0, r, f_1, f_t, a\}} b_{kw} \big( \log p(w \mid k, z_{kw}) + \log p(z_{kw}) \big) + H_{b_{kw}}$ (5). Here, $k$ is an index over values of $f_0$, $r$, $f_1$, $f_t$ and $a$.",
"$z_{kw}$ denotes an alignment between a particular value of a feature, relation or action ($k$) and word $w$ in the explanation, in which case the word is generated from the distribution $p(w \mid k)$.",
"The presence of a summation inside of a logarithm makes maximizing this objective hard.",
"We again use Jensen's inequality to get a bound by introducing variational distributions $b_{kw}$ over alignments $z_{kw}$.",
"$b_{kw}$ can be thought of as representing the proportion of an explanation word contributed by a specific feature value, relation or action $k$ in the logical form.",
"Each $p(w \mid k)$ is parameterized as a multinomial distribution, $\theta_{kw}$, over the vocabulary.",
"Training and Inference: Our model training follows a variational EM approach: in the E-step, we perform inference for the latent logical form variables and alignment proportions, keeping the model parameters fixed.",
"In the M-step, we update the parameters, $\theta_{kw}$, taking the variational distributions and alignments as fixed.",
"Combining Eqn 2, Eqn 3 and Eqn 5, we get: $\mathcal{L}(\theta) \geq \sum_l q_{f_0} q_{e_1} q_r q_{f_1} q_{f_t} \Big( \sum_w \sum_k b_{kw} \big[ z_{kw} \log \theta_{kw} + \log p(z_{kw}) \big] + H_{b_{kw}} + \log \mathbb{I}_{Valid(l,d,c)} \Big) + H_{f_0} + H_{e_1} + H_r + H_{f_1} + H_{f_t}$ (6), where $q_{f_0}$ is shorthand for the product of variational distributions $\prod_i q_{f_{0i}}$, and so on.",
"Maximizing this objective w.r.t. the variational distributions yields the following E-step updates: $q_{f_{0i}}(v_{f_{0i}}) \propto \exp\big( \sum_w b_{v_{f_{0i}} w} \log \theta_{v_{f_{0i}} w} + \log \mathbb{I}_{Valid(v_{f_{0i}}, e_0, c)} \big)$; $q_{e_1}(v_{e_1}) \propto \exp\big( \sum_{f_{1i}} \sum_w b_{f_{1i} w} \sum_{v_{f_{1i}}} q_{f_{1i}}(v_{f_{1i}}) \big[ \log \theta_{v_{f_{1i}} w} + \log \mathbb{I}_{Valid(v_{f_{1i}}, e_1, c)} \big] + \sum_{v_r} q_r(v_r) \big[ \sum_w b_{v_r w} \log \theta_{v_r w} + \log \mathbb{I}_{Valid(e_1, v_r, e_0, c)} \big] \big)$; $q_r(v_r) \propto \exp\big( \sum_{v_{e_1}} q_{e_1}(v_{e_1}) \big[ \sum_w b_{v_r w} \log \theta_{v_r w} + \log \mathbb{I}_{Valid(e_1, v_r, e_0, c)} \big] \big)$; $q_{f_{1i}}(v_{f_{1i}}) \propto \exp\big( \sum_{v_{e_1}} q_{e_1}(v_{e_1}) \big[ \sum_w b_{v_{f_{1i}} w} \log \theta_{v_{f_{1i}} w} + \log \mathbb{I}_{Valid(v_{f_{1i}}, e_1, c)} \big] \big)$; $q_{f_{tj}}(v_{f_{tj}}) \propto \exp\big( \sum_w b_{v_{f_{tj}} w} \log \theta_{v_{f_{tj}} w} + \log \mathbb{I}_{Valid(v_{f_{tj}}, t, c)} \big)$ (7).",
"(The optimal value for the concave problem $\max_x \sum_j x_j \log(y_j / x_j)$ s.t. $\sum_j x_j = 1$ is achieved when $x_j \propto y_j$.)",
"Similarly, the updates for the alignment proportions (taking $p(z_{kw})$ in Eqn 6 to be uniform) are: $b_{kw} \propto \exp\big( \sum_k q_k(k) \log \theta_{kw} \big)$ (8).",
"LED(+Syntax): The above approach allows for arbitrary alignments between words and features, relations or actions in the grounded logical form ($k$), essentially representing $x$ as a bag-of-words (e.g., it will not differentiate between 'click the URL below the button' and 'click the button below the URL').",
"We also explore a variant that models $x$ as a sequence of tokens by introducing a prior over joint alignments $z_{kx} = z_{k_1 w_1} \ldots z_{k_T w_T}$ in a sentence $x := w_1 \ldots w_T$ (in Eqn 5).",
"This is done by simply modeling $p(z_{kx})$ with pairwise transition probabilities as $p(z_{kx}) := \prod_t p(z_{k_t} \mid z_{k_{t-1}}) = \prod_t T_{k_t, k_{t-1}}$.",
"In this case, updates for alignment proportions (Eqn. 8) correspond with emission probabilities in a HMM (which we omit here for brevity).",
"Since the updates in Eqn 7 and Eqn 8 are cyclic, in each E-step, we make 20 iterations of updates to the variational distributions and alignment proportions in a round-robin schedule.",
"We note that consistency is enforced during training by the log-of-indicator-variable terms in Eqn 7.",
"This is because any inconsistent assignments get a score of log(0) , which tends to negative infinity.",
"However, to ensure smooth training (and alleviate modeling issues from our mean field approximation), we leverage an annealing-based strategy, where we incrementally increase the penalty for $\log(0)$ terms during training, as $N/2$ for the $N$'th EM iteration (for large $N$, this also is a prohibitive penalty).",
"In our experiments, this was seen to improve training.",
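A tiny sketch of this annealing schedule, replacing the prohibitive log(0) with a penalty that grows as N/2 over EM iterations (the exact schedule beyond what the text states is assumed):

```python
def log_indicator(valid, em_iteration):
    """Annealed stand-in for the log of a validity indicator."""
    if valid:
        return 0.0                 # log(1) for consistent assignments
    return -em_iteration / 2.0     # grows toward -inf as training proceeds

for n in (1, 5, 50):
    print(n, log_indicator(False, n))  # the penalty grows with each EM iteration
```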
"In the M-step, we maximize the objective w.r.t. $\theta_k$: $\theta_k(w) \propto \exp\big( \sum_n \sum_{w \in x_n} b_{nkw}\, q_k \big)$ (9).",
"The one exception is a special copy mechanism for string-valued features.",
"For these, $\theta_{kw}$ is not learned, but simply corresponds to an indicator function denoting whether $w$ matches the value of the feature, e.g., $\theta_{HasText('submit'),\,'submit'} = 1$.",
"First, we evaluate the method for completion rates on tasks from the MiniWoB framework.",
"Following Liu et al. (2018), we filtered 40 tasks from the MiniWoB framework (Shi et al., 2017) that require only clicking and typing actions.",
"During training of the LED model, we sample an explained demonstration for each of the 40 tasks, and models are trained on the aggregate of these (the model sees one explanation-demonstration pair for a task).",
"For testing, models are evaluated on a new instance of a task, where the model greedily computes the demonstration $d$ (specifying a click or typing action on a web element in the current DOM) that would maximize $p(x \mid d, c)$ (see Eqn 1) and executes the corresponding actions.",
"The method then moves on to the next explanation.",
"This requires an enumeration of all possible clicking and typing actions that can be performed in a context $c$ at every step.",
"8 Since the number of actions in a demonstration can be different from the number of steps in the explanation, we heuristically align the sequence of actions in demonstrations to the sequence of sentences in the explanations in our dataset based on a small manually defined list of trigger words.",
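A hedged sketch of this alignment heuristic; the actual manually defined trigger-word list is not reproduced in the text, so the one below is hypothetical:

```python
TRIGGERS = ("click", "type", "press", "enter", "select")  # hypothetical list

def align(explanations, actions):
    """Greedily pair each action-bearing explanation with the next action;
    explanations without a trigger word are left unaligned."""
    pairs, a = [], 0
    for sent in explanations:
        if a < len(actions) and any(t in sent.lower() for t in TRIGGERS):
            pairs.append((sent, actions[a]))
            a += 1
    return pairs

print(align(["Find the forward button.", "Click on it.", "Type the address."],
            ["CLICK(elem7)", "TYPE(elem9, 'bob@x.com')"]))
# [('Click on it.', 'CLICK(elem7)'), ('Type the address.', "TYPE(elem9, 'bob@x.com')")]
```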
"A direct comparison of LED with other approaches is not possible, since they differ considerably in the type of supervision and resources used.",
"Nonetheless, here we compare LED's performance with the following two methods to get a coarse sense of its effectiveness.",
"(This enumeration is possible since the set of actionable elements on a webpage, and the set of candidate strings that can be typed (up to two-token spans from the task description), are not large.)",
"Figure 5 : Task-completion rates for MiniWoB tasks with varying difficulty.",
"Rates are calculated over 100 new instances of each task",
"1. SemParse : This is a supervised semantic parsing baseline, trained on a manually annotated dataset of around 300 explanations labeled with their DSL logical forms (covering roughly one annotated explanation sequence for every task).",
"The model is based on a sequence-to-sequence neural semantic parser from Jia and Liang (2016).",
"During testing, the method parses the sequence of explanations to logical forms, and sequentially attempts to execute the predicted logical forms.",
"In contrast, LED requires no logical form annotations.",
"However, it leverages the inverse semantics of the DSL operators, which may not be feasible for every DSL.",
"2. BC+RL: This is the original approach from Shi et al. (2017), who proposed the MiniWoB framework; it consists of behavior cloning and exploration.",
"This learns a task by supervised learning on about 200 demonstrations, followed by exploration via reinforcement learning to fine-tune the learned policies.",
"In comparison, LED requires no exploration of the environment but leverages additional supervision in the form of natural language explanations.",
"Multiple methods have since explored other RL-based approaches, resulting in much improved performance (Liu et al., 2018; Luo, 2019; Jia et al., 2019).",
"In particular, Liu et al. (2018) leverage a constraint language similar to our DSL to train a RL policy to get large gains in performance.",
"However, all these methods require multiple demonstrations and exploration of the environment.",
"Figure 5 shows task completion performance for different methods on a subset of tasks from the MiniWoB framework.",
"We compute task completion rates over 100 randomly selected test instances of each task.",
"Table 2 (semantic parsing performance, predicted-action match, for interpreting individual explanations in a context): LED(+Syntax) 0.45; LED 0.43; SemParse 0.39; Random 0.28.",
"The differences between instances involves different arguments for a task and differences in the state of the environment.",
"Firstly, we note that the LED approaches consistently outperform SemParse across all tasks.",
"This is a strong result, since LED does not have access to logical form annotations for explanations as SemParse does.",
"This strongly indicates that knowledge of the pragmatic context is important for language interpretation in this domain, since our approach, which roots logical forms in observed demonstrations, performs as well or better on all but one task.",
"We note that there is a large variance among tasks in terms of amenability to learning from explanations or exploration.",
"For tasks like tic-tac-toe , explanation-based methods perform poorly as expected, since learning the game involves reasoning that is hard to explain through step-wise explanation of a demonstration, but can be more naturally learned from exploration.",
"On the other hand, explanation-based methods perform well on tasks that are easily expressed through language.",
"On the whole, the LED approaches are roughly competitive with BC+RL, while requiring no exploration and only a single demonstration.",
"Note that unlike exploration-based methods, LED and SemParse can potentially generalize to new tasks during testing (where no demonstration is seen during training) from explanations and context only.",
"We also note that LED(+Syntax) generally outperforms vanilla LED , although the effect size is not large.",
"However, this trend is statistically significant (binomial test, $p < 0.1$).",
"Next, we quantitatively evaluate the parsing performance of our method at the level of individual explanations (rather than task completion rate).",
"For this, we evaluate the trained models on explanations from a set of 80 demonstrations from the dataset (unseen during training), where we calculate the match between the predicted action from an explanation in the context, and the actual action in the logged demonstration (accuracy of predicted action in a context).",
"Figure 6: Heatmap showing learned values of $\theta_{kw}$ for 20 frequent words $w$ and representative values of $k$; darker shades correspond to higher probability values.",
"Table 2 summarizes this performance, which shows a similar trend as Section 5.1.",
"Both LED methods perform substantially better than SemParse , and all three methods perform much better than randomly choosing the next executable action in the context ( Random ).",
"We note again that LED's training involves no logical form annotations, and is driven purely by grounding explanations in observed demonstrations.",
"Figure 6 depicts the learned lexicon by visualizing a representative subset of learned $\theta_{kw}$ values for LED(+Syntax) (from Sec 4.2) as a heatmap.",
"We note that the model correctly induces mappings between words and DSL operators.",
"The rows and columns are manually ordered to emphasize the block diagonal structure.",
"Table 3 shows the learned transition probabilities, $T_{k_1, k_2}$, for LED(+Syntax).",
"To reduce model size, we share parameters for values of $k$ corresponding to the types $f_0$, $r$, $f_1$, $f_t$ and $a$.",
"A common template for the general structure of user explanations is reflected in the parameter values.",
"Most explanations start with the description of the action a , followed by mentioning features that identify the relevant element f 0 .",
"In fact, f 0 distributions generate the majority of words in most explanations.",
"Relation mentions, when present, usually follow this, in turn followed by features corresponding to $f_1$, reflective of a VSO word order in most explanations.",
"Diagonal values are substantially higher, indicating that words describing specific objects and actions tend to cluster together, as would be expected from the semantics of natural language.",
"From a qualitative error analysis, we note that most errors in task learning come from three sources.",
"Table 3: Learned transition probabilities between latent variable categories for LED(+Syntax), from row to column over (a, f_0, r, f_1, f_t): a: 0.12, 0.70, 0.07, 0.04, 0.07; f_0: 0.05, 0.82, 0.08, 0.04, 0.01; r: 0.01, 0.12, 0.57, 0.26, 0.04; f_1: 0.03, 0.07, 0.22, 0.63, 0.04; f_t: 0.08, 0.30, 0.09, 0.06, 0.47.",
"These reflect a prominence of VSO sentence structures in user explanations.",
"Firstly, although the method learns reasonable mappings between words and semantic operators, it often misaligns attributes of different elements, even with the LED(+Syntax) model.",
"This is likely because the training data is not adequate to learn these constraints, and methods that enforce these through informed priors may be more effective.",
"Another common error is due to challenges with anaphora resolution and discourse referents.",
"Finally, a large number of explanations are not explicit in describing the sequence of actions required to perform a task, and some needed actions remain unmentioned.",
"While this would be expected in realistic computer-human interactions, fixing these errors is beyond the scope of the current method.",
"Our work here is a step in the direction of teachable AI agents that can learn new behavior from conversational interactions with ordinary users.",
"In terms of technique, our bottom-up approach to generating logical forms ensures consistency between interpretations and the ambient context during search.",
"Conversely, this would be complicated in domains with rich composition and nesting in logical forms, which go beyond simple features and relations.",
"e.g., 'click the third email from Jeanette', and in domains where modeling inverse semantics is infeasible.",
"Here, we posed the learning of web-based tasks as an instruction-following problem, with no aspect of interactivity or exploration of the environment.",
"In future work, the possibility of learning from a mix of explanations, exploration and a limited budget of interaction with the environment can be explored.",
"Also, language grounding models that incorporate richer alignments between explanations and demonstrations can lead to more effective learning.",
"Since LED only requires tokenization as pre-processing, it can possibly extend to low-resource scenarios.",
"In terms of problem framing, interactive use-cases that enable the agent to ask questions when it is confused may also be realistic.",
"Future work can also explore curriculum learning in this domain, by first learning simpler tasks, which can be compositionally invoked in explanations for complex tasks."
] | [
"objective",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Continued training is an effective method for domain adaptation in neural machine translation.",
"However, in-domain gains from adaptation come at the expense of general-domain performance.",
"In this work, we interpret the drop in general-domain performance as catastrophic forgetting of general-domain knowledge.",
"To mitigate it, we adapt Elastic Weight Consolidation (EWC), a machine learning method for learning a new task without forgetting previous tasks.",
"Our method retains the majority of general-domain performance lost in continued training without degrading in-domain performance, outperforming the previous state-of-the-art.",
"We also explore the full range of general-domain performance available when some in-domain degradation is acceptable.",
"Neural Machine Translation (NMT) performs poorly without large training corpora (Koehn and Knowles, 2017).",
"Domain adaptation is required when there is sufficient data in the desired language pair but insufficient data in the desired domain (the topic, genre, style or level of formality).",
"This work focuses on the supervised domain adaptation problem where a small in-domain parallel corpus is available for training.",
"Continued training (Luong and Manning, 2015; Sennrich et al., 2015) (also called fine-tuning), where a model is first trained on general-domain data and then domain adapted by training on in-domain data, is a popular approach in this setting as it leads to empirical improvements in the targeted domain.",
"One downside of continued training is that the adapted model's ability to translate general-domain sentences is severely degraded during adaptation (Freitag and Al-Onaizan, 2016).",
"We interpret this drop in general-domain performance as catastrophic forgetting (Goodfellow et al., 2013) of general-domain translation knowledge.",
"Degradation of general-domain performance may be problematic when the domain adapted NMT system is used to translate text outside its target domain, which can happen if there is a mismatch between the data available for domain-specific training and the test data.",
"Poor performance may also concern end users of these MT systems who are expecting good performance on easy' generic sentences.",
"1 Elastic Weight Consolidation (EWC) (Kirk-patrick et al., 2017) is a method for training neural networks to learn a new task without forgetting previously learned tasks.",
"We extend EWC to continued training in NMT (see 3): Our first task is to translate general-domain sentences, and our second is to translate domain-specific sentences (without forgetting how to translate general-domain sentences).",
"EWC works by adding a per-parameter regularizer, based on the Fisher Information matrix, while training on the second task.",
"At a high level, the regularization term keeps parameters which are important to general-domain performance close to the initial general-domain model values during continued training, while allowing parameters less important to general-domain performance to adapt more aggressively to the in-domain data.",
"We show that when adapting general-domain models to the domain of patents, EWC can substantially improve the retention of general-domain performance (up to 18.1 BLEU) without degrading in-domain translation quality.",
"Our proposed method outperforms the previous state-of-the-art method (Dakwale and Monz, 2017) at retaining general-domain performance while adapting to a new domain.",
"(See Cadwell et al. (2018) and Porro Rodriguez et al. (2017) for discussions about lack of trust in MT.)",
"Related Work: A few prior studies address the drop in general-domain NMT performance during continued training.",
"Freitag and Al-Onaizan (2016) found that ensembling generaland in-domain models provides most of the in-domain gain from continued training while retaining most of the general-domain performance.",
"Ensembling doubles memory and computational requirements at translation time, which may be impractical for some applications and does not address our more fundamental goal of building a single model that is robust across domains.",
"Chu et al. (2017) found that mixing general-domain data with the in-domain data used for continued training improved general-domain performance of the resulting models, at the expense of training time.",
"Dakwale and Monz (2017) share our goal of improving the general-domain performance of continued training.",
"They introduce two novel approaches which use the initial, general-domain model to supervise the in-domain model during continued training.",
"The first, multi-objective fine-tuning , which they denote MCL, trains the network with a joint objective of standard log-likelihood loss plus a second term based on knowledge distillation (Hinton et al., 2015; Kim and Rush, 2016) of the general-domain model.",
"The second, multiple-output layer fine tuning , adds new parameters to the output layer during continued training that are specific to the new domain.",
"They found both methods performed similarly, significantly outperforming ensembling in the more challenging case where domain shift is significant, so we select the simpler MCL as our baseline.",
"We do not assume that the domain of input sentences is known, thus we do not compare to methods such as LHUC (Vilar, 2018).",
"Our work applies a regularization term to continued training, similar to Miceli Barone et al. (2017) and Khayrallah et al. (2018), but for the purpose of retaining general-domain performance as opposed to improving in-domain performance.",
"Compared to Kirkpatrick et al. (2017), we present a more general derivation of EWC to address the fact that our tasks are not independent.",
"We also show that the diagonal of the Fisher matrix used in EWC is intractable to compute for sequence-to-sequence models with large vocabularies.",
"Instead we propose to approximate it with the diagonal of the empirical Fisher (Martens, 2014), which can be computed efficiently using gradients from back-propagation.",
"At a high level, our method works as follows:",
"1. Train on the general-domain data, resulting in parameters $\theta^G$.",
"2. Compute the diagonal of the empirical Fisher matrix $\bar{F}$.",
"$\bar{F}_{i,i}$ estimates how important the $i$th parameter $\theta^G_i$ is to the general-domain translation task.",
"3. Initialize parameters to $\theta^G$ and train on in-domain data, using an EWC regularization term which incorporates the diagonal of $\bar{F}$.",
"Intuitively, the regularization term during continued training keeps a parameter $\theta_i$ close to the corresponding general-domain parameter $\theta^G_i$ if the model's general-domain performance is sensitive to that parameter (i.e., large $\bar{F}_{i,i}$).",
"Parameters to which general-domain performance is less sensitive (i.e., small $\bar{F}_{i,i}$) are allowed to be updated more aggressively to fit the in-domain data.",
"For the following discussions, let $X$ be the set of all well-formed source sentences and $Y$ be the set of all possible sequences of target words.",
"Training data $D$ consists of translations $(x, y)$.",
"We assume $x \in X$ is drawn from a true underlying distribution of source sentences $Q_x$, and $y \in Y$ is drawn from a true conditional distribution of correct translations $Q_{y|x}$.",
"Our model, parameterized by $\theta$, computes the conditional probability $P^\theta_{y|x} \triangleq P(y \mid x; \theta)$, which estimates $Q_{y|x}$.",
"Our dataset $D$ is assumed to have come from two distinct tasks: general-domain translation with data $D_G$ and in-domain translation with data $D_S$ (domain-specific).",
"Without loss of generality, $p(D) = p(D_G)\, p(D_S \mid D_G)$.",
"Applying Bayes' rule to $\log p(\theta \mid D)$ and simplifying gives: $\log p(\theta \mid D) = \log p(D_S \mid D_G, \theta) + \log p(\theta \mid D_G) - \log p(D_S \mid D_G)$ (1). We aim to maximize Equation 1 for $\theta$: $\theta^* = \arg\max_\theta \big[ \log p(D_S \mid D_G, \theta) + \log p(\theta \mid D_G) \big]$ (2). Approximating $\log p(\theta \mid D_G)$: To efficiently compute Equation 2, we first approximate $p(\theta \mid D_G)$ as a multivariate Gaussian (for background, see MacKay (1992)) with mean $\theta^G$, obtained by training the network on $D_G$ with standard negative log likelihood (NLL) loss, and diagonal precision matrix (inverse of the covariance matrix) given by the diagonal of the Fisher Information Matrix $F$: $F = \mathbb{E}_{P_{x,y}} \big[ \nabla \log p(x, y \mid \theta)\, \nabla \log p(x, y \mid \theta)^T \big] = \mathbb{E}_{Q_x} \big[ \mathbb{E}_{P^\theta_{y|x}} [ \nabla \log p(y \mid x, \theta)\, \nabla \log p(y \mid x, \theta)^T ] \big]$. This is the expected variance of the likelihood function's gradient at $\theta$ (see Martens (2014) for a detailed derivation).",
"The magnitude of $F_{i,i}$ indicates the model's sensitivity to parameter $\theta_i$ on the general-domain translation task.",
"Note that the first expectation is taken with respect to the true distribution of $x$ and can be approximated by training samples.",
"The second expectation is taken with respect to the model distribution $P^\theta_{y|x}$, which is impractical for a large sequence-to-sequence model as it requires summing over all possible output sequences.",
"We approximate the true Fisher with the empirical Fisher $\bar{F}$ (Martens, 2014), where $y$ is not enumerated but fixed to be the training labels: $\bar{F} = \frac{1}{|D_G|} \sum_{(x,y) \in D_G} \nabla \log p(y \mid x, \theta)\, \nabla \log p(y \mid x, \theta)^T$. Thus we approximate maximizing $\log p(\theta \mid D_G)$ in Equation 2 by minimizing $\sum_i \bar{F}_{i,i} \big[ \theta_i - \theta^G_i \big]^2$.",
"Note that the diagonal of $\bar{F}$ is easily computed from back-propagation gradients.",
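A minimal PyTorch sketch of this computation (the paper's implementation is in Sockeye, so this is illustrative only): accumulate squared back-propagation gradients of the NLL over general-domain batches, squaring at the batch level as the efficiency note later in this section describes. `model`, `batches`, and `nll_loss` are placeholders.

```python
import torch

def empirical_fisher_diagonal(model, batches, nll_loss):
    """Estimate diag(F-bar) by accumulating squared NLL gradients."""
    fisher = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    for batch in batches:
        model.zero_grad()
        nll_loss(model, batch).backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                fisher[name] += p.grad.detach() ** 2  # square, then accumulate
    n = len(batches)
    return {name: f / n for name, f in fisher.items()}
```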
"Tasks are assumed to be independent in the original EWC work (Kirkpatrick et al., 2017), which is unrealistic in the continued training scenario since both tasks are translation in the same language.",
"Since we assume source sentences in $D_G$ and $D_S$ are sampled independently, all dependencies can be attributed to $Q_{y|x}$, representing knowledge of translation (i.e., $D_G \perp\!\!\!\perp D_S \mid Q_{y|x}$).",
"(The fact that continued training works is strong evidence that the in-domain translations are not independent of the general-domain translations.)",
"$Q_{y|x}$ is unknown, so we approximate it with our general-domain model ($\theta^G$).",
"Furthermore, we will regularize continued training such that $\theta$ stays in a region near $\theta^G$.",
"Thus we assume $D_G \perp\!\!\!\perp D_S \mid \theta$ during continued training.",
"This allows us to approximate $\log p(D_S \mid D_G, \theta)$ in Equation 2 with $\log p(D_S \mid \theta)$, which is simply the likelihood function on $D_S$.",
"This yields the continued training objective $\mathcal{L}(\theta) = \mathcal{L}^S_{NLL}(\theta) + \lambda \sum_i \bar{F}_{i,i} \big[ \theta_i - \theta^G_i \big]^2$ (3), where $\mathcal{L}^S_{NLL}(\theta)$ is the standard NLL loss on $D_S$ and $\lambda$ is a hyper-parameter which weights the importance of the general-domain task.",
"Note that the left-hand side of Equation 3 is still the loss over both the general- and in-domain translation tasks, but the right-hand side is based only on in-domain data.",
"All information from the general-domain data has been collapsed into the second term, which is in the form of a regularizer.",
"Our general-domain training data is the concatenation of the parallel portions of the WMT17 news translation task (Bojar et al., 2017) and OpenSub-titles18 (Lison et al., 2018) corpora.",
"For De En and Ru En, we use newstest2017 and the final 2500 lines of OpenSubtitles as our test set.",
"We use newstest2016 and the penultimate 2500 lines of OpenSubtitles as the development set.",
"For Zh En, we use the final and penultimate 4000 lines of the UN portion of the WMT data and the final and penultimate 2500 lines of OpenSubtitles as our test and development sets, respectively.",
"We use the World Intellectual Property Organization (WIPO) COPPA-V2 corpus (Junczys-Dowmunt et al., 2016) as our in-domain dataset.",
"The WIPO data consist of parallel sentences from international patent application abstracts.",
"WIPO De En data are large enough to train strong in-domain systems (Thompson et al., 2018), so we truncate to 100k lines to simulate a more interesting domain adaptation scenario.",
"enizer (Koehn et al., 2007) and byte-pair encoding (BPE) (Sennrich et al., 2016).",
"We train separate BPE models for the source and target languages, each with a vocabulary size of approximately 30k.",
"BPE is trained on the out-of-domain corpus only and then applied to the training, development, and test data for both out-of-domain and in-domain datasets.",
"Token counts for corpora are shown in Table",
"1. We implemented 5 both EWC and MCL in Sockeye (Hieber et al., 2017).",
"To avoid floating point issues, we normalize the empirical Fisher diagonal to have a mean value of 1 .",
"0 instead of dividing by the number of sentences.",
"For efficiency, we compute gradients for a batch of sentences prior to squaring and accumulating them.",
"Fisher regularization is implemented as weight decay (towards G ) in Adam (Kingma and Ba, 2014).",
"Preliminary experiments in Ru En found no meaningful difference in general-domain or in-domain performance when computing the diagonal of F on varying amounts of data ranging from 500k sentences to the full dataset.",
"We also tried computing the diagonal of F on held-out data, as 5 github.com/thompsonb/sockeye_ewc there is some evidence that estimating Fisher on held out data reduces overfitting in natural gradient descent (Pascanu and Bengio, 2013).",
"However, we again found no meaningful differences.",
"All results presented herein estimate the the diagonal of F on 500k training data sentences, which took less than an hour on a GTX 1080 Ti GPU.",
"We use a two-layer LSTM network with hidden unit size 512.",
"The general-domain models are trained with a learning rate of 3E-4.",
"We use dropout ( 0 . 1 ) on both RNN inputs and states.",
"We compute lower-cased multi-bleu.perl .",
"We use label smoothing ( 0 . 1 ) for all experiments except with MCL, because MCL explicitly regularizes the output distribution.",
"MCL uses an interpolation of the cross entropy between the output distribution of the model being trained and the general-domain models output distribution (scaled by ) and the standard training loss (scaled by 1 ).",
"For MCL, we do a grid search over learning rates ( 10 4 , 10 5 , 10 6 ) and values of ( 0 . 1 , 0 . 3 , 0 . 5 , 0 . 7 , 0 . 9 ).",
"For EWC, we do a grid search over the same learning rates and weight decay values of ( 10 2 , 10 3 , 10 4 , 10 5 ).",
"We present the full inand general-domain performance trade-off 6 for both EWC and MCL in Figure",
"1. This is computed by taking the convex hull of a grid search over learning rate and regularization amount for each method.",
"EWC outperforms MCL at all operating points with the exception of Ru En, where MCL provides a small in-domain performance improvement at lower general-domain performance; this was also observed in Khayrallah et al. (2018).",
"Figure 2 shows an example result (for En Ru) of the grid search prior to taking the convex hull.",
"We see similar trends between the three pairs of MCL/EWC curves at corresponding learning rates, but in each case EWC is further up/right, indicating better performance.",
"Note that for both EWC and MCL, both learning rate and regularization amount have a large impact on final inand general-domain performance.",
"General-domain gains for no in-domain performance degradation are presented in Table",
"2. Our method provides large general-domain gains (be-tween 8.0 and 18.1 BLEU), regaining the majority of general-domain performance lost in continued training and substantially outperforming MCL.",
"6 Previous work has compared single runs of competing methods, making comparison difficult (e.g. one system may be better on in-domain, the other better on general-domain).",
"We interpret the general-domain performance drop experienced during continued training as catastrophic forgetting of general-domain knowledge and demonstrate that it can be largely mitigated by applying Elastic Weight Consolidation.",
"We present the full trade-off for inand general-domain performance and show that our method outperforms MCL (Dakwale and Monz, 2017) at all operating points in five of six language pairs.",
"Our method is able to regain the majority of the general-domain performance lost during continued training without compromising in-domain performance and without an additional memory or computational burden at translation-time.",
"Our method retains the advantages of continued training while addressing one of its main shortcomings and can be used in practical situations to avoid poor performance when general-domain input is encountered, even when in-domain performance and translation efficiency are both critical.",
"The authors thank Paul McNamee, Matt Post, Zach Wood-Doughty, and the Johns Hopkins 2018 SCALE participants for for helpful discussions and technical assistance.",
"Brian Thompson is supported by the Department of Defense through the National Defense Science and Engineering Graduate Fellowship (NDSEG) Program.",
"Jeremy Gwinnup received support from the Air Force Office of Scientific Research (AFOSR) Visiting Scientist Program.",
"This work has been partially supported by the DARPA LORELEI and the IARPA MATERIAL programs."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"result",
"abstain",
"method",
"other",
"other",
"other",
"other"
] |
[
"Jacob Devlin (cid:3) Victor",
"[email protected]",
"In this paper, we propose a new universal machine translation approach focusing on languages with a limited amount of parallel data.",
"Our proposed approach utilizes a transfer-learning approach to share lexical and sentence level representations across multiple source languages into one target language.",
"The lexical part is shared through a Universal Lexical Representation to support multilingual word-level sharing.",
"The sentence-level sharing is represented by a model of experts from all source languages that share the source encoders with all other languages.",
"This enables the low-resource language to utilize the lexical and sentence representations of the higher resource languages.",
"Our approach is able to achieve 23 BLEU on Romanian-English WMT2016 using a tiny parallel corpus of 6k sentences, compared to the 18 BLEU of strong baseline system which uses multilingual training and back-translation.",
"Furthermore, we show that the proposed approach can achieve almost 20 BLEU on the same dataset through fine-tuning a pre-trained multi-lingual system in a zero-shot setting.",
"Neural Machine Translation (NMT) (Bahdanau et al., 2015) has achieved remarkable translation quality in various on-line large-scale systems (Wu et al., 2016; Devlin, 2017) as well as achieving state-of-the-art results on Chinese-English translation (Hassan et al., 2018).",
"With such large systems, NMT showed that it can scale up to immense amounts of parallel data in the order of tens of millions of sentences.",
"However, such data is not widely available for all language pairs and domains.",
"In this paper, we propose a novel universal multilingual NMT approach focusing mainly on low resource languages to overcome the limitations of NMT and leverage the capabilities of multi-lingual NMT in such scenarios.",
"Our approach utilizes multi-lingual neural translation system to share lexical and sentence level representations across multiple source languages into one target language.",
"In this setup, some of the source languages may be of extremely limited or even zero data.",
"The lexical sharing is represented by a universal word-level representation where various words from all source languages share the same underlaying representation.",
"The sharing module utilizes monolingual embeddings along with seed parallel data from all languages to build the universal representation.",
"The sentence-level sharing is represented by a model of language experts which enables low-resource languages to utilize the sentence representation of the higher resource languages.",
"This allows the system to translate from any language even with tiny amount of parallel resources.",
"We evaluate the proposed approach on 3 different languages with tiny or even zero parallel data.",
"We show that for the simulated zero-resource\" settings, our model can consistently outperform a strong multi-lingual NMT baseline with a tiny amount of parallel sentence pairs.",
"Neural Machine Translation (NMT) (Bahdanau et al., 2015; Sutskever et al., 2014) is based on Sequence-to-Sequence encoder-decoder model along with an attention mechanism to enable better handling of longer sentences (Bahdanau et al., 2015).",
"Attentional sequence-to-sequence models are modeling the log conditional probability of the 344 Figure 1: BLEU scores reported on the test set for Ro-En.",
"translation Y given an input sequence X .",
"In general, the NMT system consists of two components: an encoder e which transforms the input sequence into an array of continuous representations, and a decoder d that dynamically reads the encoder's output with an attention mechanism and predicts the distribution of each target word.",
"Generally, is trained to maximize the likelihood on a training set consisting of N parallel sentences: L ( ) = 1 NNX n =1 log p (cid:16) Y ( n ) | X ( n ) ; (cid:17) = 1 NNX n =1 TX t =1 log p (cid:16) y ( n ) t | y ( n ) 1: t 1 , f att t ( h ( n ) 1: T s ) (cid:17) (1) where at each step, f att t builds the attention mechanism over the encoder's output h 1: T s .",
"More precisely, let the vocabulary size of source words as V h 1: T s = f ext (cid:2) e x 1 , ..., e x Ts (cid:3) , e x = EI ( x ) (2) where EI RV d is a look-up table of source embeddings, assigning each individual word a unique embedding vector; f ext is a sentence-level feature extractor and is usually implemented by a multi-layer bidirectional RNN (Bahdanau et al., 2015; Wu et al., 2016), recent efforts also achieved the state-of-the-art using non-recurrence f ext , e.g. ConvS2S (Gehring et al., 2017) and Transformer (Vaswani et al., 2017).",
"Extremely Low-Resource NMT Both e and d should be trained to converge using parallel training examples.",
"However, the performance is highly correlated to the amount of training data.",
"As shown in Figure.",
"1, the system cannot achieve reasonable translation quality when the number of the parallel examples is extremely small ( N 13 k sentences, or not available at all N = 0 ).",
"Multi-lingual NMT Lee et al. (2017) and Johnson et al. (2017) have shown that NMT is quite efficient for multilingual machine translation.",
"Assuming the translation from K source languages into one target language, a system is trained with maximum likelihood on the mixed parallel pairs { X ( n,k ) , Y ( n,k ) } n =1 ...N k k =1 ...K , that is L ( ) = 1 NKX k =1 N k X n =1 log p (cid:16) Y ( n,k ) | X ( n,k ) ; (cid:17) (3) where N = P Kk =1 N k .",
"As the input layer, the system assumes a multilingual vocabulary which is usually the union of all source language vocabularies with a total size as V = P Kk =1 V k .",
"In practice, it is essential to shuffle the multilingual sentence pairs into mini-batches so that different languages can be trained equally.",
"Multi-lingual NMT is quite appealing for low-resource languages; several papers highlighted the characteristic that make it a good fit for that such as Lee et al. (2017), Johnson et al. (2017), Zoph et al. (2016) and Firat et al. (2016).",
"Multi-lingual NMT utilizes the training examples of multiple languages to regularize the models avoiding over-fitting to the limited data of the smaller languages.",
"Moreover, the model transfers the translation knowledge from high-resource languages to low-resource ones.",
"Finlay, the decoder part of the model is sufficiently trained since it shares multilingual examples from all languages.",
"Despite the success of training multi-lingual NMT systems; there are a couple of challenges to leverage them for zero-resource languages:",
"Lexical-level Sharing Conventionally, a multilingual NMT model has a vocabulary that represents the union of the vocabularies of all source languages.",
"Therefore, the multi-lingual words do not practically share the same embedding space since each word has its own representation.",
"This does not pose a problem for languages with sufficiently large amount of data, yet it is a major limitation for extremely low resource languages since most of the vocabulary items will not have enough, if any, training examples to get a reliably trained models.",
"A possible solution is to share the surface form of all source languages through sharing sub-units 345 such as subwords (Sennrich et al., 2016b) or characters (Kim et al., 2016; Luong and Manning, 2016; Lee et al., 2017).",
"However, for an arbitrary low-resource language we cannot assume significant overlap in the lexical surface forms compared to the high-resource languages.",
"The low-resource language may not even share the same character set as any high-resource language.",
"It is crucial to create a shared semantic representation across all languages that does not rely on surface form overlap.",
"Sentence-level Sharing It is also crucial for low-resource languages to share source sentence representation with other similar languages.",
"For example, if a language shares syntactic order with another language it should be feasible for the low-resource language to share such representation with another high recourse language.",
"It is also important to utilize monolingual data to learn such representation since the low or zero resource language may have monolingual resources only.",
"We propose a Universal NMT system that is focused on the scenario where minimal parallel sentences are available.",
"As shown in Fig. 2, we introduce two components to extend the conventional multi-lingual NMT system (Johnson et al., 2017): Universal Lexical Representation (ULR) and Mixture of Language Experts (MoLE) to enable both word-level and sentence-level sharing, respectively.",
"As we highlighted above, it is not straightforward to have a universal representation for all languages.",
"One potential approach is to use a shared source vocabulary, but this is not adequate since it assumes significant surface-form overlap in order being able to generalize between high-resource and low-resource languages.",
"Alternatively, we could train monolingual embeddings in a shared space and use these as the input to our MT system.",
"However, since these embeddings are trained on a monolingual objective, they will not be optimal for an NMT objective.",
"If we simply allow them to change during NMT training, then this will not generalize to the low-resource language where many of the words are unseen in the parallel data.",
"Therefore, our goal is to create a shared embedding space which",
"(a) is trained towards NMT rather than a monolingual objective,",
"(b) is not based on lexical surface forms, and",
"(c) will generalize from the high-resource languages to the low-resource language.",
"We propose a novel representation for multilingual embedding where each word from any language is represented as a probabilistic mixture of universal-space word embeddings.",
"In this way, semantically similar words from different languages will naturally have similar representations.",
"Our method achieves this utilizing a discrete (but probabilistic) universal token space, and then learning the embedding matrix for these universal tokens directly in our NMT training.",
"Lexicon Mapping to the Universal Token Space We first define a discrete universal token set of size M into which all source languages will be projected.",
"In principle, this could correspond to any human or symbolic language, but all experiments here use English as the basis for the universal token space.",
"As shown in Figure 2, we have multiple embedding representations.",
"EQ is language-specific embedding trained on monolingual data and EK is universal tokens embedding.",
"The matrices EK and EQ are created beforehand and are not trainable during NMT training.",
"EU is the embedding matrix for these universal tokens which is learned during our NMT training.",
"It is worth noting that shaded parts in Figure2 are trainable during NMT training process.",
"Therefore, each source word e x is represented as a mixture of universal tokens M of EU .",
"The mapping q projects the multilingual words into the universal space based on their semantic similarity.",
"That is, q ( u | x ) is a distribution based on the distance D s ( u, x ) between u and x as: q ( u i | x ) = e D ( u i ,x ) / P u j e D ( u j ,x ) / (5) where is a temperature and D ( u i , x ) is a scalar score which represents the similarity between source word x and universal token u i : D ( u, x ) = EK ( u ) A EQ ( x ) T (6) where EK ( u ) is the key embedding of word u , EQ ( x ) is the query embedding of source word x .",
"This is a key-value representation, where the queries are the monolingual language-specific embedding, the keys are the universal tokens embeddings and the values are a probabilistic distribution over the universal NMT embeddings.",
"This can represent unlimited multi-lingual vocabulary that has never been observed in the parallel training data.",
"It is worth noting that the trainable transformation matrix A is added to the query matching mechanism with the main purpose to tune the similarity scores towards the translation task.",
"A is shared across all languages and optimized discriminatively during NMT training such that the system can fine-tune the similarity score q () to be optimal for NMT.",
"Shared Monolingual Embeddings In general, we create one EQ matrix per source language, as well as a single EK matrix in our universal token language.",
"For Equation 6 to make sense and generalize across language pairs, all of these embedding matrices must live in a similar semantic space.",
"To do this, we first train off-the-shelf monolingual word embeddings in each language, and then learn one projection matrix per source language which maps the original monolingual embeddings into EK space.",
"Typically, we need a list of source word universal token pairs (seeds S k ) to train the projection matrix for language k .",
"Since vectors are normalized, learning the optimal projection is equivalent to finding an orthogonal transformation O k that makes the projected word vectors as close as to its corresponded universal tokens: max O k X ( x, y ) S k (cid:0) EQ k ( x ) O k (cid:1) EK ( y ) T s.t. OT k O k = I, k = 1 , ..., K (7) which can be solved by SVD decomposition based on the seeds (Smith et al., 2017).",
"In this paper, we chose to use a short list of seeds from automatic word-alignment of parallel sentences to learn the projection.",
"However, recent efforts (Artetxe et al., 2017; Conneau et al., 2018) also showed that it is possible to learn the transformation without any seeds, which makes it feasible for our proposed method to be utilized in purely zero parallel resource cases.",
"It is worth noting that O k is a language-specific matrix which maps the monolingual embeddings of each source language into a similar semantic space as the universal token language.",
"Interpolated Embeddings Certain lexical categories (e.g. function words) are poorly captured by Equation 4.",
"Luckily, function words often have very high frequency, and can be estimated robustly from even a tiny amount of data.",
"This motivates an interpolated e x where embeddings for very frequent words are optimized directly and not through the universal tokens: ( x ) EI ( x ) + ( x ) MX i =1 EU ( u i ) q ( u i | x ) (8) Where EI ( x ) is a language-specific embedding of word x which is optimized during NMT training.",
"In general, we set ( x ) to 1.0 for the top k most frequent words in each language, and 0.0 otherwise, 347 where k is set to 500 in this work.",
"It is worth noting that we do not use an absolute frequency cutoff because this would cause a mismatch between high-resource and low-resource languages, which we want to avoid.",
"We keep ( x ) fixed to 1.0.",
"An Example To give a concrete example, imagine that our target language is English (En), our high-resource auxiliary source languages are Spanish (Es) and French (Fr), and our low-resource source language is Romanian (Ro).",
"En is also used for the universal token set.",
"We assume to have 10M+ parallel Es-En and Fr-En, and a few thousand in Ro-En.",
"We also have millions of monolingual sentences in each language.",
"We first train word2vec embeddings on monolingual corpora from each of the four languages.",
"We next align the Es-En, Fr-En, and Ro-En parallel corpora and extract a seed dictionary of a few hundred words per language, e.g., gato cat , chien dog .",
"We then learn three matrices O 1 , O 2 , O 3 to project the Es, Fr and Ro embeddings ( EQ 1 , EQ 2 , EQ 3 ), into En ( EK ) based on these seed dictionaries.",
"At this point, Equation 5 should produce reasonable alignments between the source languages and En, e.g., q ( horse | magar ) = 0 .",
"5 , q ( donkey | magar ) = 0 .",
"3 , q ( cow | magar ) = 0 .",
"2 , where magar is the Ro word for donkey .",
"As we paved the road for having a universal embedding representation; it is crucial to have a language-sensitive module for the encoder that would help in modeling various language structures which may vary between different languages.",
"We propose a Mixture of Language Experts (MoLE) to model the sentence-level universal encoder.",
"As shown in Fig. 2, an additional module of mixture of experts is used after the last layer of the encoder.",
"Similar to (Shazeer et al., 2017), we have a set of expert networks and a gating network to control the weight of each expert.",
"More precisely, we have a set of expert networks as f 1 ( h ) , ..., f K ( h ) where for each expert, a two-layer feed-forward network which reads the output hidden states h of the encoder is utilized.",
"The output of the MoLE module h 0 will be a weighted sum of these experts to replace the encoder's representation: h 0 = KX k =1 f k ( h ) softmax ( g ( h )) k , (9) where an one-layer feed-forward network g ( h ) is used as a gate to compute scores for all the experts.",
"In our case, we create one expert per auxiliary language.",
"In other words, we train to only use expert f i when training on a parallel sentence from auxiliary language i .",
"Assume the language 1 ...K 1 are the auxiliary languages.",
"That is, we have a multi-task objective as: L gate = K 1 X k =1 N k X n =1 log [ softmax ( g ( h )) k ] (10) We do not update the MoLE module for training on a sentence from the low-resource language.",
"Intuitively, this allows us to represent each token in the low-resource language as a context-dependent mixture of the auxiliary language experts.",
"We extensively study the effectiveness of the proposed methods by evaluating on three almost-zero-resource language pairs with variant auxiliary languages.",
"The vanilla single-source NMT and the multi-lingual NMT models are used as baselines.",
"Dataset We empirically evaluate the proposed Universal NMT system on 3 languages Romanian (Ro) / Latvian (Lv) / Korean (Ko) translating to English (En) in near zero-resource settings.",
"To achieve this, single or multiple auxiliary languages from Czech (Cs), German (De), Greek (El), Spanish (Es), Finnish (Fi), French (Fr), Italian (It), Portuguese (Pt) and Russian (Ru) are jointly trained.",
"The detailed statistics and sources of the available parallel resource can be found in Table 1, where we further down-sample the corpora for the targeted languages to simulate zero-resource.",
"It also requires additional large amount of monolingual data to obtain the word embeddings for each language, where we use the latest Wikipedia dumps 5 for all the languages.",
"Typically, the monolingual corpora are much larger than the parallel corpora.",
"For validation and testing, the standard validation and testing sets are utilized for each targeted language.",
"1 http://www.statmt.org/wmt16/translation-task.html 2 https://sites.google.com/site/koreanparalleldata/ 3 http://www.statmt.org/europarl/ 4 http://opus.lingfil.uu.se/MultiUN.php (subset) 5 https://dumps.wikimedia.org/ 348 Zero-Resource Translation Auxiliary High-Resource Translation source Ro Ko Lv Cs De El Es Fi Fr It Pt Ru corpora WMT16 1 KPD 2 Europarl v8 3 UN 4 size 612k 97k 638k 645k 1.91m 1.23m 1.96m 1.92m 2.00m 1.90m 1.96m 11.7m subset 0/6k/60k 10k 6k / 2.00m Table 1: Statistics of the available parallel resource in our experiments.",
"Preprocessing All the data (parallel and monolingual) have been tokenized and segmented into subword symbols using byte-pair encoding (BPE) (Sennrich et al., 2016b).",
"We use sentences of length up to 50 subword symbols for all languages.",
"For each language, a maximum number of 40 , 000 BPE operations are learned and applied to restrict the size of the vocabulary.",
"We concatenate the vocabularies of all source languages in the multilingual setting where special a language marker \" have been appended to each word so that there will be no embedding sharing on the surface form. Thus, we avoid sharing the representation of words that have similar surface forms though with different meaning in various languages. Architecture We implement an attention-based neural machine translation model which consists of a one-layer bidirectional RNN encoder and a two-layer attention-based RNN decoder. All RNNs have 512 LSTM units (Hochreiter and Schmidhu-ber, 1997). Both the dimensions of the source and target embedding vectors are set to 512. The dimensionality of universal embeddings is also the same. For a fair comparison, the same architecture is also utilized for training both the vanilla and multilingual NMT systems. For multilingual experiments, 1 5 auxiliary languages are used. When training with the universal tokens, the temperature (in Eq. 6) is fixed to 0 . 05 for all the experiments. Learning All the models are trained to maximize the log-likelihood using Adam (Kingma and Ba, 2014) optimizer for 1 million steps on the mixed dataset with a batch size of 128. The dropout rates for both the encoder and the decoder is set to 0.4. We have open-sourced an implementation of the proposed model. 6 4.2 Back-Translation We utilize back-translation (BT) (Sennrich et al., 2016a) to encourage the model to use more information of the zero-resource languages. More concretely, we build the synthetic parallel corpus 6 https://github.com/MultiPath/NA-NMT/tree/universal_translation by translating on monolingual data 7 with a trained translation system and use it to train a backward direction translation model. Once trained, the same operation can be used on the forward direction. Generally, BT is difficult to apply for zero resource setting since it requires a reasonably good translation system to generate good quality synthetic parallel data. Such a system may not be feasible with tiny or zero parallel data. However, it is possible to start with a trained multi-NMT model. 4.3 Preliminary Experiments Training Monolingual Embeddings We train the monolingual embeddings using fastText 8 (Bojanowski et al., 2017) over the Wikipedia corpora of all the languages. The vectors are set to 300 dimensions, trained using the default setting of skip-gram . All the vectors are normalized to norm 1 . Pre-projection In this paper, the pre-projection requires initial word alignments (seeds) between words of each source language and the universal tokens. More precisely, for the experiments of Ro/Ko/Lv-En, we use the target language (En) as the universal tokens; fast_align 9 is used to automatically collect the aligned words between the source languages and English. 5 Results We show our main results of multiple source languages to English with different auxiliary languages in Table 2. To have a fair comparison, we use only 6k sentences corpus for both Ro and Lv with all the settings and 10k for Ko. It is obvious that applying both the universal tokens and mixture of experts modules improve the overall translation quality for all the language pairs and the improvements are additive. To examine the influence of auxiliary languages, we tested four sets of different combinations of auxiliary languages for Ro-En and two sets for Lv-En. 7 We used News Crawl provided by WMT16 for Ro-En. 
8 https://github.com/facebookresearch/fastText 9 https://github.com/clab/fast_align 349 0k 6k 60k 600k size of parallel corpus 0 5 10 15 20 25 BLEU s c o r e s Vanilla Multi-NMT Multi-NMT+UnivTok Figure 3: BLEU score vs corpus size 0 2 4 6 8 10 12 14 16 # of Missing tokens on 6K data 10.0 12.5 15.0 17.5 20.0 22.5 25.0 27.5 BLEU s c o r e s Mult-NMT-6k Mult-NMT-6k + UnivTok Mult-NMT-60k Mult-NMT-6k + UnivTok + BT Figure 4: BLEU score vs unknown tokens Src Aux Multi +ULR + MoLE Ro Cs De El Fi 18.02 18.37 Cs De El Fr 19.48 19.52 De El Fi It 19.11 19.33 Es Fr It Pt 14.83 20.01 20.51 Lv Es Fr It Pt 7.68 10.86 11.02 Es Fr It Pt Ru 7.88 12.40 13.16 Ko Es Fr It Pt 2.45 5.49 6.14 Table 2: Scores over variant source languages (6k sentences for Ro & Lv, and 10k for Ko). Multi\" means the Multi-lingual NMT baseline.",
"It shows that Ro performs best when the auxiliary languages are all selected in the same family (Ro, Es, Fr, It and Pt are all from the Romance family of European languages) which makes sense as more knowledge can be shared across the same family.",
"Similarly, for the experiment of Lv-En, improvements are also observed when adding Ru as additional auxiliary language as Lv and Ru share many similarities because of the geo-graphical influence even though they don't share the same alphabet.",
"We also tested a set of Ko-En experiments to examine the generalization capability of our approach on non-European languages while using languages of Romance family as auxiliary languages.",
"Although the BLEU score is relatively low, the proposed methods can consistently help translating less-related low-resource languages.",
"It is more reasonable to have similar languages as auxiliary languages.",
"We perform thorough experiments to examine effectiveness of the proposed method; we do ablation study on Ro-En where all the models are trained",
"Unknown Tokens One explanation on how ULR help the translation for almost zero resource languages is it greatly cancel out the effects of missing tokens that would cause out-of-vocabularies during testing.",
"As in Fig. 4, the translation performance heavily drops when it has more unknown\" which cannot be found in the given 6k training set, especially for the typical multilingual NMT. Instead, these unknown\" tokens will naturally have their embeddings based on ULR projected universal tokens even if we never saw them in the training set.",
"When we apply back-translation over the monolingual data, the performance further improves which can almost catch up with the model trained with 60k data.",
"Examples Figure 5 shows some cherry-picked examples for Ro-En.",
"Example",
"(a) shows how the lexical selection get enriched when introducing ULR (Lex-6K) as well as when adding Back Translation (Lex-6K-BT).",
"Example",
"(b) shows the effect of using romance vs non-romance languages as the supporting languages for Ro.",
"Example",
"(c) shows the importance of having a trainable A as have 10 For 0k experiments, we used the pre-projection learned from 6k data.",
"It is also possible to use unsupervised learned dictionary.",
"Visualization of MoLE Figure 6 shows the activations along with the same source sentence with various auxiliary languages.",
"It is clear that MoLE is effectively switching between the experts when dealing with zero-resource language words.",
"For this particular example of Ro, we can see that the system is utilizing various auxiliary languages based on their relatedness to the source language.",
"We can approximately rank the relatedness based of the influence of each language.",
"For instance, the influence can be approximately ranked as Es P t > F r It > Cs El > De > F i , which is interestingly close to the grammatical relatedness of Ro to these languages.",
"On the other hand, Cs has a strong influence although it does not fall in the same language family with Ro, we think this is due to the geo-graphical influence between the two languages since Cs and Ro share similar phrases and expressions.",
"This shows that MoLE learns to utilize resources from similar languages.",
"All the described experiments above had the low resource languages jointly trained with all the auxiliary high-resource languages, where the training of the large amount of high-resource languages can be seen as a sort of regularization.",
"It is also common to train a model on high-resource languages first, and then fine-tune the model on a small resource language similar to transfer learning approaches (Zoph et al., 2016).",
"However, it is not trivial to effectively fine-tune NMT models on extremely low resource data since the models easily over-fit due to over-parameterization of the neural networks.",
"In this experiment, we have explored the fine-tuning tasks using our approach.",
"First, we train a Multi-NMT model (with ULR) on {Es, Fr, It, Pt}-En languages only to create a zero-shot setting for Ro-En translation.",
"Then, we start fine-tuning the model with 6 k parallel corpora of Ro-En, with and without ULR.",
"As shown in Fig. 7, both models improve a lot over the baseline.",
"With the help of ULR, we can achieve a BLEU score of around 10 .",
"7 (also shown in Fig. 3) for Ro-En translation with zero-resource\" translation.",
"The BLEU score can further improve to almost 20 BLEU after 3 epochs of training on 6 k sentences using ULR.",
"This is almost 6 BLEU higher than the best score of the 351",
"baseline.",
"It is worth noting that this fine-tuning is a very efficient process since it only takes less than 2 minutes to train for 3 epochs over such tiny amount of data.",
"This is very appealing for practical applications where adapting a per-trained system on-line is a big advantage.",
"As a future work, we will further investigate a better fine-tuning strategy such as meta-learning (Finn et al., 2017) using ULR.",
"Multi-lingual NMT has been extensively studied in a number of papers such as Lee et al. (2017), Johnson et al. (2017), Zoph et al. (2016) and Firat et al.",
"(2016).",
"As we discussed, these approaches have significant limitations with zero-resource cases.",
"Johnson et al. (2017) is more closely related to our current approach, our work is extending it to overcome the limitations with very low-resource languages and enable sharing of lexical and sentence representation across multiple languages.",
"Two recent related works are targeting the same problem of minimally supervised or totally unsupervised NMT.",
"Artetxe et al. (2018) proposed a totally unsupervised approach depending on multi-lingual embedding similar to ours and dual-learning and reconstruction techniques to train the model from mono-lingual data only.",
"Lample et al. (2018) also proposed a quite similar approach while utilizing adversarial learning.",
"In this paper, we propose a new universal machine translation approach that enables sharing resources between high resource languages and extremely low resource languages.",
"Our approach is able to achieve 23 BLEU on Romanian-English WMT2016 using a tiny parallel corpus of 6k sentences, compared to the 18 BLEU of strong multilingual baseline system."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"objective",
"result"
] |
[
"This paper introduces MEDIASUM 1 , a large-scale media interview dataset consisting of 463.6K transcripts with abstractive summaries.",
"To create this dataset, we collect interview transcripts from NPR and CNN and employ the overview and topic descriptions as summaries.",
"Compared with existing public corpora for dialogue summarization, our dataset is an order of magnitude larger and contains complex multi-party conversations from multiple domains.",
"We conduct statistical analysis to demonstrate the unique positional bias exhibited in the transcripts of televised and radioed interviews.",
"We also show that MEDIASUM can be used in transfer learning to improve a model's performance on other dialogue summarization tasks.",
"Dialogue summarization can provide a succinct synopsis for conversations between two or more participants, based on human-transcribed or machine-generated transcripts.",
"Dialogue summaries are useful for participants to recap salient information in the talk and for absentees to grasp the key points.",
"As a result, several models have been recently proposed to summarize daily conversations (Gliwa et al., 2019; Chen and Yang, 2020), meeting transcripts (Zhu et al., 2020) and customer support conversations (Liu et al., 2019).",
"However, compared with the abundance of text summarization datasets, there are very few public datasets for dialogue summarization.",
"And existing datasets are limited to their small sizes.",
"For example, the benchmark datasets for meeting summarization, AMI (McCowan et al., 2005) and ICSI (Janin et al., 2003), only contain transcripts and abstractive summaries for 137 and 59 business Equal contribution 1 https://github.com/zcgzcgzcg1/ MediaSum/ meetings, respectively.",
"While recently some larger dialogue summarization datasets have been proposed, they are either built from a narrow domain, e.g. the CRD3 dataset (Rameshkumar and Bailey) which is built from conversations in a live-streamed show for the Dungeons and Dragons game, or not publicized due to privacy reasons, e.g. the Didi dataset (Liu et al., 2019) from customer service conversations.",
"This lack of large-scale dialogue summarization datasets is due to a higher labeling cost compared with news articles and privacy issues with many real daily dialogues and business meetings.",
"On the other hand, media interview transcripts and the associated summaries/topics can be a valuable source for dialogue summarization.",
"In a broadcast interview, the host discusses various topics with one or more guests.",
"As many interviews proceed with pre-defined topics, the accompanying summaries are of a relatively high quality.",
"Also, the wide variety of topics, different backgrounds of speakers, and the colloquial form of chat make these interviews very close to daily conversations and business meetings.",
"Therefore, we collect public interview transcripts and the associated summaries/topics from NPR and CNN to build a large-scale dialogue summarization dataset, MEDIASUM .",
"In NPR, each transcript comes with an overview of the interview, which is used as the summary in our dataset.",
"We leverage the INTERVIEW dataset (Majumder et al., 2020) to get transcripts and crawl the associated descriptions.",
"We end up with 49.4K NPR transcripts with summaries.",
"We then collect 269.4K CNN interview transcripts from 2000 to 2020, each with a list of topic descriptions.",
"As many CNN interviews contain multiple topics, we conduct segmentation at the boundary of commercial breaks to assign each topic to the most relevant interview segment via lexical matching.",
"In this way, we not only obtain transcripts with a more concentrated topic but also enlarge the total number of instances.",
"We end up with 414.2K CNN transcript segments with topic descriptions as summaries.",
"Thus, in total, our MEDIASUM dataset contains 463.6K transcripts with summaries.",
"We show that compared to existing public dialogue summarization datasets, MEDIASUM contains more speakers, longer conversation and is an order of magnitude larger.",
"Also, we demonstrate the unique positional bias in interview dialogues: while a televised interview often mentions keywords in the summary at the beginning of the program, a radio interview usually mentions these keywords at both the beginning and the end of the program.",
"In experiments, we evaluate several benchmark summarization models on our dataset.",
"We then show that after fine-tuning on MEDIASUM , models' performance can be improved on other dialogue summarization tasks like AMI, ICSI and SAMSum, demonstrating the transfer learning capability of our dataset.",
"Due to the success of corpus-based methods, the past decade saw the emergence of many dialogue datasets on various domains (Budzianowski et al., 2018; Lowe et al., 2015).",
"However, very few of these datasets contain corresponding summary text.",
"As human dialogues have very different structures and language patterns from written articles, dialogue summarization models can only limitedly benefit from the largely available news summarization data (Zhu et al., 2020).",
"Current public datasets for dialogue summarization are either very small or in a specific domain.",
"AMI (McCowan et al., 2005) and ICSI (Janin et al., 2003) contain 137 and 59 meeting transcripts with abstractive summaries.",
"AMI meetings are recorded in an artificial environment with actors and ICSI contains meetings of a speech group.",
"MultiWOZ (Budzianowski et al., 2018) is a multi-domain task-oriented dialogue dataset where the instructions have been used as summaries (Yuan and Yu, 2019).",
"All dialogues are conducted between one user and one agent on the topic of booking and inquiry.",
"SAMSum (Gliwa et al., 2019) hires linguists to write messenger-like daily conversations.",
"Although the dialogues are open-domain, they are not from real human conversations.",
"CRD3 (Rameshkumar and Bailey) contains 159 episodes from the Critical Role show with transcribed conversations between Dungeons and Dragon players.",
"Additionally, there are non-public dialogue summarization datasets in th domains of customer support (Liu et al., 2019) and medical conversation (Krishna et al., 2020).",
"We first collect interview transcriptions from National Public Radio (NPR, www.npr.org ).",
"The INTERVIEW dataset (Majumder et al., 2020) contains 105K transcripts from NPR but does not include interview summaries or the link to the transcript page.",
"We find a majority of NPR interviews come with an overview description before the transcription text, which can be used as summaries.",
"Thus, for each interview in the INTERVIEW dataset, we use the NPR searching service to get the link to the corresponding page and extract the description text if it exists.",
"We filter out descriptions with more than 200 words and collect 49.4K transcripts with summaries.",
"The CNN transcription service provides transcripts of televised interviews and a list of discussed topics, which can be used as summaries ( transcripts.cnn.com ).",
"We crawl CNN transcripts from 2014 to 2020, combined with the data from 2000 to 2014 (Sood, 2017), and end up with 269.4K transcripts with summaries.",
"Transcript segmentation for topic match.",
"Interviews with multiple topics are often long, and the mixing of multiple topics makes it hard for models to generate accurate summaries.",
"Among the collected CNN interviews, 157.9K transcripts, or 58.6%, have more than one topic.",
"Thus, we try to partition multi-topic interviews into segments and match each topic to a segment.",
"We find that the televised CNN interviews often contain several commercial breaks marked in the transcript.",
"These ads usually come in between topics.",
"Therefore, we partition the transcript at the boundaries of commercial breaks.",
"Then, we assign each topic to the segment containing the most (at least one) non-stop words in the topic.",
"We do not count the last 50 words in a segment where the host often reminds watchers of the next topic after the commercial break.",
"Among the 157.9K multi-topic interviews, 330.4K segments are associated with at least one topic.",
"To make sure that the summary contains enough information, we filter out summaries with fewer than 5 words.",
"In the end, we construct Statistics NPR CNN Dialogues 49,420 414,176 Avg.",
"414.2K CNN interview transcripts with summaries.",
"As transcripts from the NPR and CNN are from similar domains, we combine them into a unified summarization dataset, MEDIASUM , containing 463.6K pairs of transcripts and summaries.",
"As far as we know, this is the largest public open-domain dialogue summarization dataset.",
"We show an example dialogue with its summary in Table 5.",
"Here, we note that the summary styles of NPR and CNN are different.",
"Table 1 shows that although the dialogue length and number of speakers are similar in NPR and CNN, the summaries from NPR are much longer and more abstractive, indicated by a higher ratio of novel words in summary that do not appear in the dialogue.",
"In this section, we investigate different aspects of the MEDIASUM dataset via statistics.",
"We leverage the Latent Dirichlet Allocation (Blei et al., 2003) tool in scikit-learn package (Pedregosa et al., 2011) to analyze the main dialogue topics.",
"We manually name the topic clusters based on the returned top 10 words in each cluster.",
"The top 5 topics are politics (26.3%), international news (13.3%), crime (12.7%), economy (12.5%) and US news (11.7%).",
"The dialogues in MEDIASUM have on average 30.0 turns, 6.5 speakers and 1,553.7 words, and the summaries have on average 14.4 words.",
"This shows that most dialogues in our dataset are multiparty conversations of medium to long lengths.",
"Table 2 compares MEDIASUM with other public dialogue summarization datasets.",
"As shown, MEDIASUM contains much longer dialogues and more speakers than MultiWOZ 2.0 and SAMSum.",
"This makes it suitable for training models targeted for multi-party dialogue or meeting summarization.",
"Also, while AMI, ICSI and MultiWOZ 2.0 contain dialogues either from limited domains or un-0.010 0.015 0.020 0.025 0.030 0 25 50 75 100 Position of summary words in the dialogue F r e qu e n cy Dataset CNNNPR Figure 1: The frequency of the non-stop summary words appearing at different positions of the dialogue.",
"der artificial context, MEDIASUM is a much larger dataset containing radioed and televised interview transcripts covering much broader topics.",
"It has been found that in many news articles, the most important information is often shown at the beginning, i.e. the inverted pyramid structure (Kedzie et al., 2018).",
"In this section, we investigate whether a similar positional bias is present in multi-party dialogues.",
"We record the position of each non-stop word in the transcript that also appears in the summary.",
"To normalize, we partition each transcript into 100 equal-length bins and count the frequency that summary words appear in each bin.",
"As shown in Fig. 1, similar to news articles, the beginning of transcripts from both CNN and NPR contain more summary words on average.",
"However, different from televised CNN interviews, NPR programs also contain many summary words near the end.",
"To make sure that the trend in CNN is not caused by topic segmentation, we compute the frequency for original single-topic CNN transcripts and find that the trend is very similar to the overall distribution (Ap-pendix C).",
"Thus, we suggest that the difference in positional bias between televised and radioed programs may be because viewers watching interviews on TV are relatively more focused, diminishing the need to recapitulate the main points before the program ends.",
"We apply several benchmark summarization models to the MEDIASUM dataset and report the results, including PTGen (See et al., 2017), the pre-Dataset",
"trained models UniLM-base-uncased (Dong et al., 2019) and BART-Large (Lewis et al., 2019).",
"The input concatenates transcripts from all turns, each prepended with the speaker name.",
"We also include the LEAD-3 baseline which takes the first three sentences of the transcript as the summary.",
"More implementation details are shown in Appendix D. We randomly select 10K instances for validation and another 10K for test.",
"We use the ROUGE (Lin, 2004) metrics and hyper-parameters are cho-sen based on the highest ROUGE-L score on the validation set.",
"As shown in Table 3, the LEAD-3 baseline has a relatively weak performance, indicating that media dialogues exhibit less lead bias than news articles.",
"This aligns with the general guideline to avoid inverted pyramid structure in digital programs (Macadam).",
"Moreover, pre-trained models such as BART and UniLM outperform the non-pre-trained PTGen model, showing the effectiveness of pre-training.",
"In this section, we evaluate the transfer capability of MEDIASUM by employing it for further training to improve the performance on other dialogue summarization tasks of different domains",
"and styles.",
"Specifically, we take the pre-trained model UniLM (Dong et al., 2019), fine-tune it on MEDIASUM , and then train it on datasets for meeting and dialogue summarization: AMI (McCowan et al., 2005), ICSI (Janin et al., 2003) and SAMSum (Gliwa et al., 2019).",
"As shown in Table 4, on all three datasets, training on MEDIASUM leads to improvement on the target dataset.",
"This shows the potential of using MEDIASUM as a transfer learning dataset for other dialogue summarization tasks.",
"We introduce MEDIASUM , a large-scale media interview dataset for dialogue summarization, consisting of 463.6K transcripts and summaries from NPR and CNN.",
"We conduct transcript segmentation to align topic descriptions to segments for CNN interviews.",
"The MEDIASUM dataset is an order of magnitude larger than existing corpora and contains complex multi-party conversations from { \"id\": \"NPR-11\", \"program\": \"Day to Day\", \"date\": \"2008-06-10\", \"url\": \"https://www.npr.org/templates/story/story.php?storyId=91356794\", \"title\": \"Researchers Find Discriminating Plants\", \"summary\": \"The sea rocket' shows preferential treatment to plants that are its kin. Evolutionary plant ecologist Susan Dudley of McMaster University in Ontario discusses her discovery.\", \"utt\": [ \"This is Day to Day. I'm Madeleine Brand.\", \"And I'm Alex Cohen.\", \"Coming up, the question of who wrote a famous religious poem turns into a very unchristian battle.\", \"First, remember the 1970s? People talked to their houseplants, played them classical music. They were convinced plants were sensuous beings and there was that 1979 movie, The Secret Life of Plants.'\", \"Only a few daring individuals, from the scientific establishment, have come forward with offers to replicate his experiments, or test his results. The great majority are content simply to condemn his efforts without taking the trouble to investigate their validity.\", ... \"OK. Thank you.\", \"That's Susan Dudley. She's an associate professor of biology at McMaster University in Hamilt on Ontario. She discovered that there is a social life of plants.\" ], \"speaker\": [ \"MADELEINE BRAND, host\", \"ALEX COHEN, host\", \"ALEX COHEN, host\", \"MADELEINE BRAND, host\", \"Unidentified Male\", ... \"Professor SUSAN DUDLEY (Biology, McMaster University)\", \"MADELEINE BRAND, host\" ] } Table 5: Example dialogue and summary from MEDIASUM .",
"multiple domains.",
"We also show that MEDIASUM can be used as a dataset for transfer learning to improve a model's performance on other dialogue summarization tasks.",
"We have used only the publicly available transcripts data from the media sources and adhere to their only-for-research-purpose guideline.",
"As media and guests may have biased views, the transcripts and summaries will likely contain them.",
"The content of the transcripts and summaries only reflect the views of the media and guests, and should be viewed with discretion.",
"We thank William Hinthorn for proof-reading the paper and thank the anonymous reviewers for their insightful comments."
] | [
"abstain",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"To date, most of recent work under the retrieval-reader framework for open-domain QA focuses on either extractive or generative reader exclusively.",
"In this paper, we study a hybrid approach for leveraging the strengths of both models.",
"We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods can provide large improvements over previous state-of-the-art models.",
"We demonstrate that an hybrid approach by combining answers from both readers can effectively take advantages of extractive and generative answer inference strategies and outperform single models as well as homogeneous ensembles.",
"Our approach outperforms previous state-of-the-art models by 3 .",
"3 and 2 .",
"7 points in exact match on NaturalQuestions and TriviaQA respectively.",
"Open-domain question answering (QA) has been a long standing problem in natural language understanding, information retrieval, and related fields (Chen and Yih, 2020).",
"An typical open-domain QA system follows the retrieval-reader framework (Chen et al., 2017; Guu et al., 2020; Karpukhin et al., 2020), where the relevant passages are first retrieved from a large text corpus, and a reader module then navigates multiple passages for answer inference.",
"In this work, we study two paradigms of reader modules, i.e. extractive (Karpukhin et al., 2020; Guu et al., 2020) and generative (Lewis et al., 2020; Izacard and Grave, 2021) readers.",
"The extractive reader extracts contiguous spans from the retrieved passages whereas the generative reader sequentially decodes the answer string which might not be contained in the retrieved passages.",
"Recent work on open-domain QA (Karpukhin et al., 2020; Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021) explores either an extractive reader or a generative reader exclusively.",
"We hypothesize that extractive and generative readers adopt different answer inference strategies, thus a hybrid extractive/generative reader can be a better option for open-domain QA tasks.",
"As shown in Figure 1, compared with prediction agreement among only generative or extractive readers (top-left and bottom-right), the cross prediction agreement between extractive and generative readers (bottom-left) is relatively low (< 50% ).",
"It indicates that answers produced by those two types of models are different and they can be complementary to each other.",
"Therefore, we propose a hybrid reader approach, UnitedQA, which is a simple ensemble approach to combine the predictions from extractive and generative readers.",
"It achieves state-of-the-art results on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017).",
"In UnitedQA, the extractive reader (UnitedQA-E) and generative reader (UnitedQA-G) are built upon the pretrained language models, ELECTRA (Clark et al., 2020) and T5 (Raffel et al., 2020), respectively.",
"For the UnitedQA-E, we adopt a weakly-supervised training objective to address the noisy supervision issue caused by the heuristics-based labeling and incorporate the posterior differential regularization (PDR) (Cheng et al., 2021) to improve the model robustness.",
"The UnitedQA-G follows the T5 Fusion-in-Decoder (FID) (Izacard and Grave, 2021) and we make two improvements: first, we add a group of attention bias parameters into the decoder cross-attention block to feature the ranking information of retrieved contexts; second, we add the adversarial training (Ju et al., 2019; Jiang et al., 2020; Pereira et al., 2021) to improve the model generalization ability.",
"The experimental results highlight the effec-G-1 G-2 G-3 E-1 E-2 E-3 G1 G2 G3 E1 E2 E3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Figure 1: Pairwise prediction agreement ratio.",
"tiveness of the simple hybrid approach of UnitedQA.",
"With both improved extractive and generative readers, UnitedQA sets new state-of-the-art results on two popular open-domain QA datasets, i.e. 54 .",
"7 and 70 .",
"3 in exact match on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), respectively.",
"It is worth noting that our UnitedQA model not only outperforms each single model but also brings more pronounced improvements over homogeneous ensembles of either extractive or generative readers.",
"Last, based on our analyses, UnitedQA-E and UnitedQA-G have advantages in different cases, suggesting they may use different reasoning strategies.",
"In this section, we present the overall pipeline of the UnitedQA system, which consists of three components: Retrieval , Reading , and Re-ranking .",
"First, the retrieval module fetches a list of relevant passages from a Wikipedia dump for a given question.",
"Then, the module of hybrid readers produces answer candidates from the set of retrieved passages.",
"Last, the re-ranking module combines the answer candidates with linear interpolation and produce the final answer.",
"Retrieval Following Karpukhin et al. (2020), we consider two methods, BM25 and dense passage retrieval (DPR), for retrieving the support passages for a given question.",
"For BM25, passages are encoded as bag of words (BOW), and inverse document frequencies are used as the ranking function.",
"For DPR, passages and questions are represented as dense vectors based on two BERT (Devlin et al., 2019) models.",
"The relevance score is then computed based on the dot production between the query and passage vectors.",
"In this paper, we adopt the same implementation as Karpukhin et al. (2020) for retrieving passages.",
"Specifically, the English Wikipedia dump from Dec. 20, 2018 is used as the source documents for retrieval, with the removal of semi-structured data, such as tables or lists.",
"Each document is split into disjoint 100-word passages as the basic retrieval unit.",
"The top-100 passages are then passed for reading.",
"Reading We combine the generative reader and the extractive reader to produce answer candidates over the retrieved passages.",
"Here, we only give a high-level description of our approach.",
"More details regarding our improved extractive and generative models are presented in 2.1 and 2.2 respectively.",
"The generative reader is based on a sequence-to-sequence model pre-trained in a forward-generation fashion on a large corpus, i.e. T5 (Raffel et al., 2020).",
"Similar to Izacard and Grave (2021), the model takes the question and its relevant passages as input, and then generates the answer string token by token.",
"Specifically, the concatenation of all retrieved passages and the corresponding question is used as the encoder input.",
"Then, the decoder performs reasoning over the concatenation of all evidence through an attention mechanism.",
"Following state-of-the-art extractive QA models (Devlin et al., 2019; Karpukhin et al., 2020), our extractive reader is based on a Transformer neural network pre-trained with a cloze style self-supervised objective, i.e. ELECTRA (Clark et al., 2020).",
"Here, a pair of a given question and a support passage is jointly encoded into neural text representations.",
"These representations are then used to define scores or probabilities of possible answer begin and end positions, which are in turn used to define probabilities over possible answer spans.",
"Finally, the answer string probabilities are based on the aggregation over all possible answer spans from the entire set of support passages.",
"In 2.1.2, we give the problem definition of open-domain QA for extractive reader.",
"Then, we detail the improvements of UnitedQA-E in 2.1.2.",
"Given a question q and a set of K retrieved passages p 1 , . . . , p K , a text encoder produces contextualized representations: h k 1 , ... h kT R n for the question-passage pair (q , p k ) in the form of [CLS] question [SEP] passage [SEP] , where [CLS] and [SEP] are special tokens for encoding inputs, T is the maximum sequence length of the input text, and h ki indicates the contextualized embedding of the i -th token in (q , p ).",
"The extractive reader computes the span-begin score of the i -th token as s b ( i k ) = w Tb h ki using a weight vector w b R d .",
"The span-end score s e ( j k ) is defined in the same way.",
"Thus, the probabilities of a start position i k and an end position j k are P b ( i k ) = exp( s b ( i k )) Z b , P e ( j k ) = exp( s e ( j k )) Z e , where Z b , Z e are normalizing factors defined by the corresponding probability space.",
"The probability of an answer span from i k to j k is defined as P s ( i k , j k ) = P b ( i k ) P e ( j k ) .",
"Here, we consider two probability spaces, passage level and multi-passage level , with the only difference in the computing of Z b , Z e .",
"Specifically, the passage-level probability of each answer begin and end is computed by normalizing all possible positions in the respective passage, i.e. Z b = Z kb = (cid:80) I k NULL exp( s b ( i )) , Z e = Z ke = (cid:80) I k NULL exp( s e ( j )) , where I k is the set of all possible positions from the k -th passage and NULL indicates special positions if p k does not support answering the question.",
"Similarly, the multi-passage level probability is computed by normalizing over each answer positions across all K relevant passages, i.e. Z b = Z b = (cid:80) k (cid:80) I k exp( s b ( i )) , Z e = Z e = (cid:80) k (cid:80) I k exp( s e ( j )) , respectively.",
"Since there are usually multiple plausible mentions for open-domain QA, during training, it is typical to maximize either the marginal log-likelihood (MML) of all correct spans (Karpukhin et al., 2020) or the log-likelihood of the most likely correct span (HardEM) (Min et al., 2019).",
"During inference, the prediction is made based on the candidate answer string score, obtaining as P a ( y ) = (cid:80) ( i,j ) Y P s ( i, j ) , where Y is the set of spans corresponding to the answer string y .",
"In addition to better text representations from Clark et al. (2020), we consider two methods for improving the training of the extractive reader.",
"Multi-objective for Weakly-supervised QA The multi-objective formulation is introduced in Cheng et al. (2020) for improving weakly supervised document-level QA.",
"Different from Cheng et al. (2020) where only MML is considered for the multi-objective formulation, we found combining HardEM with MML is more effective for open-domain QA based on our experiments (4.1).",
"Specifically, we combine a multi-passage HardEM loss with K passage-level MML losses over a batch of K passages LEXT = log max ( i,j ) P Ms ( i, j ) + 1 K (cid:88) k log (cid:88) ( i k ,j k ) P Ps ( i k , j k ) , (1) where P Ms , P Ps is the multi-passage level and passage level span probabilities respectively.",
"Posterior Differential Regularization Due to the noisy supervision for open-domain QA (Chen et al., 2017), we investigate the posterior differential regularization (PDR) (Cheng et al., 2021) to improve the robustness of the extractive reader.",
"Different from Cheng et al. (2021) where only clean supervision setting is considered, in this work, we apply PDR to the weakly supervised open-domain QA scenario.",
"Given it is computationally expensive to enumerate all possible spans, we apply two separate regularization terms for the begin and end probabilities at the multi-passage level, respectively, LPDR = D ( P b ( i ) | P (cid:48) b ( i )) + D ( P e ( j ) | P (cid:48) e ( j )) , (2) where D ( | ) is the squared Hellinger distance, and P (cid:48) b , P (cid:48) e are the probabilities of start and end positions with additive input noise to the token embeddings.",
"Specifically, we sample noise vectors (cid:15) 1 , . . . , (cid:15) T from N (0 , c 2 I ) , and add them to the token embeddings as the noisy input, i.e. v 1 + (cid:15) 1 , . . . , v T + (cid:15) T , where c is fixed to 1e 3 throughout our experiments.",
"Based on this, the overall training objective for the extractive reader is L 1 = LEXT + LPDR , (3) where is a regularization scalar hyperparameter.",
"Here, we first formally define the setup of generative reader for open-domain QA in 2.2.1 and then present our improvements in 2.2.2.",
"Given a question q and a set of K retrieved passages p 1 , . . . , p K , the encoder model encodes each ( q , p k ) pair independently, and produces contextualized representation for each token: h ki R d for the i -th token of the k -th pair.",
"The decoder then performs attention over the concatenation of the representations of all the retrieved passages, and generates the answer string.",
"Let x denote the input of the question and all retrieved passages x = (cid:0) ( q , p 1 ) , ..., ( q , p K ) (cid:1) , and y the answer string with its tokens as ( y 1 , ..., y N ) .",
"The generative reader is trained to maximize a sequence-to-sequence objective for a given ( x , y ) , L ( x , y ; ) = N (cid:88) i log P ( y i | x , y 1: i 1 ) , (4) where is the model parameter.",
"During inference, a greedy decoding is used to produce the answer.",
"Decoder Attention Bias The decoder in the T5 transformer model adopts a cross-attention mechanism to compute attention scores between the decoding answer tokens and all the retrieved passage tokens.",
"Specifically, let y i R d be the query vector of the i -th decoding token 1 , and m kj R d be the key vector of the j -th token in ( ( q ) , p k ) .",
"The multi-head cross-attention scores in T5 (Raffel et al., 2020) s ki,j is calculated as s ki,j = MultiHeadAtt ( y i , m kj ) R | Head | (5) where | Head | is the number of attention heads.",
"However, it doesn't capture the relevance information of retrieved passages into the reader in (5).",
"To add the relevance feature into the attention block, we revise (5) by incorporating the attention bias s ki,j = MultiHeadAtt ( y i , m kj ) + b k , (6) where b k R | Head | is a trainable attention bias vector for all the tokens in the k -th retrieved passage.",
"In the experiments, the maximum retrieved passages is by default set to 100 .",
"Thus, the decoder attention bias introduces additional 100 | Head | parameters for each layer.",
"Adversarial Training Adversarial training creates adversarial examples by adding small perturbations to the embedding layer.",
"Assuming the word(-piece) embedding layer is parameterized by a matrix V R | V | d , | V | is the vocabulary size, and d 1 we omit the layer notation for simplification Dataset Train Dev Test NQ 79168 8757 3610 TriviaQA 78785 8837 11313 EffcientQA 1800 Table 1: Number of questions in each QA dataset.",
"is the embed-dimension.",
"The adversarial embedding matrix V can be obtained by g V = VL ( x , y ; ) , (7) V = V + SG ( (cid:15)g V / || g V || 2 ) , (8) where SG ( ) is the stop-gradient operation.",
"We use the adversarial embedding matrix V to replace the original V in model parameters , and obtain .",
"Thus the adversarial loss can be calculated as LAT ( x , y ; ) = L ( x , y ; ) .",
"Therefore, the overall training objective of the generative reader is",
"where = 0 .",
"5 , = 0 .",
"5 in all of the exepriments.",
"The UnitedQA system combines outputs from both extractive and generative models for a given question during inference.",
"Since the output spaces of extractive and generative models are different, we use a simple linear interpolation based on best predictions from each model 2 .",
"Denote the predicted strings from M extractive and N generative models as y E 1 , ..., y EM and y G 1 , ..., y GN , respectively.",
"The hybrid prediction y is obtained by argmax y Y M (cid:88) m =1 1 ( y, y Em ) + N (cid:88) n =1 1 ( y, y Gn ) , (11) where Y is the set of all predicted strings, 1 ( y, y (cid:48) ) is an indicator function and = 0 .",
"We use two representative QA datasets and adopt the same training/dev/testing splits as in previous",
"2 We have also tried a few more complex approaches for combining the extractive and generative models.",
"For example, we first train an extractive model, and then append the top-k answer strings from the extractive model at the end of the input for training a generative model.",
"None of them is as good as the simple ensemble approach.",
"work (Lee et al., 2019; Karpukhin et al., 2020).",
"Both datasets (see Table 1 for statistics) have been heavily studied in recent work (Lee et al., 2019; Min et al., 2019; Karpukhin et al., 2020; Guu et al., 2020).",
"We follow the standard evaluation protocol and use exact match (EM) as the evaluation metric.",
"NaturalQuestions (Kwiatkowski et al., 2019) is composed of questions by real users to Google Search, each with answers identified by human annotators in Wikipedia.",
"The open-domain version of NaturalQuestions (Lee et al., 2019) only consider questions with short answers, i.e. answers with less than 5 tokens.",
"In the NaturualQuestions, the questions are considered to be more information seeking given that the question askers didn't know the answer beforehand.",
"In addition, we use another evaluation set, i.e. the dev set introduced recently by the EfficientQA competition (Min et al., 2021), which is constructed in the same way as the original NaturalQuestions dataset.",
"TriviaQA (Joshi et al., 2017) contains trivia question-answer pairs that were scraped from the web.",
"Different from NaturalQuestions, the questions here are written with known answers in mind.",
"Specifically, the unfiltered set has been used for developing open-domain QA models.",
"Implementation details For a fair comparison, we use the same retrieval module as Karpukhin et al. (2020) for NaturalQuestions and TriviaQA to mitigate the impact of retrieval difference.",
"Specifi-cally, we use DPR (single) for NaturalQuestions and BM25+DPR (multi) for TriviaQA because of their best end-to-end performance (Karpukhin et al. 2020).",
"For all the experiments, we use 8 and 16 V100-32GB for base and large model training respectively.",
"We train our models with Adam optimizer of a linear scheduler with a warmup raito of 0.1.",
"The extractive models are trained for up to 8 epochs with a learning rate of 2e 5 and a batch passage size per question of 16 .",
"The generative models are trained for up to 10 epochs with a learning rate of 1e 4 , a batch size of 64 , and 100 retrieved passages per question for model training.",
"We select in { 4 , 8 } .",
"After the best configuration is selected based on the dev set, we run our best models 3 times independently with different random seeds and report the median performance on the test set.",
"We also report ensemble results which are based on the linear interpolation over answer predictions from the 3 models.",
"Single Model Results: We first compare our models to two recent models, REALM (Guu et al., 2020) and RAG (Lewis et al., 2020), which are first pre-trained with different retrieval augmented objectives and then fine-tuned for open-domain QA.",
"In addition, we include as baselines DPR (Karpukhin et al., 2020) and T5-FID (Izacard and Grave, 2021), both of which are based on the same retriever as ours.",
"As shown in Table 2, both our extractive and generative models achieve new state-of-the-art results for both studied datasets.",
"Compared with the recent state-of-the-art extractive model (DPR), our base model leads to pronounced 15% relative improvements for both NaturalQuestions ( +6 . 2 absolute improvement) and TriviaQA ( +8 . 4 absolute improvement).",
"More importantly, UnitedQA-E base achieves comparable or even better performance with regard to generative models of larger size, i.e. RAG and T5-FID base .",
"It highlights the importance of proper training strategies for open-domain QA models.",
"Hybrid Model Results: In order to evaluate the advantage of the hybrid of the extractive and generative models (UnitedQA), we include two homogeneous ensemble baselines, one consisting of only extractive readers (UnitedQA-E++) and the other ensemble of exclusively generative models (UnitedQA-G++).",
"For homogeneous ensemble cases, the three-way majority prediction is used.",
"For the hybrid of extractive and generative readers, we select a three-model combination from the set of three generative and three extractive models based on the dev set.",
"We observed that combining predictions from two generative models and one extractive model results in the best hybrid model for both datasets.",
"As expected, all ensemble models show an improvement over their single model counterparts.",
"However, the two homogeneous ensemble baselines, UnitedQA-E++ and UnitedQA-G++, only provide marginal gains over the corresponding best single models.",
"The significant improvement brought by our proposed hybrid approach indicates the benefit of combining extractive and generative readers for open-domain QA.",
"Discussion: Although the proposed hybrid approach has been shown to be highly effective for open-domain QA, we point out that the improved performance comes with increased computational cost.",
"The best combination requires approximately three times the computational cost of a single generative model.",
"Therefore, it would be interesting to explore more efficient hybrid methods, such as effective parameter sharing strategies or unified formulations.",
"Another interesting future direction is to explore customized compression approaches for reducing the model size of retriever and reader separately or jointly through pruning (Han et al., 2016), quantization (Hubara et al., 2018), and knowledge distillation (Hinton et al., 2015).",
"Specifically, given that the hybrid model is more effective, it is likely that a student model can learn more effectively from a hybrid teacher model via knowledge distillation for open-domain QA.",
"In this section, we first carry out ablation study on the extractive and generative model improvements.",
"Moreover, we aim to take a deeper look and understand the difference between the two models.",
"In Table 3, we present ablation experiments on the effectiveness of different textual representations and methods for improving the extractive model UnitedQA-E base .",
"Here, we focus on base models, i.e. BERT base and ELECTRA base .",
"Note that the row UnitedQA-E base is the corresponding base model reported in Table 2.",
"Compared with the MML-based multi-objective (Cheng et al., 2020), we find that a new multi-objective with HardEM at the multi-passage level and MML at the passage level is more effective for open-domain QA.",
"In addition to the multi-objective training, there is a noticeable improvement brought by the regularization method (PDR) which indicates the importance of proper regularization for learning with noisy supervision.",
"Last but not least, the large improvement of ELECTRA over BERT indicates the importance of deriving better text representations for weakly supervised NLP problems.",
"For the UnitedQA-G, we present the ablation study on analyzing the effectiveness of decoder attention bias component and adversarial training mechanism in Table",
"4. Both techniques contribute to decent improvements over T5-FID with more pronounced gains brought by adversarial training.",
"Here, we vary the number of retrieved passages during inference and report the evaluation results in terms of end-to-end QA exact match score of UnitedQA-E and UnitedQA-G along with the corresponding topk retrieval accuracy.",
"The results are summarized in Table",
"5. As expected, when the number of retrieved passages increases, both topk retrieval accuracy and the end-to-end QA performance improve.",
"However, there is a noticeable gap between the improvement of retrieving more passages (i.e., recall) and that of the corresponding end-to-end QA performance, especially for the extractive reader.",
"This is likely caused by additional noise introduced with improved retrieval recall.",
"Specifically, only half of the retriever improvement can be effectively utilized by the extractive model while the generative model can benefit more from retrieving more passages.",
"This suggests that by concatenating all passages in vector space, the generative model are more effective in de-noising in comparison to the extractive model.",
"Following Lewis et al. (2021), we carry out a breakdown evaluation of model performance over the NaturalQuestions and TriviaQA test sets.",
"Given their superior performance, we again only consider our improved extractive and generative models, i.e. UnitedQA-E large and UnitedQA-G respectively.",
"The evaluation is summarized in Table",
"6. In comparison to their corresponding overall performance, both the extractive and generative models achieve much better performance on the Overlap categories ( i.e. Question Overlap and Answer Overlap) for both NaturalQuestions and TrivaQA, which indicates that both models perform well for question and answer memorization.",
"Different from question and answer memorization, there is a pronounced performance drop for both models on theAnswer Overlap Only category where certain amount of relevance inference capability is required to succeed.",
"Lastly, we see that both extractive and generative models suffer some significant performance degradation for the No Overlap column which highlights model's generalization evaluation.",
"Nevertheless, the extractive model demonstrate a better QA generalization by achieving a better overall performance on the No Overlap category for both datasets.",
"Here, we conduct analyses into prediction errors made by the extractive and generative models based on automatic evaluation.",
"For this study, we use the EfficientQA dev set (Min et al., 2021) which is constructed in the same way as the original NaturalQuestions dataset.",
"Specifically, we group prediction errors into three categorizes: 1) common prediction errors made by both the extractive and generative models, 2) prediction errors made by the extractive model, 3) prediction errors produced by the generative model.",
"In the following, we first carry out a manual inspection into the common errors.",
"Then, we compare the prediction errors made by extractive and generative models, respectively.",
"First of all, there is an error rate of 29% of those consensus predictions made by both extractive and generative models according to the automatic evaluation.",
"Based on 30 randomly selected examples, we find that around 30% of those predictions are actually valid answers as shown in the top part of Table",
"7. In addition to predictions that are answers at different granularity or semantically equivalent ones, some of those prediction errors are likely caused by the ambiguity in questions.",
"As the given example in Table 7, based on the specificity, the model prediction is also a valid answer.",
"This high-Dataset Model Total Question Overlap No Question Overlap Answer Overlap Answer Overlap Only No Overlap NQ UnitedQA-G 52.3 72.2 40.5 62.7 45.4 34.0 UnitedQA-E 51.8 69.4 41.5 60.1 45.1 37.6 TriviaQA UnitedQA-G 68.6 88.4 62.5 78.1 69.6 44.5 UnitedQA-E 68.9 89.3 62.7 78.6 70.6 44.3 Table 6: Breakdown evaluation on NaturalQuestions (NQ) and TriviaQA based on test splits defined in (Lewis et al., 2021).",
"lights the limitation of the current evaluation metric, which does not accurately estimate the existing open-domain QA system capabilities.",
"As shown in the bottom part of Table 7, most of representative errors are due to the confusion of related concepts, entities or events that are mentioned frequently together with the corresponding gold answers.",
"Next, all questions from the dev set are categorized based the WH question word, i.e. what, which, when, who, how, where .",
"We then report the relative performance change of each WH category for both extractive and generative models over their corresponding overall prediction accuracy in Figure 2.",
"First, it is easy to see that both extractive and generative models achieve the best performance for entity related who questions, which is likely to be the result of high ratio of samples of this type seen during training.",
"In contrast, the answers to what questions can play a much richer syntactic role in context, making it more difficult for both extractive and generative models to perform well.",
"Interestingly, the generative model exhibits the strength for temporal reasoning, whereas the extractive model does not.",
"This difference suggests that it is worth exploring better temporal modeling strategies to improve the extractive model in the future.",
"Open-domain QA Open-domain QA requires a system to answer questions based on evidence retrieved from a large corpus such as Wikipedia (Voorhees, 2000; Chen et al., 2017).",
"Recent progress has been made towards improving evidence retrieval through both sparse vector models like TF-IDF or BM25 (Chen et al., 2017; Min et al., 2019), and dense vector models based on BERT (Lee et al., 2019; Karpukhin et al., 2020; Guu et al., 2020; Qu et al., 2021).",
"Generally, the dense representations complement the sparse vector methods for passage retrieval as they can potentially give w h a t w h i c h w h e n w h o h o w w h e r e 0.10 0.05 0.00 0.05 0.10 R e l a t i v e A cc u r a c y Generative w h a t w h i c h w h e n w h o h o w w h e r e Extractive Figure 2: Relative accuracy of different WH questions.",
"high similarity to semantically related text pairs, even without exact lexical overlap.",
"Unlike most work focusing on a pipeline model, Lee et al. (2019) propose a pre-training objective for jointly training both the retrieval encoder and reader.",
"It is further extended by Guu et al. (2020) with a dynamic update of the passage index during the training.",
"Instead, in this work, we focus on a hybrid reader approach for open-domain QA.",
"By simply combing answer predictions from extractive and generative models, our UnitedQA achieves significant improvements over state-of-the-art models.",
"Reading Comprehension with Noisy Labels There has been a line of work on improving distantly-supervised reading comprehension models by developing learning methods and model architectures that can better use noisy labels.",
"Most of them focus on the document-level QA, where all paragraphs share the same document context.",
"Clark and Gardner (2018) propose a paragraph-pair ranking objective for learning with multiple paragraphs so that the model can distinguish relevant paragraphs from irrelevant ones.",
"In (Lin et al., 2018), a coarse-to-fine model is proposed to handle label noise by aggregating information from relevant paragraphs and then extracting answers from selected ones.",
"Min et al. (2019) propose a hard EM learning scheme where only passage-level loss is considered for document-level QA.",
"More recently, different probabilistic assumptions with corresponding training and inference methods are examined in (Cheng et al., 2020) again for document-level QA with distant supervision.",
"In our work, we further extend the multi-objective formulation proposed in (Cheng et al., 2020) with the hard EM learning (Min et al., 2019) for enhancing extractive open-domain QA, where the input passages are given by a retrieval model and are typically from different documents.",
"In this study, we propose a hybrid model for open-domain QA, called UnitedQA, which combines the strengths of extractive and generative readers.",
"We demonstrate the effectiveness of UnitedQA on two popular open-domain QA benchmarks, NaturalQuestions and TriviaQA.",
"Our results show that the proposed UnitedQA model significantly outperforms single extractive and generative models as well as their corresponding homogeneous ensembles, and sets new state-of-the-art on both benchmarks.",
"We also perform a comprehensive empirical study to investigate the relative contributions of different components of our model and the techniques we use to improve the readers.",
"For future work, it would be interesting to explore model compression approaches for reducing the model size of retriever and reader separately or jointly through pruning, quantization, and knowledge distillation.",
"We would like to thank the anonymous reviewers for valuable suggestions, Yuning Mao for valuable discussions and comments, and Microsoft Research Technology Engineering team for computing support."
] | [
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"other"
] |
[
"Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner.",
"While techniques can effectively generate fluent text, they can also produce undesirable societal biases that can have a disproportionately negative impact on marginalized populations.",
"Language generation presents unique challenges for biases in terms of direct user interaction and the structure of decoding techniques.",
"To better understand these challenges, we present a survey on societal biases in language generation, focusing on how data and techniques contribute to biases and progress towards reducing biases.",
"Motivated by a lack of studies on biases from decoding techniques, we also conduct experiments to quantify the effects of these techniques.",
"By further discussing general trends and open challenges, we call to attention promising directions for research and the importance of fairness and inclusivity considerations for language generation applications.",
"Natural language generation (NLG) is a suite of techniques that enables the generation of human-readable language for different goals.",
"These techniques are the core components of applications such as virtual assistants, chat bots, automatic translators, summarizers, and creative language composers.",
"Recent advances in techniques for language generation (e.g., GPT (Radford et al., 2018), GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020), TransformerXL (Dai et al., 2019), XLNet (Yang et al., 2019)) powered by Transformers (Vaswani et al., 2017) and an increasing repository of available data have created more capable applications.",
"This has, in turn, channeled more interest and effort into developing NLG techniques.",
"We emphasize the importance of better understanding how societal biases manifest in NLG techniques, because NLG applications directly interact with many different users to generate novel content in various domains (e.g., chat bots for health, education, and customer support).",
"However, when techniques are less effective or detrimental for marginalized populations, these techniques can inadvertently become gatekeepers of those populations for generation and associated language technologies.",
"For example, an educational chat bot that produces more negative responses for topics about a specific ethnicity will discourage users of that ethnicity from interacting with the chat bot.",
"While it is generally important to study the societal impact of NLP and AI techniques, we argue that the direct user impact of NLG techniques makes it especially important to carefully quantify the impact.",
"Motivated by the importance of fairness in language generation, we present the first comprehensive survey on societal biases in language generation.",
"By enumerating how NLG techniques contribute to biases and examining progress towards bias analysis and mitigation, we contextualize the discussion of broader trends and challenges.",
"Specifically, we focus on techniques for NLG tasks, i.e., tasks that generate a sequence of text.",
"1 Finding a lack of studies on biases from decoding techniques, we additionally present an experimental study to quantify the effects of various decoding techniques.",
"Before we delve into the details of biases in language generation, we first position our survey in the context of other relevant surveys and position papers.",
"Sun et al. (2019) present a focused survey 1 Although bi-directional language models like BERT (De-vlin et al., 2019) can also be used for auto-regressive generation (Wang and Cho, 2019; Chen et al., 2020), traditional auto-regressive models are still typically of better quality and more widely used for generation (Shwartz et al., 2020).",
"Thus, we limit the scope of this survey to the latter models.",
"generation, dialogue generation, machine translation (MT), and text re-writing.",
"on mitigating gender biases and Shah et al. (2020) categorize sources of biasesboth largely focus on natural language understanding (NLU) tasks, while we examine biases in NLG tasks.",
"Additionally, Blodgett et al. (2020) urge for more explicitly tying biases in NLP to societal normative definitions of biases and social hierarchies; with their recommendations in mind, we discuss the negative impacts of biases in NLG techniques.",
"Our contributions are a comprehensive survey on societal biases in language generation and an experimental study on biases from decoding techniques.",
"To start, we describe classes of NLG tasks (Sec. 2) and subsequently examine examples of biases and harms in NLG (Sec. 3).",
"We then discuss NLG techniques that facilitate biases, including a study of decoding techniques (Sec. 4).",
"Sec. 5 highlights progress and challenges, and Sec. 6 presents open problems and proposals.",
"We hope this survey brings more visibility to the importance of carefully considering different components of NLG pipelines for potential biases and mitigation methods.",
"To begin, we categorize generation tasks and introduce existing bias studies relevant to each task.",
"NLG tasks broadly fall into two categories: those that generate text continuations conditioned on some prompt and those that transform text from one form to another .",
"Table 1 organizes various bias-related works for NLG tasks.",
"The continuation class includes autocomplete and dialogue generation, where the goal is to generate",
"text that is coherent and relevant to a prompt.",
"Autocomplete Generation We use the term autocomplete generation to refer to conditional generation directly from language models.",
"Language models are the core components for many NLG and NLU tasks, and this task enables directly quantifying biases in large, pre-trained language models (Bordia and Bowman, 2019; Sheng et al., 2019; Solaiman et al., 2019; Brown et al., 2020).",
"Existing works analyzing biases in autocomplete generation have mostly examined Transformer-based models, including GPT (Shwartz et al., 2020), GPT-2 (Solaiman et al., 2019; Sheng et al., 2019, 2020; Shwartz et al., 2020; Vig et al., 2020; Yeo and Chen, 2020; Huang et al., 2020; Dhamala et al., 2021; Schick et al., 2021), GPT-3 (Brown et al., 2020), CTRL (Dhamala et al., 2021), TransformerXL (Shwartz et al., 2020; Vig et al., 2020; Huang et al., 2020), and XLNet (Shwartz et al., 2020; Vig et al., 2020; Yeo and Chen, 2020), though Bordia and Bowman (2019); Qian et al. (2019) also look at LSTM-based models.",
"Dialogue Generation Dialogue generation is conditioned on user inputs and can be for specific domains (e.g., health, customer service) and tasks (e.g., behavior intervention, booking flights) or general chit-chat.",
"These dialogue applications directly interact with users, and any propagated biases directly affect user behavior and actions.",
"In terms of recurrent dialogue models, Henderson et al. (2018) analyze biases in hierarchical recurrent encoder-decoder architectures and Liu et al. (2020a,b) analyze LSTM-based encoder-decoder models.",
"Other works on dialogue biases (Dinan et al., 2020a; Sheng et al., 2020, 2021b) focus on Transformer-based models such as DialoGPT (Zhang et al., 2020) and other custom architectures.",
"The transformation class includes machine translation and various formulations of text re-writing.",
"The general goal of these tasks is to transform text into a form with targeted properties.",
"Machine Translation Translation is the task of transforming text between languages while preserving the meaning.",
"Existing works on biases in machine translation have almost exclusively focused on issues of gender biases 2 in a variety of academic and commercial systems.",
"The use of grammatical gender in some languages and not in others can expose unwanted gender associations (e.g., for different occupations) through translation (Prates et al., 2019).",
"Earlier works by Vanmassenhove et al. (2018) and Elaraby et al. (2018) study LSTM-based encoder-decoder translation systems, and more recent works examine Transformer-based architectures (Escude Font and Costa-juss`a, 2019; Stanovsky et al., 2019; Saunders and Byrne, 2020; Saunders et al., 2020; Costa-juss`a and de Jorge, 2020; Basta et al., 2020; Stafanovics et al., 2020; Renduchintala and Williams, 2021; Choubey et al., 2021; Saunders et al., 2021; Tomalin et al., 2021).",
"While Google Translate 3 has been the most popular commercial system to analyze for gender biases (Prates et al., 2019; Moryossef et al., 2019; Stanovsky et al., 2019; Cho et al., 2019; Farkas and Nemeth, 2020), Stanovsky et al. (2019) also 2 For a detailed survey of gender bias in machine translation, we refer readers to Savoldi et al. (2021).",
"study Microsoft Translator, 4 Amazon Translate, 5 and SYSTRAN; 6 Cho et al. (2019) additionally look at Naver Papago 7 and Kakao Translator, 8 and Cho et al. (2021) also examine Yandex.",
"9 Re-writing We use the term re-writing to refer to tasks of revising specific words and phrases in the original text to be more aligned with a targeted attribute.",
"Specifically, there have been studies on re-inflection (Habash et al., 2019; Zmigrod et al., 2019; Alhafni et al., 2020) and re-writing text to use neutral viewpoints (Pryzant et al., 2020), gender-neutral English (Sun et al., 2021), or more agency (Ma et al., 2020).",
"These tasks typically rely on custom encoder-decoder models.",
"There are other NLG tasks, such as the continuation tasks of story and poetry generation, and the transformation tasks of abstractive summarization and paraphrase generation.",
"However, these other NLG tasks are not yet well-studied in the context of societal biases.",
"10 3 Biases and their Negative Impacts In this section, we introduce how existing studies of biases in NLG tasks commonly quantify biases and their negative impacts.",
"In the context of AI fairness, the term bias commonly refers to skews that result in undesirable impacts (Crawford, 2017) and is quantifiable with some metric.",
"There are relatively more existing studies on biases in NLU tasks, where it is arguably simpler to define bias metrics, since we can intuitively compare the accuracy of the task (e.g., coreference resolution, hate speech detection) for different demographics.",
"Language generation tasks often involve stochastic generation of open-ended and lengthy texts, traits that are not directly compatible with traditional algorithmic bias definitions (e.g., 4 https://www.bing.com/translator 5 https://aws.amazon.com/translate 6 https://www.systransoft.com 7 https://papago.naver.com 8 https://translate.kakao.com 9 https://translate.yandex.com 10 Lucy and Bamman (2021) is an exception that analyzes gender in generated stories.",
"While there are studies of biases in poetry generation and summarization, they focus on non-NLG biases: Sheng and Uthus (2020) investigate biases in a poetry composition system, but in the context of information retrieval; Celis and Keswani (2020) analyze biases in extractive summarization.",
"equalized odds, equal opportunity, demographic parity (Dwork et al., 2012; Hardt et al., 2016)).",
"Because of the difficulty in defining metrics, existing works define bias loosely as demographic inequality and use intermediate proxy metrics to comparatively measure bias.",
"Examples include: Regard Ratio : negative-neutral-positive regard score ratios of text generated from bias-inducing prompts (Sheng et al., 2019) Sentiment Ratio : negative-neutral-positive sentiment score ratios of text generated from African American English (AAE) versus White-Aligned English (WAE) prompts (Groenwold et al., 2020) Individual and Group Fairness through Sentiment : comparisons of the sentiment distributions of generated text across demographics and prompts (Huang et al., 2020) Gendered Word Co-occurrence Score : mean and standard deviations of the absolute log ratio of probabilities: P ( word | female terms ) to P ( word | male terms ) across all words in generated text (Bordia and Bowman, 2019) There are also metrics for other bias evaluation setups in continuation generation tasks involving sentiment (Shwartz et al., 2020), the ratio of gendered words (Solaiman et al., 2019; Vig et al., 2020; Dinan et al., 2020a), and other novel metrics (Peng et al., 2020; Yeo and Chen, 2020).",
"Studies of biases in transformation generation tasks favor metrics of accuracy in terms of successfully transforming text to have a desired property.",
"We present a more thor-ough comparison of metrics in Section 5.4.",
"Bias metrics can also be categorized by how they define associations between demographic group attributes and text.",
"Biases can be towards people described in text, people who produce the text, or people to whom the text is addressed (Dinan et al., 2020b).",
"Most existing works define bias metrics through the first associationthese biases are relatively easier to analyze, since both the demographic and the textual signals of bias are encapsulated within the text.",
"There are also works that define biases towards people who produce the text (Groenwold et al., 2020) or people to whom the text is addressed (Sheng et al., 2021b), though there are relatively fewer works that study these latter associations.",
"Biases in NLG techniques are important to study because they can result in harmful, negative impacts.",
"impacts.",
"We survey detrimental representational 11 and allocational 12 impacts (Crawford, 2017; Baro-cas et al., 2017; Blodgett et al., 2020) used to motivate existing studies of bias in NLG tasks, finding limited examples.",
"While representational impacts are sometimes cited, it is difficult to measure the extent of the impacts.",
"Additionally, techniques for effective NLG are relatively new, and existing studies have limited knowledge of potential allocational impacts.",
"Finally, biases in NLG tasks give rise to a third type of negative impacts, which we call vulnerability impacts .",
"Representational Impacts The works in Table 1 motivate (to varying degrees) studying biases in NLG through potential negative representational impacts, in the form of propagating stereotypes, misrepresentations, or denigrations of social groups.",
"For example, Sheng et al. (2019) enumerate how generated text can propagate varying social perceptions of different demographics, and Prates et al. (2019) discuss how occupation-related gender biases could propagate stereotypes in translation.",
"However, it is difficult to quantify the effects of representational impacts; 13 while such impacts may be measured indirectly (e.g. by analyzing allocational impacts), we suggest long-term, interdisciplinary collaborations to explore the direct effects of these representational impacts.",
"Allocational Impacts Harmful allocational impacts result from an unequal allocation of resources across groups.",
"Since effective NLG techniques based on large Transformer models (Vaswani et al., 2017) are relatively new, most of the existing works on biases in NLG that list possible impacts only analyze direct representational consequences.",
"A real example of a negative allocational impact is when machine translation errors lead to arrests (Ong, 2017).",
"In general, technologies that are less effective or detrimental for certain populations become barriers that actively prevent those populations from using the technology, leading to diminished opportunities in jobs, education, health, etc.",
"We discuss more details in Section 4.5.",
"With continuous technological advances, more organizations will turn to effective NLG techniques, making it imperative to start setting norms to reduce harmful allocational impacts (Tamkin et al., 2021).",
"11 Unfair representations of different groups 12 Unfair allocation of resources 13 Kay et al. (2015) is a rare example that explicitly studies the effect of representational impacts in image search.",
"Vulnerability Impacts Open-domain generation tasks can amplify a group's vulnerability to manipulation and harm , which is an intermediate impact that makes a group more susceptible to representational and allocational impacts.",
"For example, privacy-related issues (Carlini et al., 2020), misinformation (Levy et al., 2021), or radicalizing views in generated text could make a group more likely to be attributed to specific stereotypes (e.g., through action guided by misinformation) or end up with diminished opportunities (e.g., by having personal data exposed and misused).",
"Separately identifying vulnerability impacts could help facilitate recognition of other negative impacts.",
"In a pipeline from data collection to evaluation for an NLG task, each component could propagate biases.",
"14 We emphasize the ways in which data, model architecture, decoding, evaluation, and deployment uniquely exacerbate biases in generation tasks.",
"Additionally, we present an empirical study to show how measured biases in generated text can vary based on decoding technique.",
"Modern NLP models often rely on large pre-trained language models, which in turn rely on a large collection of data to learn explicit and implicit associations.",
"Several recent pre-trained language models used for NLG tasks, e.g., T5 (Raffel et al., 2020) and GPT-3 (Brown et al., 2020), are trained on the largest datasets used for any models.",
"These large models for generation are commonly trained on web data, which is known to contain biased language (e.g., Ferrer et al. (2021) discover gender, religion, and ethnic biases in Reddit communities).",
"While preprocessing is often included to filter out malformatted data and explicitly negative content (e.g., bad words and offensive phrases), those are generally the only efforts to reduce biases and associated impacts.",
"Furthermore, by filtering out all words deemed bad, Bender et al. (2021) warns that we remove the discourse of marginalized populations.",
"Paullada et al. (2020), Bender and Friedman (2018), and Gebru et al. (2018) provide more comprehensive surveys and frameworks that focus on aspects of data creation and management that 14 Task formulation and application deployment are also part of NLG task pipelines (Kiritchenko et al., 2020), though we do not focus on biases in these areas.",
"could lead to biases, and we refer readers to their works for more discussion.",
"In the context of translation, Cho et al. (2021) find that more data can increase translation fluency but may also make the system more biased.",
"There are relatively few studies that examine model architectural properties that could lead to biases.",
"We discuss the few efforts towards understanding model biases in NLG tasks and emphasize the need for more to generalize.",
"For autocomplete generation, Vig et al. (2020) analyze GPT-2 variants through a causal mediation analysis, finding that larger models contain more gender bias, and bias tends to be concentrated in a small number of neurons and attention heads.",
"Silva et al. (2021) observe amplified biases in distilled versus original models.",
"For machine translation, Costa-juss`a et al. (2020) note that language-specific architectures are less biased because they encode more gender information than shared language encoder-decoder architectures.",
"Studies like the aforementioned are useful for designing targeted bias mitigation methods (e.g., controlled generation to target specific attention heads or regularization to retain gender information).",
"However, more evidence would be needed to generalize findings across models.",
"15 4.3 Biases from Decoding While NLU and NLG models have structural similarities, NLG tasks uniquely use search or sampling techniques at inference time to generate text.",
"Popular techniques include: Greedy Search : at each time step, choose the word with the highest probability.",
"Beam Search : at each time step, keep the top b hypotheses with highest probabilities; eventually pick the hypothesis with the highest probability.",
"Topk sampling (Fan et al., 2018): at each time step, re-distribute the probability mass of the top k words with highest probabilities and sample.",
"Nucleus sampling (Holtzman et al., 2019): at each time step, re-distribute the probability mass of the smallest set of words with a cumulative probability exceeding p and sample.",
"More constrained forms of generation such as machine translation generally use variations of beam 15 We also refer the reader to the work of Park et al. (2018) that discusses biases in NLU tasks from model components that attend to specific words (e.g., through attention or pool-ing), which could be applicable to NLG tasks as well.",
"search; however, preferred decoding techniques are more varied for open-domain generation.",
"Despite variations in fluency and diversity between deterministic versus stochastic, search versus sampling procedures, there are limited studies (Roberts et al., 2020) on how different decoding properties affect biases in generation.",
"A Study on Biases from Decoding To study how decoding techniques affect biases in generation, we use existing NLG bias metrics to evaluate text generated from different decoding methods.",
"16 We examine autocomplete generations from GPT, GPT-2, and XLNet, using the decoding techniques from Section 4.3.",
"We evaluate with the following bias metrics: regard ratios (Sheng et al., 2019), sentiment ratios (Groenwold et al., 2020), individual and group fairness through sentiment scores (Huang et al., 2020), and gendered word co-occurrence scores (Bordia and Bowman, 2019) (as introduced in Section 3).",
"More experimental details can be found in the Appendix.",
"In Section 5.4, we distinguish between relative and absolute score metrics to examine evaluation differences between NLG tasks.",
"Here, we organize our results into these categories to generalize trends about decoding techniques.",
"The ratio-based metrics are relative score metrics, since evaluation relies on comparing ratios between demographics.",
"The latter three metrics are absolute score metrics that have target values of zero indicating no bias.",
"For the relative score metrics, search and sampling techniques generate similar outcomes.",
"An interesting result between sampling techniques for the regard metric is that nucleus sampling is less biased yet more negative than topk sampling.",
"For the absolute score metrics, we find that beam search is the most unbiased technique, closely followed by greedy search and then topk and nucleus sampling.",
"Through our study, we discover that text diversity is not accounted for in any of the bias metrics, yet diversity can be a confounding fac-tor.",
"Specifically, beam search is the least diverse, 17 followed by greedy search, topk sampling, then nucleus sampling.",
"Results indicate that the less diverse search techniques lead to better scores for individual fairness, group fairness, and gendered word co-occurrence ratios.",
"age researchers to document sampling techniques, consider how metrics can be formulated to evaluate both bias and other factors of generation quality, and inspire more comprehensive studies.",
"18 4.4 Biases from Evaluation Biases can arise from both general evaluations and bias evaluations for NLG tasks.",
"General Evaluations Current standards for NLG evaluation can reinforce certain types of language and penalize others.",
"For example, using perplexity as measured by models pre-trained on datasets largely containing non-AAE text leads to an unfair evaluation of AAE text.",
"Additionally, the subjectivity of generation tasks means that much of NLG evaluation depends on human labels.",
"Since humans from different backgrounds are accustomed to different societal norms and linguistic variations, the choice of human annotators could drastically influence the evaluation standards for generated text.",
"Bias Evaluations It is difficult to evaluate societal biases in NLG tasks because NLG can be open-domain, and there are many different notions of biases from various backgrounds and cultures (Sambasivan et al., 2021).",
"These factors lead to the use of a variety of metrics to evaluate biases (Section 3).",
"To avoid experimental bias in evaluation, we recommend using multiple metrics to cover many types of biases at various granularities.",
"We identify three points to emphasize the need for more comprehensive evaluations.",
"First, most existing works on biases in generation center around one demographic dimension (often gender and from a Western perspective, e.g., using standard Western occupations).",
"While there has been no comprehensive study on whether mitigating biases for one demographic dimension (e.g., gender) may exacerbate biases for others (e.g., race, intersectional identities), this is a possibility we must consider.",
"Second, most works only evaluate bias through a single intermediate proxy; however, different metrics are defined at different granularities (e.g., sentiment is sentence-level, gendered word ratio is word-level).",
"Finally, different evaluation datasets test for specific types of biases and are influenced by the backgrounds of the curators.",
"Collectively evaluating biases across demographic dimensions and granularities can thus help reduce experimentally-biased evaluations.",
"18 Results are summarized in Appendix Tables 2, 3, and",
"In terms of deploying NLG systems, there is a feedback loop that benefits some communities and further disadvantages others.",
"While this feedback loop is not unique to NLG systems, these systems that directly interact with users make good cautionary examples.",
"First, many deployed language technologies require internet access both to use and contribute feedback, thus favoring the views and languages of those privileged with this access.",
"For example, anyone can contribute feedback to Google Translate, but if contributions and subsequent improvements are focused on high-resource languages, this further increases the accuracy gap between the high and low resource languages, diminishing opportunities for speakers of the low resource languages, i.e., representation disparity (Hashimoto et al., 2018).",
"Second, those who are unable to achieve their goals from using these language technologies (e.g., unsuccessful translation, unhelpful or offensive chat bot) are less likely to continue using the technology.",
"This means that there is less feedback and data to improve the technologies, reinforcing the decreased effectiveness for certain populations, i.e., disparity amplification (Hashimoto et al., 2018).",
"One way we might intervene is to follow a more targeted approach for data and feedback collection, e.g., from excluded populations.",
"However, we acknowledge that this remains a difficult task and that it is also necessary to be aware of commu-nity goals and other factors in order to co-design language technologies without inflicting additional harm on marginalized populations (Bird, 2020).",
"Following the discussion of contributors to biases, we survey trends and challenges for reducing biases in NLG.",
"Data-based methods for both bias analysis and mitigation use the general idea of counterfactual data augmentation (CDA) (Lu et al., 2020) to curate sets of counterfactual prompts.",
"A common method for analysis is using targeted prompts to induce NLG models to reveal biases.",
"For data-based mitigation, existing works focus on fine-tuning large models or training smaller models with datasets that are balanced with respect to targeted demographics.",
"Curated Datasets Existing datasets to study biases in translation include parallel sentences tagged with speaker or subject gender information (Van-massenhove et al., 2018; Habash et al., 2019) and datasets to study gender biases when translating from neutral references of a person (e.g., nurse in English, gender-neutral pronouns) to gendered instances (e.g., enfermera or enfermero in Spanish, gendered pronouns) (Cho et al., 2019; Stanovsky et al., 2019; Gonen and Webster, 2020; Kocmi et al., 2020).",
"Renduchintala and Williams (2021) additionally provide a dataset to study translation of neutral references in unambiguous contexts.",
"Other works present parallel corpora of biased versus unbiased framings and presuppositions (Pryzant et al., 2020) and AAE versus WAE equivalents (Groen-wold et al., 2020).",
"Sheng et al. (2019); Huang et al. (2020); Dhamala et al. (2021) additionally curate sets of prompts that can be used to evaluate biases in autocomplete generation.",
"Bias Analysis Most bias analyses of NLG tasks use prompts to probe for different biases in generated text, e.g., regarding social perception (Sheng et al., 2019), gender in translation (Prates et al., 2019), names (Shwartz et al., 2020), sentiment distribution (Huang et al., 2020), dialects (Groen-wold et al., 2020), dialogue personas (Sheng et al., 2021a), or other notions of similarity across demographics (Yeo and Chen, 2020; Henderson et al., 2018).",
"Vig et al. (2020) also use prompts to investigate gender biases, though they do so in the context of a causal mediation analysis.",
"Furthermore, Prates et al. (2019) and Farkas and Nemeth (2020) compare pronoun gender biases in translations (induced with prompts) to real-world statistics.",
"Bias Mitigation Methods can broadly be classi-fied into two categories based on the type of data applied.",
"The first category encompasses methods that fine-tune or train on a balanced dataset to lessen the effects of the model relying on spurious correlations between imbalanced data and task performance.",
"CDA has been applied to datasets used for continued or fresh training in dialogue generation (Dinan et al., 2020a; Liu et al., 2020a) as well as machine translation (Saunders and Byrne, 2020; Costa-juss`a and de Jorge, 2020; Stafanovics et al., 2020).",
"The second category is methods that attach a short prefix at training time (Vanmassenhove et al., 2018; Basta et al., 2020; Alhafni et al., 2020) or inference time (Moryossef et al., 2019).",
"Challenges The size of state-of-the-art pre-trained models and varying definitions of biases in generation present difficulties for creating standardized datasets that are generally effective across biases and demographics.",
"Moreover, it remains to be seen whether data-based mitigation is as effective for open-domain NLG tasks as it is for more constrained settings.",
"Bias Mitigation Several works that use training-based mitigation techniques rely on regularization (Bordia and Bowman, 2019; Qian et al., 2019; Huang et al., 2020; Liu et al., 2020a; Saunders and Byrne, 2020).",
"There are also works that induce control by incorporating a bias control code through conditional training (Dinan et al., 2020a), by appending a target value to inputs during training (Ma et al., 2020), by using a normative classifier to produce reward values for backpropagation (Peng et al., 2020), or through adversarial training (Liu et al., 2020b).",
"Other techniques include using de-biased word embeddings (Escude Font and Costa-juss`a, 2019), identifying and editing out subjective words (Pryzant et al., 2020), and using Markov random fields to preserve morpho-syntactic agreement during reinflection (Zmigrod et al., 2019).",
"Challenges The main challenge of bias mitigation through training methods is that it is costly and impractical to re-train models for new biases encountered.",
"In fact, most of the techniques that rely on training from scratch use smaller architectures (exceptions are from larger institutions).",
"While the existing literature on inference time methods for bias mitigation is sparse, decoding-based methods are a promising alternative to dataand training-based methods.",
"Specifically, these methods are compatible with any pre-trained language model for generation without additional training.",
"Given recent development of inference-time methods for control that can reduce toxicity (e.g., PPLM (Dathathri et al., 2019), GeDi (Krause et al., 2020), DExperts (Liu et al., 2021)), there is potential for extending these methods to bias mitigation.",
"Bias Mitigation For autocomplete and dialogue generation, Sheng et al. (2020) formulate bias triggers using gradient-based methods of Wallace et al. (2019).",
"These triggers are appended to prompts during inference time to control text generation to be more equalized towards different demographics.",
"For translation, Saunders and Byrne (2020) present a lattice rescoring procedure that creates gender-inflected search spaces to rescore text for more accurate translations, and Saunders et al. (2021) subsequently use this lattice structure to present more gendered options during beam search and rerank translation hypotheses according to gender criteria.",
"For dialogue generation, Sheng et al. (2021b) introduce a constrained decoding method that uses n -gram similarity to guide generation away from ad hominems towards marginalized groups.",
"For autocomplete generation, Schick et al. (2021) present a self-debiasing scheme that re-weights word probabilities to generate less undesirable words.",
"Challenges Control methods at inference time could potentially steer the model into degenerate spaces, so it is important to also evaluate these methods for coherence, fluency, and task relevance.",
"There are two types of evaluations: those that rely on absolute scores and those that rely on relative scores.",
"Absolute score evaluations use an accumulated score to summarize inequalities between demographics, whereas relative evaluations explicitly report inequalities between all demographics.",
"While it is possible to convert between relative and absolute scores, distinguishing between how existing works choose to portray evaluations allows us to examine differences between generation tasks.",
"Absolute Evaluations We find that the transformation class of generation tasks favors bias evaluation through absolute metrics, which is possible because these tasks involve relatively more constrained forms of generation.",
"Examples of evaluation objectives through absolute scores include Peng et al. (2020) reducing non-normative generations, Ma et al. (2020) increasing the accuracy of the change in agency, Zmigrod et al. (2019) increasing the number of correct inflections, Huang et al. (2020) reducing individual and group fairness scores, and Sheng et al. (2021b) reducing the amount of ad hominems towards marginalized groups.",
"Studies of gender bias in machine translation are well-suited to evaluations using absolute scores: many use BLEU and its variants to evaluate correct gender inflections and translations (Moryossef et al., 2019; Escude Font and Costa-juss`a, 2019; Elaraby et al., 2018; Habash et al., 2019; Alhafni et al., 2020) or accuracy on WinoMT (Saunders and Byrne, 2020; Saunders et al., 2020; Kocmi et al., 2020; Costa-juss`a and de Jorge, 2020; Costa-juss`a et al., 2020; Basta et al., 2020; Choubey et al., 2021; Saunders et al., 2021).",
"Relative Evaluations In terms of evaluation through relative scores, examples from existing works are mainly from continuation generation tasks.",
"We infer that the less constrained, open-domain nature of continuation generation tasks makes it more preferable to evaluate mitigation through more flexible comparisons rather than absolute scores.",
"For autocomplete generation, Sheng et al. (2019, 2020) and Groenwold et al. (2020) compare regard or sentiment scores across demographics, Shwartz et al. (2020) compare names across various intermediate metrics, Vig et al. (2020) measure proportional differences between the amount of bias under a gendered versus ambiguous reading, and Yeo and Chen (2020) compare occupations generated for different genders.",
"Bias studies in dialogue generation use relative scores by comparing sentiment and offensive language discrepancies (Henderson et al., 2018; Liu et al., 2020a,b) and the percentage of gendered words (Dinan et al., 2020a).",
"Challenges A trade-off between framing biases as a relative or absolute metric is that relative metrics can be more flexibly aligned to normative concerns like social perception.",
"Absolute metrics that look for ratios of gendered words or other indicator words assume that there is a set of words that captures all the differences between demographic groups, regardless of whether these differences are related to normative definitions of harm.",
"There are also absolute metrics such as those of Huang et al. (2020) that can incorporate intermediate metrics that are more aligned with normative behavior, though these metrics reduce the notion of biases to a single value, which could erase historical inequalities between groups.",
"As a fairly nascent area of exploration, the study of biases in language generation still poses many challenges.",
"Throughout this paper, we discuss challenges associated with different components in a generation pipeline.",
"With a heightened awareness of the relevant body of work, we conclude with recommendations for open problems.",
"Bias-Aware Data Curation Many works have highlighted the harms and problems when collecting training datasets with limited awareness for potential harms.",
"Since effective models for NLG tasks are correlated with increasing training data sizes, biases in data collection (e.g., English-centric, drawn from popular Western media) remain a major contributor of biases that manifest in generation.",
"Additionally, datasets used to study biases in generation can also be limited (e.g., only for binary gender classes).",
"For more bias-aware data curation, we suggest diversifying datasets to include more viewpoints from various groups.",
"Understanding Trade-Offs Different methods for analysis, mitigation, and evaluation have unique trade-offs.",
"Existing works have been relatively small-scale and limited to a small number of biases for specific tasks.",
"Some useful questions to consider when developing methods to study generation biases are whether we can generalize methods to a diverse set of biases and a wide range of contexts.",
"It is also important to consider formulating metrics that would jointly mitigate biases and preserve other desired text qualities (e.g., diversity, fluency).",
"Interactive and Continuous Learning The difficulties of measuring and mitigating biases in generation can be reduced with a general framework for interactive and continuous learning.",
"Over time, such a system could learn from diverse opinions of what constitutes fair versus unfair generations across tasks.",
"A unified framework would centralize and highlight the importance of studying biases in generation, as well as fuel the development of a more comprehensive set of evaluations that may be useful for large-scale studies of impact.",
"Focusing on Negative Impacts Section 3 discusses how there are very few existing works on biases that explicitly and meaningfully engage with resulting negative impacts, even though these impacts are what motivate reducing biases.",
"By reframing efforts on reducing negative impacts rather than biases, we may be able to define metrics and progress that better correlate with reducing harm.",
"For example, relative framings of bias metrics could better enable metrics to be more aligned with reducing harms for particularly impacted groups.",
"We would like to thank Seraphina Goldfarb-Tarrant, Sunipa Dev, Jason Teoh, members of the Plus Lab, and our anonymous reviewers for the many helpful suggestions that went into this paper.",
"In this work, we present a survey and commentary on the progress and challenges for studying societal biases in language generation.",
"Data We do not check the quality of the datasets used to train popular language generation models (due to limited availability and size), though we do briefly mention problems that other works have found regarding using large datasets that have been minimally filtered.",
"Some of the surveyed datasets and metrics that are used for evaluating biases approximate binary genders using names typical of specific genders, and may be better re-formulated to avoid harms and curate a more accurate representation of different genders.",
"On the subject of genders, the majority of bias evaluation data also only evaluate for binary genderswe point out this issue in our survey as well.",
"Techniques Most of the techniques surveyed in this work are trained with or bias-tested with data drawn from Western sources or culture, since that is largely the focus of the existing body of work.",
"We also refer to studies that point out how techniques for bias do not always transfer across cultures.",
"Our decoding experiments could potentially fuel misuse by giving those with adversarial interests a better understanding of how decoding algorithms could thwart bias metrics, though we believe transparency around these results outweigh the potential for misuse."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"result",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain"
] |
[
"Representation of linguistic phenomena in computational language models is typically assessed against the predictions of existing linguistic theories of these phenomena.",
"Using the notion of polarity as a case study, we show that this is not always the most adequate set-up.",
"We probe polarity via so-called negative polarity items' (in particular, English any ) in two pre-trained Transformer-based models (BERT and GPT-2).",
"We show that at least for polarity metrics derived from language models are more consistent with data from psycholinguistic experiments than linguistic theory predictions.",
"Establishing this allows us to more adequately evaluate the performance of language models and also to use language models to discover new insights into natural language grammar beyond existing linguistic theories.",
"This work contributes to establishing closer ties between psycholinguistic experiments and experiments with language models.",
"Recent Transformer-based language representation models (LRMs) such as BERT and GPT-2 (De-vlin et al., 2019; Radford et al., 2019) show impressive results on practical text analysis tasks.",
"But do these models have access to complex linguistic notions?",
"The results in this domain are less clear as well as ways to best approach this question.",
"Instead of asking whether LRMs encode fragments of current linguistic theory, we will directly compare metrics derived from LRMs to corresponding human judgments obtained in psycholinguistic experiments.",
"The motivation for this is twofold.",
"First, linguistic theories can be inaccurate so, evaluating a model with respect to predictions of such theories is not informative about the model performance.",
"Second, robust abstract theoretical notions rarely correspond to robust judgments in Equal contribution.",
"humans, and theoretical' and perceived' versions of the same phenomenon can be significantly different (for instance, see Geurts 2003 on inference judgments; discussed in Section 2).",
"If this is something that LRMs inherit through training on human-produced texts, this makes LRMs an attractive possible component in an experimental pipeline, serving as a source of empirical predictions about human linguistic behaviour (Baroni, 2021; Linzen and Baroni, 2021).",
"As a case study, we focus on polarity : a complex property of sentences at the intersection of grammar and semantics.",
"We tackle polarity via the distribution of items that are sensitive to it namely, so-called negative polarity items (NPIs) like English any .",
"As a basic illustration of NPI sensitivity to polarity, consider a pair of sentences in (1) (* = ungrammaticality): (1)",
"(1-a) is a negative sentence (has negative polarity), and any is grammatical in it.",
"(1-b) is an affirmative sentence (has positive polarity) and any in this sentence is grammatically degraded compared to (1-a).",
"Apart from this paradigmatic contrast, as we discuss below, polarity contrasts are expressed in a variety of ways and are tied to semantics.",
"As a proxy for a grammaticality measure, we will use the probability of any in the masked token position (in BERT) (following Goldberg 2019; Warstadt et al. 2019",
"a.o.) and perplexity increase when adding any to a sentence (in GPT-2).",
"The differences in the metrics for the two different models stem from the differences in their architecture and training objectives.",
"For all experiments, we use non-fine-tuned pre-trained LRMs.",
"For this, we introduce our ANY dataset, which combines natural and synthetic data.",
"We find high levels of alignment between results of psycholinguistic experiments on monotonicity and NPIs, on the one hand and our LRM-derived results, on the other hand.",
"Furthermore, show how LRMs can be used to make new predictions about NPIs in contexts with different numerals and confirm these predictions in a psycholinguistic experiment.",
"This case study contributes to the complement of the interpretability of neural LRMs' research agenda: we can ask not only what linguistic tasks tell us about LRMs, but also what these models can help us find out about natural language (see Baroni 2021; Linzen and Baroni 2021 for a discussion along these lines).",
"The paper is structured as follows.",
"First, in section 2, we set up the context for our study: we describe the background in theoretical and experimental linguistics in the domains relevant for our discussion.",
"Section 3 discusses previous work on NPIs and polarity in computational linguistics.",
"Section 4 contains the description of our experimental method.",
"First, we introduce our ANY dataset; then, we describe the tests and metrics we use with BERT and with GPT-2 given our dataset.",
"Section 5 discusses our results.",
"In section 6, we go beyond state-of-the-art knowledge in experimental semantics and pragmatics and study the effect of the numeral on NPI acceptability first, we do a BERT study and then confirm the results on human participants.",
"Section 7 concludes: we propose directions for future work aligning experimental studies of language in humans and LRMs.",
"NPIs are expressions with limited linguistic distribution.",
"While their use is grammatical in some sentences, in other sentences their use results in ungrammaticality.",
"The distribution of NPIs like any is governed by the notion of polarity that is much more intricate than the simple presence or absence of sentential negation, as in (1).",
"For instance, in examples (2)-(3), (2) are negative enough' to allow for (=license') any , while (3) are not even though none of these sentences contain overt sentential negation.",
"(2)",
"a. None of the boxes contain anything.",
"b. Nobody talked to anybody.",
"c. At most five students did anything.",
"d. Few people had any thoughts (3)",
"a. *Some of the boxes contain anything.",
"b. *Somebody talked to anybody.",
"c. *At least 5 students did anything.",
"The notion of monotonicity builds on logical entailment.",
"Monotonicity of a linguistic environment defines its entailment patterns.",
"In (4), the domain in square brackets is upward-entailing (UE), or upward-monotone, as evidenced by the valid inference from sets ( textbooks ) to supersets ( books ): sentence (4-b) entails sentence (4-a).",
"In contrast, (5) shows a downward-entailing (DE), or downward-monotone, environment, which supports inferences from sets ( books ) to subsets ( textbooks ): (5-a) entails (5-b).",
"Expressions responsible for monotonicity of a linguistic context are a heterogeneous class that includes sentential operators such as negation and conditional if ; quantifiers ( some , no , few , at most five etc.); quantificational adverbs ( rarely , always etc.) and more.",
"Monotonicity is a highly abstract logical property interfacing with general reasoning.",
"At the same time, it is deeply embedded into natural language grammar and it is relevant for understanding of inner workings of different linguistic expressions, such as NPIs.",
"As shown by examples (1)-(3), DE contexts give rise to negative polarity, as seen from NPI acceptability; UE contexts are positive.",
"There is conflicting evidence concerning non-monotone contexts (Crnic, 2014; Alexandropoulou et al., 2020).",
"The connection between monotonicity and NPI licensing is undeniable also beyond examples (1)-(3) (see Fauconnier 1975; Ladusaw 1979 and much 1 This is a simplification.",
"This is true of so-called weak NPIs' a subclass of NPIs to which any belongs.",
"We will keep referring to them simply as NPIs since we are only discussing weak ones.",
"There are also other factors in weak NPI distribution apart from monotonicity (see Giannakidou 1998; Barker 2018).",
"Still, we focus on monotonicity as a crucial factor in NPI acceptability, following evidence discussed in the rest of the section.",
"subsequent literature).",
"Experimental evidence shows a bi-directional connection between inference judgments in a context and NPI acceptability in that context.",
"Chemla et al. (2011) found that the inferences a person considers valid in a given linguistic context predict how acceptable they would find an NPI in that same context.",
"Conversely, Denic et al. (2020) show that inferential judgments are modified by the presence of an NPI.",
"So, the two phenomena show clear mutual influence.",
"Importantly, both monotonicity and NPI acceptability in humans is not an all-or-nothing matter.",
"Acceptance of logically valid inferences and rejection of invalid ones varies to some extent from person to person and from context to context (Geurts, 2003; Sanford et al., 2007; Chemla et al., 2011; McNabb et al., 2016; Denic et al., 2020).",
"Chemla et al. (2011) report that logically DE sentences with no are perceived as DE by human participants only 72% of the time.",
"At most also logically a DE environment is only recognized as such 56% of the time.",
"Moreover, less than and at most truth-conditionally equivalent environments differ in DE inference endorsement by 11%.",
"The best predictor of NPI acceptability by humans was found to be not the logical entailment pattern but the subjective, or perceived, one (Chemla et al., 2011; Denic et al., 2020).",
"There is no single overarching psycholinguistic study testing the whole landscape of contexts.",
"Combined knowledge from an array of studies (Geurts, 2003; Sanford et al., 2007; Chemla et al., 2011; McNabb et al., 2016; Denic et al., 2020) produces the picture summarized in Table 1. 3 Previous work NPIs have been a topic of an investigation in the context of LRMs, both as a subset of a more general test dataset (Marvin and Linzen, 2018; Hu et al., 2020), and as the main object of study (Jumelet and Hupkes, 2018; Warstadt et al., 2019; Jumelet et al., 2021; Weber et al., 2021).",
"Here we focus on (Warstadt et al., 2019) as a representative case, as it shares with other previous studies its general set-up: assessment of LRMs against predictions of linguistic theory.",
"Warstadt et al. (2019) focus on NPIs in BERT.",
"Using a variety of testing techniques, both zero-shot and with fine-tuning, they conclude that BERT's ability to recognize NPI licensing environments and, therefore, to tell licit uses of NPIs from illicit ones varies a lot depending on the type of context, scope configuration and the type of experimental setting.",
"This might lead one to conclude that BERT's ability to recognize polarity of a sentence is not so great across the board.",
"Indeed, reports from other tasks that involve polarity and/or monotonicity seem to support this.",
"In particular, natural language inference has been reported to be hard for LRMs (Yanaka et al., 2019a,b; Talmor et al., 2020; Geiger et al., 2020).",
"Remarkably, Geiger et al. (2020) report that fine-tuning BERT on the SNLI dataset and then evaluating it on DE sentences (their NMoNLI dataset) results in 2.2% accuracy that is, the model practically ignores the monotonicity profile of the sentence.",
"But is alleged poor polarity detection to blame here?",
"Importantly for our study, Warstadt et al. (2019) judge BERT's recognition of NPI acceptability against logical monotonicity rather than subjective monotonicity as uncovered by psycholinguistic experiments.",
"So, we believe that these results deserve a second look.",
"One of the measuring techniques in Warstadt et al. (2019) is very close to one of the two techniques we will adopt in this paper.",
"It is a version of Cloze Test adapted for MLM, where probabilities of candidates for the masked position are compared.",
"We discuss the set-up in section 4.",
"Finally, the idea of targeted LRM evaluations modeled after psycholinguistic experiments is being used in an increasing number of recent studies, albeit mainly in the domains of syntax and lexical semantics (Gulordava et al., 2018; Linzen et al., 2016; Marvin and Linzen, 2018; Wilcox et al., 2018; Chowdhury and Zamparelli, 2018; Futrell et al., 2019; Nair et al., 2020; Abdou et al., 2020; Ettinger, 2020).",
"We move on to describing our dataset, procedure and results.",
"We perform two types of tests using the dataset that we produce for this purpose.",
"One experiment is done with BERT, the other one with GPT-2.",
"Both experiments are performed in a zero-shot setting using the pre-trained models without fine-tuning.",
"The goal of these experiments is to test the contrasts between types of sentences described in Table 1. We will do this by comparing the relevant pairs of contexts along LRM-derived metrics that are meant to capture grammaticality / acceptability.",
"First, we describe the dataset; then we explain the experiment procedure for BERT and GPT-2; finally, we report and discuss the results.",
"Our dataset consists of two parts: one with natural and one with synthetic data.",
"We scraped the Gutenberg Project and a subset of English Wikipedia to obtain the list of sentences that contain any .",
"Next, using a combination of heuristics 3 , we filtered the result with regular expressions to produce two sets of sentences (the second set underwent additional manual filtering): 3844 sentences with sentential negation and a plural object with any to the right to the verb; 330 sentences with nobody / no one as subject and a plural object with any to the right.",
"The first set was modified to substitute the negated verb by its non-negated version, so we contrast 3844 sentences with negation and 3844 affirmative ones ( NEG vs. AFF ).",
"In the second dataset, we substituted nobody for somebody and no one for someone , to check the SOME vs. NO contrast.",
"We used the following procedure.",
"First, we automatically identified the set of verbs and nouns to build our items from.",
"To do so, we started with bert-base-uncased 4 vocabulary.",
"Taking its non-subword lexical tokens is an easy way to get a list of simple and common words.",
"We ran this list through a SpaCy POS tagger 5 .",
"Further, we lem-matized the result using pattern 6 and dropped duplicates.",
"Then, we filtered out modal verbs, sin-gularia tantum nouns and some visible lemmatiza-tion mistakes.",
"Finally, we filtered out non-transitive verbs to give the dataset a bit of a higher baseline of grammaticality.",
"7 We kept top 100 nouns and top 100 verbs from the resulting lists these are the lexical entries we will deal with.",
"Then, we generated sentences with these words, using the following pattern: A(n) noun x verb.PST.SG a(n) noun y 8 For this, we iterate over the 100 nouns in the subject and the object positions (excluding cases where the same noun appears in both positions) and over the 100 verbs.",
"The procedure gave us 990k sentences like these: (7)",
"Some are more natural, make more sense and adhere to the verb's selectional restrictions better than the others.",
"To control for this, we ran the sentences through GPT-2 9 and assigned perplexity to all candidates.",
"Then we took the bottom 20k of the sentences ( the most natural' ones) as the core of our synthetic dataset.",
"4 https://huggingface.co/ bert-base-uncased 5 https://github.com/explosion/ spacy-models 6 https://pypi.org/project/Pattern/ 7 Our procedure was equivalent to that in github.com/ Mirith/Verb-categorizer 8 We use the singular indefinite object for this part of the procedure to avoid idiomatic verb phrases ( change hands , join forces ) at the top of the list.",
"9 https://huggingface.co/gpt2 We tried to approximate the naturalness' of examples by a combination of measures.",
"We rely on insights from different models (GPT-2, BERT, corpus-based statistical insights into verb transitivity) on different stages of the dataset creation.",
"Still, some sentences sound intuitively weird'.",
"We do not see this as a problem though we will not rely directly on the naturalness of individual examples, rather we will measure the effect of the NPI across the dataset (as is common practice when working with synthetic data see, for example, Geiger et al. 2020, 2021).",
"The amount of the examples will allow us to generalize across varying parts of the sentences to make sure that the results can be attributed to the parts we are interested in: items responsible for the polarity of the sentence.",
"The quantity of test items is crucial for reproducing psycholinguistic experiments on LRMs while in the former one sentence gives rise to a number of observations when different human participants make a judgment, in the latter one test sentence gives one observation only.",
"With this in mind, we use the 20k sentences produced by the previous steps to build the parts of our synthetic dataset.",
"Each of the sentences has a pluralized (not singular anymore!) object in combination with any : any roads .",
"The subject type varies in different datasets comprising our synthetic data.",
"Here is what we end up with: 12 datasets 20k sentences each: AFF (8-a); NEG (8-b); SOME (8-c); NO ; MANY ; FEW ; MORE THAN 5; FEWER THAN 5; AT LEAST 5; AT MOST 5; EXACTLY 5; BETWEEN 5 AND 10; 2 datasets 8230 sentences each: SOMEBODY / SOMEONE / SOMETHING (8-d); NOBODY / NO ONE / NOTHING (replacing the whole subject, duplicates deleted) (8)",
"a. A girl crossed any roads.",
"b. A girl didn't cross any roads.",
"c. Some girls crossed any roads.",
"d. Somebody crossed any roads.",
"Overall, sentences in all parts of our dataset vary in the type of context it instantiates (simple affirmative, negation, different quantifiers) but all sentences contain any in the object position in combination with a plural noun.",
"The next two subsections explain the metrics derived from the two model we study, stemming from the differences in their architecture and training objectives.",
"The Cloze Test on BERT is very similar to that described in (Warstadt et al., 2019).",
"In each of the sentences in the dataset, we mask any and ask BERT for predictions for the masked position: [CLS] Few girls crossed [MASK] roads .",
"We extract the probability that BERT assigns to any in the masked position, as well as the rank of any in BERT vocabulary sorted by the probability in the masked position.",
"Further, we compare these values between conditions (= different types of contexts).",
"The comparison between a pair of conditions will be expressed as the percentage of sentences in our dataset where any got a higher probability in the first condition compared to the probability of any in the corresponding sentence in the second condition.",
"The same for the rank of any instead of probability.",
"For example, AFF : NEG : 0 .",
"12% reads as: in 0.12% of the dataset, any got a higher probability (or a higher rank) in an affirmative sentence compared to the corresponding sentence with negation.",
"Intuitively: that most of the time, a sentence with negation makes a better environment for any than the minimally different affirmative sentence.",
"In this test, for each sentence in the dataset, we calculate perplexity of this sentence (9-a) according to the GPT-2 model and perplexity of that same sentence with any deleted (9-b):",
"We take the difference between these perplexity values normalized by the number of tokens as our measure of how much the presence of any affects the naturalness' of each particular sentence.",
"As before, we compare these values for different conditions.",
"For example, AFF : NEG : 0 .",
"25% reads as: in 0.25% of sentences, the presence of any leads to a smaller increase in perplexity for the affirmative sentence, compared to the analogous negative sentence.",
"That is, most of the time the presence of any worsens affirmative sentences a lot, while the corresponding negative one less so.",
"dropoulou et al., 2020), which measure the differences between acceptability scores with and without any for different types of contexts.",
"We will discuss results from BERT and GPT-2 together, because they mostly agree.",
"One general result that allows us to limit our attention to one of the two BERT metrics is that BERT rank and BERT probability produce the same order on all condition pairs of interest except for one ( AT MOST , AT LEAST ) and we will only discuss BERT probabilities in this section.",
"The 20k synthetic data results are summarized in Fig. 1. The conditions in the 20k results are sorted for readability.",
"8k synthetic data results: NO, SOME : 99.76% (BERT-prob); 99.56% (GPT-PPL-diff).",
"In short, all predictions based on psycholinguistic evidence discussed in section 2 (Table 1) are confirmed by our LRM data.",
"As a sanity check, we compare these results with the results of the same procedure on our natural dataset, and they are very similar: NEG , AFF : 97.21% (BERT-prob), 97.17% (GPT-PPL-diff); NO-, SOME : 98.29% (BERT-prob), 96.98% (GPT-PPL-diff).",
"The take home message from these results is that LRMs can tell between negative and positive polarity, as well as between different types of contexts by their monotonicity, as measured by NPI acceptability .",
"Moreover, what is encoded is a subjective version of the relevant property, similar to what is reflected in graded non-categorical judgments seen in psycholinguistic experiments.",
"Establishing this, first of all, helps us make more sense of the metrics derived from such models and helps draw a more accurate line between noise and meaningful output.",
"Second, it encourages a closer tie between experiments with humans and with LRMs: LRMs encode a snapshot of numerous subjective linguistic intuitions, and maybe we can use LRMs to get indirect access to speakers' shared intuitions as a source of new theoretically relevant linguistic generalisations.",
"The next section is a pilot attempt in this direction.",
"We establish a new generalization looking at LRM data and then confirm it in a psycholinguistic experiment.",
"For the conditions which involve numerals we left one parameter unexplored so far, namely, the numeral itself.",
"In this section, we look at the dependency between NPI acceptability and the numeral.",
"There is no experimental data on this.",
"Theoretical literature tentatively suggests that the higher the numeral, the less acceptable an NPI in its scope (Crnic, 2014): (10) Exactly two of the boxes contain anything (11) ??",
"However, the judgments are subtle and theoretical discussion still waits for an empirical basis.",
"Let us look at our conditions with numerals (apart from BETWEEN we set it aside as too complicated).",
"For each of the conditions, we keep everything constant apart from the numeral and check the effect the numeral has on NPI acceptability.",
"We looked at numerals with these numeric values: [2 20 , 30 , 40 , 50 , 60 , 70 , 80 , 90] .",
"As before, we made pair-wise comparisons between sentences in our synthetic dataset that differ only in the numeral it contains.",
"The measures are the same as before.",
"Both models show an upward trend: the higher the numeral, the worse the context becomes for any .",
"This tendency is shown on Fig. 2. The lines show comparison between sentence pairs in which the second one has a numeral higher than the one in the first sentence by n , where n is plotted on the x axis (so, 10 on the x axis comprises all pairs that differ by 10 2 , 12 , 3 , 13 ...).",
"On y , we show the percentage of pairs in which the first sentence showed higher probability of any than the second one.",
"The effect of the numeral on the NPI acceptability can be sometimes quite strong: to the point of flipping the better NPI licenser' relation in a pair of contexts.",
"For example, this is the case for AT LEAST and MORE THAN in BERT.",
"They have the same logical monotonicity profile (both UE).",
"However, we can find a pair of numerals such that flipping them orders the resulting contexts differently: AT LEAST 2 > MORE THAN 70: 94% MORE THAN 2 > AT LEAST 70: 68% Let us check the effect of numeral on humans, as well as a licensing flip due to the numeral.",
"For the ease of comparison between our LRM experiment data in the previous section and the experiment on human participants, we formulate the latter as a forced-choice task .",
"The participants saw pairs of sentences and were instructed to pick the one that is more grammatical.",
"The study has a 2x2 design with these factors: NUMERAL : five vs. seventy QUANTIFIER : at least vs. more than This gives six forced-choice test conditions: at least five vs. at least seventy at least five vs. more than five at least five vs. more than seventy at least seventy vs. more than five at least seventy vs. more than seventy more than five vs. more than seventy These prefixes were used to generate pairs of sentences using patterns from the 20k synthetic dataset.",
"We randomly selected 50 out of the 20k patterns, which results in 2500 pattern pairs.",
"With 6 test conditions, this amounts to 15k unique test items .",
"We used Toloka to recruit self-reported native speakers of English for this experiment.",
"10 They were allowed to complete the full task after they passed a test with 10 control items with 7 or more correctly identified grammatical sentences.",
"In the main part of the task, each participant saw 38 pairs of sentences : 22 were filler/control items and 16 test items.",
"All participants saw the same filler/control items (random order), test items were taken from the pool of 15k test items in random order and evaluated with no overlap.",
"In total, 968 participants were recruited.",
"We filtered out the data from those who gave wrong answers to more than 30% of the filter/control items in the main part of the task.",
"We were left with 656 participants (= 10496 test items; more than a 2/3 of our pool of test items).",
"Fig. 3 shows the results of the experiment.",
"We used the binomial test to analyze the data.",
"The boxes in the plot show the 95% confidence interval.",
"Result #1 : The effect of the numeral is confirmed both within and across the two types of contexts (lines 1, 6, 9 and 10 in Fig. 3).",
"Result #2: AT LEAST and MORE THAN are not ordered with respect to each other (lines 7 and 8).",
"It is possible to find a particular numeral where the difference reaches significance (line 2), but overall there is no clear order.",
"Result #3: Our data do not show a statistically significant flip between contexts with different numeral values.",
"Even though one side of the flip is there (line 3), the flip of this pair did not reach significance (line 5).",
"Conclusion: The results are generally in line with the trend observed in section 6: the higher the numeral, the worse the context gets for an NPI.",
"This is the first experimental confirmation of this effect, to the best of our knowledge.",
"It is noteworthy that we first found it via LRM and then confirmed it with human participants.",
"A more specific result of this effect what we call a flip' is seen in our data as a tendency, but the effect did not reach significance.",
"It could be an LRM artifact or the lack of it could be an artifact of our experiment.",
"A different choice of numerals or a higher number of participants could sharpen these results.",
"We leave this for future work.",
"Our experiments provide solid support for an approach under which LRM performance is compared directly to psycholinguistic data rather than to predictions of a linguistic theory.",
"This opens up prospects for research that will result in a more empirically grounded picture of where the limits of LRM abilities lie.",
"Our results tell us something new about LRMs but also suggest that LRMs can be included in the experimental loop of theoretical semantics alongside with traditional experiments.",
"To pilot this idea, we conducted an experiment on the effect of the numeral on NPI acceptability.",
"We confirmed our LRM findings in a parallel psycholinguistic study.",
"In this paper, we only explore the connection between behavioral experiments and LRM-derived metrics.",
"What about online measures in psycholinguistic studies?",
"Can we find a usable analogue to, for example, eye-tracking or reaction times in self-paced reading studies that is, studies that tell us which parts of input are important in processing?",
"One obvious LRM-based candidate is attention.",
"We took a preliminary look at BERT attention distribution in sentences with any in an attempt to identify the attention head that contributes most to monotonicity-via-NPIs (see Voita et al. 2019 for a discussion of attention head specialization).",
"To factor out linear position, we focused on the natural part of our dataset.",
"We took the sentences that contained both a quantifier with a clear monotonicity profile ( somebody , nobody , someone etc.) and any ; calculated attention from any to the quantifier for every layer and every attention head and averaged it across sentences.",
"Then we sorted the results and went through the top of the resulting list.",
"We found that the attention head (6,2) of bert-base-uncased model 6th layer, attention head 2 seems to specialize in precisely what we are looking for.",
"Saliency maps below show that in a variety of contexts beyond the ones we checked for the purposes of this paper, monotonicity-affecting items are highlighted buttressing the hypothesis that monotonicity is important for NPI licensing ( without , do -support in a question, if , lexical negation): [CLS] it felt odd without any wards on it .",
"Additionally, this attention head reflects the role of the numeral in NPI licensing that we established in section 6: in all contexts with numerals that we looked at, a lot of attention goes from any to both the quantifier (say, exactly ) and the numeral that comes with it.",
"Moreover, the higher the numeral, the more attention goes to it, compared to the amount of attention that goes to the quantifier: [CLS] exactly two games told any stories .",
"More work is needed to verify and interpret these patterns systematically and compare them to other attribution measures and to online metrics in psycholinguistic studies.",
"We thank the anonymous ARR reviewers; the audience and organizers of the CNRS Seminar on the Interactions between Formal and Computational Linguistics; Toloka team for the help with the human assessment study.",
"We also thank Alexandre Cremers, Ekaterina Garmash, Borislav Kozlovskii, Rick Nouwen, and Denis Paperno for the discussions of our ideas and earlier versions of the paper."
] | [
"abstain",
"result",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Data-to-text generation can be conceptually divided into two parts: ordering and structuring the information (planning), and generating fluent language describing the information (realization).",
"Modern neural generation systems conflate these two steps into a single end-to-end differentiable system.",
"We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that focuses only on realization.",
"For training a plan-to-text generator, we present a method for matching reference texts to their corresponding text plans.",
"For inference time, we describe a method for selecting high-quality text plans for new inputs.",
"We implement and evaluate our approach on the WebNLG benchmark.",
"Our results demonstrate that decoupling text planning from neural realization indeed improves the system's reliability and adequacy while maintaining fluent output.",
"We observe improvements both in BLEU scores and in manual evaluations.",
"Another benefit of our approach is the ability to output diverse realizations of the same input, paving the way to explicit control over the generated text structure.",
"Consider the task of data-to-text generation, as ex-emplified in the WebNLG corpus (Colin et al., 2016).",
"The system is given a set of RDF triplets describing facts (entities and relations between them) and has to produce a fluent text that is faithful to the facts.",
"An example of such triplets is: John, birthPlace, London John, employer, IBM With a possible output: This research was supported in part by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1) and by a grant from Theo Hoffenberg and Reverso.",
"1. John, who was born in London, works for IBM .",
"Other outputs are also possible: 2. John, who works for IBM, was born in London.",
"These variations result from different ways of structuring the information: choosing which fact to mention first, and in which direction to express each fact.",
"Another choice is to split the text into two different sentences, e.g., 5. John works for IBM.",
"Overall, the choice of fact ordering, entity ordering, and sentence splits for these facts give rise to 12 different structures, each of them putting the focus on somewhat different aspect of the information.",
"Realistic inputs include more than two facts, greatly increasing the number of possibilities.",
"2a.",
"John works for IBM and was born in London.",
"5a.",
"John is employed by IBM.",
"He was born in London.",
"We refer to the first set of choices (how to structure the information) as text planning and to the second (how to verbalize a plan) as plan realization .",
"1 The distinction between planning and realization is at the core of classic natural language generation (NLG) works (Reiter and Dale, 2000; Gatt and Krahmer, 2017).",
"However, a recent wave of neural NLG systems ignores this distinction 1 Note that the variation from 5 to 5a includes the introduction of a pronoun.",
"This is traditionally referred to as referring expression generation (REG), and falls between the planning and realization stages.",
"We do not treat REG in this work, but our approach allows natural integration REG systems' outputs.",
"and treat the problem as a single end-to-end task of learning to map facts from the input to the output text (Gardent et al., 2017; Dusek et al., 2018).",
"These neural systems encode the input facts into an intermediary vector-based representation, which is then decoded into text.",
"While not stated in these terms, the neural system designers hope for the network to take care of both the planning and realization aspect of text generation.",
"A notable exception is the work of Puduppully et al. (2018), who introduce a neural content-planning module in the end-to-end architecture.",
"While the neural methods achieve impressive levels of output fluency, they also struggle to maintain coherency on longer texts (Wiseman et al., 2017), struggle to produce a coherent order of facts, and are often not faithful to the input facts, either omitting, repeating, hallucinating or changing facts (the NLG community refers to such errors as errors in adequacy or correctness of the generated text).",
"When compared to template-based methods, the neural systems win in fluency but fall short regarding content selection and faithfulness to the input (Puzikov and Gurevych, 2018).",
"Also, they do not allow control over the output's structure.",
"We speculate that this is due to demanding too much of the network: while the neural system excels at capturing the language details required for fluent realization, they are less well equipped to deal with the higher levels text structuring in a consistent and verifiable manner.",
"Proposal we propose an explicit, symbolic, text planning stage, whose output is fed into a neural generation system.",
"The text planner determines the information structure and expresses it unambiguouslyin our case as a sequence of ordered trees.",
"This stage is performed symbolically and is guaranteed to remain faithful and complete with regards to the input facts.",
"Once the plan is determined, 2 a neural generation system is used to transform it into fluent, natural language text.",
"By being able to follow the plan structure closely, the network is alleviated from the need to determine higher-level structural decisions and can track what was already covered more easily.",
"This allows the network to perform the task it excels in, producing fluent, natural language outputs.",
"2 The exact plan can be determined based on a data-driven scoring function that ranks possible suggestions, as in this work, or by other user provided heuristics or a trained ML model.",
"The plans' symbolic nature and precise relation to the input structures allow verification of their correctness.",
"We demonstrate our approach on the WebNLG corpus and show it results in outputs which are as fluent as neural systems, but more faithful to the input facts.",
"The method also allows explicit control of the output structure and the generation of diverse outputs (some diversity examples are available in the Appendix).",
"We release our code and the corpus extended with matching plans in https://github.com/ AmitMY/chimera .",
"Task Description Our method is concerned with the task of generating texts from inputs in the form of RDF sets.",
"Each input can be considered as a graph, where the entities are nodes, and the RDF relations are directed labeled edges.",
"Each input is paired with one or more reference texts describing these triplets.",
"The reference can be either a single sentence or a sequence of sentences.",
"Formally, each input G consists of a set of triplets of the form ( s i , r i , o i ) , where s i , o i V (subject and object) correspond to entities from DBPedia, and r i R is a labeled DBPedia relation ( V and R are the sets of entities and relations, respec-tively).",
"For example, Figure 1a shows a triplet set G and Figure 1d shows a reference text.",
"We consider the data set as a set of input-output pairs ( G, ref ) , where the same G may appear in several pairs, each time with a different reference.",
"Method Overview We split the generation process into two parts: text planning and sentence realization.",
"Given an input G , we first generate a text plan plan ( G ) specifying the division of facts to sentences, the order in which the facts are expressed in each sentence, and the ordering of the sentences.",
"This data-to-plan step is non-neural (Section 3).",
"Then, we generate each sentence according to the plan.",
"This plan-to-sentence step is achieved through an NMT system (Section 4).",
"Figure 1 demonstrates the entire process.",
"To facilitate our plan-based architecture, we devise a method to annotate ( G, ref ) pairs with the corresponding plans (Section 3.1), and use it to construct a dataset which is used to train the plan-to-text translation.",
"The same dataset is also used to devise a plan selection method (Section 3.2).",
"considering the dataset-specific vs. general applicability aspects of our method.",
"On the low-level details, this",
"work is very much dataset dependent.",
"We show how to represent plans for specific datasets, and, importantly for this work, how to automatically construct plans for this dataset given inputs and expected natural language outputs.",
"The method of plan construction will likely not generalize as is to other datasets, and the plan structure itself may also be found to be lacking for more demanding generation tasks.",
"However, on a higher level, our proposal is very general: intermediary plan structures can be helpful, and one should consider ways of obtaining them, and of using them.",
"In the short term, this will likely take the form of ad-hoc explorations of plan structures for specific tasks, as we do here, to establish their utility.",
"In the longer term, research may evolve to looking into how general-purpose plan are structured.",
"Our main message is that the separation of planning from realization, even in the context of neural generation, is a useful one to be considered.",
"Plan structure Our text plans capture the division of facts to sentences and the ordering of the sentences.",
"Additionally, for each sentence, the plan captures (1) the ordering of facts within the sentence; (2) The ordering of entities within a fact, which we call the direction of the relation.",
"For example, the { A, location, B } relation can be expressed as either A is located in B or B is the location of A ; (3) the structure between facts that share an entity, namely chains and sibling structures as described below.",
"A text plan is modeled as a sequence of sentence plans, to be realized in order.",
"Each sentence plan is modeled as an ordered tree, specifying the structure in which the information should be realized.",
"Structuring each sentence as a tree enables a clear succession between different facts through shared entities.",
"Our text-plan design assumes that each entity is mentioned only once in a sentence, which holds in the WebNLG corpus.",
"The ordering of the entities and relations within a sentence is determined by a pre-order traversal of the tree.",
"Figure 1b shows an example of a text plan.",
"Formally, given the input G , a text plan T is a sequences of sentence plans T = s 1 , ..., s NT .",
"A sentence plan s is a labeled, ordered tree, with arcs of the form ( h, (cid:96), m ) , where h, m V are head and modifier nodes, each corresponding to an input entity, and (cid:96) = ( r, d ) is the relation between nodes, where r R is the RDF relation, and d { , } denotes the direction in which the relation is expressed: d = if ( h, r, m ) G , and d = if ( m, r, h ) G .",
"A text plan T is said to match an input G iff every triplet ( s, r, o ) in G is expressed in T exactly once, either as an edge ( s, ( r, ) , o ) or as an edge ( o, ( r i , ) , s ) .",
"Chains ( h, (cid:96) 1 , m ) , ( m, (cid:96) 2 , x ) represent a succession of facts that share a middle entity (Figure 2a), while siblings nodes with the same parent ( h, (cid:96) 1 , m 1 ) , ( h, (cid:96) 2 , m 2 ) represents a succession of facts about the same entity (Figure 2b).",
"Sibling and chain structures can be combined (Figure 2c).",
"An example of an input we addressed in the WebNLG corpus, and matching text plan is given in Figure 1b.",
"Exhaustive generation For small-ish input graphs G such as those in the WebNLG task we consider hereit is trivial to generate all possible plans by first considering all the ways of grouping the input into sets, then from each set generating all possible trees by arranging it as an undirected graph and performing several DFS traversals starting from each node, where each DFS traversal follows a different order of children.",
"3 3.1 Adding Plans to Training Data While the input RDFs and references are present in the training dataset, the plans are not.",
"We devise a method to recover the latent plans for most of the input-reference pairs in the training set, constructing a new dataset of ( G, ref, T ) triplets of inputs, reference texts, and corresponding plans.",
"We define the reference ref , and the text-plan T to be consistent with each other iff",
"(a) they exhibit the same splitting into sentencesthe facts in every sentence in ref are grouped as a sentence plan in T , and",
"(b) for each corresponding sentence and sentence-plan, the order of the entities is identical.",
"The matching of plans to references is based on the observations that",
"(a) it is relatively easy to identify entities in the reference texts, and a pair of entities in an input is unique to a fact;",
"(b) it is relatively easy to identify sentence splits;",
"(c) a reference text and its matching plan must share the 3 If a graph includes a cycle (0.4% of the graphs in the WebNLG corpus contain cycles) we skip it, as it is guaranteed that a different split will result in cycle-free graphs.",
"sentence splits.",
"Sentence split consistency We define a set of triplets to be potentially consistent with a sentence iff each triplet contains at least one entity from the sentence (either its subject or object appear in the sentence), and each entity in the sentence is covered by at least one triplet.",
"Given a reference text, we split it into sentences using NLTK (Bird and Loper, 2004), and look for divisions of G into disjoint sets such that each set is consistent with a corresponding sentence.",
"For each such division, we consider the exhaustive set of all induced plans.",
"Facts order consistency A natural criterion would be to consider a reference sentence and a sentence-plan originating from the corresponding RDF as matching iff the sets of entities in the sentence and the plan are identical, and all entities appear in the same order.",
"4 Based on this, we could represent each sentence and each plan as a sequence of entities, and verify the sequences match.",
"However, using this criterion is complicated by the fact that it is not trivial to map between the entities in the plan (that originate from the RDF triplets) and the entities in the text.",
"In particular, due to language variability, the same plan entity may appear in several forms in the textual sentences.",
"Some of these variations (i.e. A.F.C Fylde vs. AFC Fylde) can be recognized heuristically, while others require external knowledge (UK conservative party vs. the Tories), and some are ambiguous and require full-fledged co-reference resolution (them, he, the for-mer).",
"Hence, we relax our matching criterion to allow for possible unrecognized entities in the text.",
"Concretely, we represent each sentence plan as a sequence of its entities ( pe 1 , ..., pe k ) , and each sentence as the sequence of its entities which we managed to recognize and to match with an input entity ( se 1 , ..., se m ) , m k .",
"5 We then consider a sentence and a sentence-plan to be consistent if the following two condi-4 An additional constraint is that no two triplets in the RDFs set share the same entities.",
"This is to ensure that if two entities appeared in a structure, only one relation could have been expressed there.",
"This almost always holds in the WebNLG corpus, failing on only 15 out of 6,940 input sets.",
"5 We match plan entities to sentence entities using greedy string matching with Levenshtein distance (Levenshtein, 1966) for each token and a manually tuned threshold for a match.",
"While this approach results in occasional false positives, most cases are detected correctly.",
"We match dates by using the chrono-python package that parses dates from natural language texts.",
"tions hold: (1) The sentence entities ( se 1 , ..., se m ) are a proper sub-sequence of the plan entities ( pe 1 , ..., pe k ) ; and (2) each of the remaining entities in the plan already appeared previously in the plan.",
"The second condition accounts for the fact that most un-identified entities are due to pronouns and similar non-lexicalized referring expressions, and that these only appear after a previous occurrence of the same entity in the text.",
"6 3.2 Test-time Plan Selection To select the plan to be realized, we propose a mechanism for ranking the possible plans.",
"Our plan scoring method is a product-of-experts model, where each expert is a conditional probability estimate for some property of the plan.",
"The conditional probabilities are MLE estimates based on the plans in the training set constructed in section 3.1.",
"Estimates involving relation names are smoothed using Lidstone smoothing to account for unseen relations.",
"We use the following experts: Relation direction For every relation r R , we compute its probability to be expressed in the plan in its original order ( d = ) or in the reverse order ( d = ): p dir ( d = R ) .",
"This captures the tendency of certain relations to be realized in the reversed order to how they are defined in the knowledge base.",
"For example, in the WebNLG corpus the relation manager is expressed as a variation of is managed by instead of one of is the manager of in 68% of its occurrences ( p dir ( d = manager ) = 0 .",
"68 ).",
"Global direction We find that while the probability of each relation to be realized in a reversed order is usually below 0.5, still in most plans of longer texts there are one or two relations that appear in the reversed order.",
"We capture this tendency using an expert that considers the conditional probability p gd ( nr = n G ) of observing n reversed edges in an input with G triplets.",
"Splitting tendencies For each input size, we keep track of the possible ways in which the set of facts can be split to subsets of particular sizes.",
"That is, we keep track of probabilities such as p s ( s = [ 3 , 2 , 2 ] 7 ) of realizing an input of 7 RDF triplets as three sentences, each realizing the corresponding number of facts.",
"plan as a sequence of the relation types expressed",
"6 A sensible alternative would be to use a coreference resolution system at this stage.",
"In our case it turned out to not help, and even performed somewhat worse.",
"in it r 1 , . . . , r k followed by an EOS symbol, and compute the markov transition probabilities over this sequence: p trans ( r 1 , r 2 , . . . , r k , EOS ) = i = 1 ,k p t ( r i + 1 r i ) .",
"The expert is the product of the transition probabilities of the individual sentence plans in the text plan.",
"This captures the tendencies of relations to follow each other and in particular, the tendencies of related relations such as birth-place and birth-date to group, allowing their aggregation in the generated text ( John was born in London on Dec 12th, 1980 ).",
"Each of the possible plans are then scored based on the product of the above quantities.",
"7 The scores work well for separating good from lousy text plans, and we observe a threshold above which most generated plans result in adequate texts.",
"We demonstrate in Section 6 that realizing highly-ranked plans manages to obtain good automatic realization scores.",
"We note that the plan in Figure 1b is the one our ranking algorithm ranked first for the input in Figure 1a.",
"Possible Alternatives In addition to the single plan selection, the explicit planning stage opens up additional possibilities.",
"Instead of choosing and realizing a single plan, we can realize a diverse set of high-scoring plans, or realizing a random high-scoring plan, resulting in a diverse and less templatic set of texts across runs.",
"This relies on the combination of two factors: the ability of the scoring component to select plans that correspond to plausible human-authored texts, and the ability of the neural realizer to faithfully realize the plan into fluent text.",
"While it is challenging to directly evaluate the plans adequacy, we later show an evaluation of the plan realization component.",
"Figure 3 shows three random plans for the same graph and their realizations.",
"Further examples of the diversity of generation are given in the appendix.",
"The explicit and symbolic planning stage also allows for user control over the generated text, either by supplying constraints on the possible plans (e.g., number of sentences, entities to focus on, the order of entities/relations, or others) or by supplying complete plans.",
"We leave these options for future work.",
"7 We note that for an input of n triplets, there are O ( 2 2 n + n n ! ) possible plans, making this method prohibitive for even moderately sized input graphs.",
"However, it is sufficient for the WebNLG dataset in which n 7 .",
"For larger graphs, better plan scoring and more efficient search algorithms should be devised.",
"We leave this for future work.",
"For plan realization, we use an off-the-shelf vanilla neural machine translation (NMT) system to translate plans to texts.",
"The explicit division to sentences in the text plan allows us to realize each sentence plan individually which allows the realizer to follow the plan structure within each (rather short) sentence, reducing the amount of information that the model needs to remember.",
"As a result, we expect a significant reduction in overand under-generation of facts, which are common when generating longer texts.",
"Currently, this comes at the expense of not modeling discourse structure (i.e., referring expressions).",
"This deficiency may be handled by integrating the discourse into the text plan, or as a post-processing step.",
"8 .",
"We leave this for future work.",
"To use text plans as inputs to the NMT, we linearize each sentence plan by performing a preorder traversal of the tree, while indicating the tree structure with brackets (Figure 1c).",
"The directed relations ( r, d ) are expressed as a sequence of two or more tokens, the first indicating the direction and the rest expressing the relation.",
"9 Entities that are identified in the reference text are replaced with single, entity-unique tokens.",
"This allows the NMT system to copy such entities from the input rather than generating them.",
"Figure 1d is an example of possible text resulting from such linearization.",
"Training details We use a standard NMT setup with a copy-attention mechanism (Gulcehre et al., 2016) 10 and the pre-trained GloVe.6B word em-8 Minimally, each entity occurrence can keep track of the number of times it was already mentioned in the plan.",
"Other alternatives include using a full-fledged referring expression generation system such as NeuralREG (Ferreira et al., 2018) 9 We map DBPedia relations to sequences of tokens by splitting on underscores and CamelCase.",
"10 Concretely, we use the OpenNMT toolkit (Klein et al., 2017) with the copy attn flag.",
"Exact parameter values are beddings 11 (Pennington et al., 2014).",
"The pre-trained embeddings are used to initialize the relation tokens in the plans, as well as the tokens in the reference texts.",
"Generation details We translate each sentence plan individually.",
"Once the text is generated, we replace the entity tokens with the full entity string as it appears in the input graph, and lexicalize all dates as Month DAY+ordinal, YEAR (i.e., July 4th, 1776 ) and for numbers with units (i.e., 5(min-utes) ) we remove the parenthesis and quotation marks ( 5 minutes ).",
"The WebNLG challenge (Colin et al., 2016) consists of mapping sets of RDF triplets to text including referring expression generation, aggregation, lexicalization, surface realization, and sentence segmentation.",
"It contains sets with up to 7 triplets each along with one or more reference texts for each set.",
"The test set is split into two parts: seen , containing inputs created for entities and relations belonging to DBpedia categories that were seen in the training data, and unseen , containing inputs extracted for entities and relations belonging to 5 unseen categories.",
"While the unseen category is conceptually appealing, we view the seen category as the more relevant setup: generating fluent, adequate and diverse text for a mix of known relation types is enough of a challenge also without requiring the system to invent verbalizations for unknown relation types.",
"Any realistic generation system could afford to provide at least a few verbalizations for each relation of interest.",
"We thus focus our attention mostly on the seen case (though our system does also perform well on the unseen case).",
"Following Section 3.1, we manage to match a detailed in the appendix.",
"11 nlp.stanford.edu/data/glove.6B.zip consistent plan for 76% of the reference texts and use these plan-text pairs to train the plan realization NMT component.",
"Overall, the WebNLG training set contains 18 , 102 RDF-text pairs while our plan-enhanced corpus contains 13 , 828 plan-text pairs.",
"12 Compared Systems We compare to the best submissions in the WebNLG challenge (Gardent et al., 2017): Melbourne, an end-to-end system that scored best on all categories in the automatic evaluation, and UPF-FORGe (Mille et al., 2017), a classic grammar-based NLG system that scored best in the human evaluation.",
"Additionally, we developed an end-to-end neural baseline which outperforms the WebNLG neural systems.",
"It uses a set encoder, an LSTM (Hochreiter and Schmidhuber, 1997) decoder with attention (Bahdanau et al., 2014), a copy-attention mechanism (Gulcehre et al., 2016) and a neural checklist model (Kiddon et al., 2016), as well as applying entity dropout.",
"The entity-dropout and checklist component are the key differentiators from previous systems.",
"We refer to this system as StrongNeural .",
"We begin by comparing our plan-based system ( BestPlan ) to the state-of-the-art using the common automatic metrics: BLEU (Papineni et al., 2002), Meteor (Banerjee and Lavie, 2005), ROUGEL (Lin, 2004) and CIDEr (Vedantam et al., 2015), using the nlg-eval 13 tool (Sharma et al., 2017) on the entire test set and on each part separately (seen and unseen).",
"In the original challenge, the best performing system in automatic metric was based on end-to-end NMT ( Melbourne ).",
"Both the StrongNeural and BestPlan systems outperform all the WebNLG participating systems on all automatic metrics (Table 1).",
"BestPlan is competitive with StrongNeural in all metrics, with small differences either way per metric.",
"14 12 Note that this only affects the training stage.",
"At test time, we do not require gold plans, and evaluate on all sentences.",
"14 At least part of the stronger results for StrongNeural can be attributed to its ability to generate referring expressions, which we currently do not support.",
"Next, we turn to manually evaluate our system's performance regarding faithfulness to the input on the one hand and fluency on the other.",
"We describe here the main points of the manual evaluation setup, with finer details in the appendix.",
"Faithfulness As explained in Section 3, the first benefit we expect of our plan-based architecture is to make the neural systems task simpler, helping it to remain faithful to the semantics expressed in the plan which in turn is guaranteed to be faithful to the original RDF input (by faithfulness, we mean expressing all facts in the graph and only facts from the graph: not dropping, repeating or hallucinating facts).",
"We conduct a manual evaluation over the seen portion of the WebNLG human evaluated test set (139 input sets).",
"We compare BestPlan and StrongNeural .",
"15 For each output text, we manually mark which relations are expressed in it, which are omitted, and which relations exist with the wrong lexicalization.",
"We also count the number of relations the system over generated, either repeating facts or inventing new facts.",
"16 Table 2 shows the results.",
"BestPlan reduces all error types compared to StrongNeural , by 85% , 56% and 90% respectively.",
"While on-par regarding automatic metrics, BestPlan substantially outperforms the new state-of-the-art end-to-end neural system in semantic faithfulness.",
"grammar-based system that is fully faithful by design.",
"16 This evaluation was conducted by the first author, on a set of shuffled examples from the BestPlan and StrongNeural systems, without knowing which outputs belongs to which system.",
"We further note that evaluating for faithfulness requires careful attention to detail (making it less suitable for crowd-workers), but has a precise task definition which does not involve subjective judgment, making it possible to annotate without annotator biases influencing the results.",
"We release our judgments for this stage together with the code.",
"StrongNeural (4b) and BestPlan (4c) on the last input in the seen test set (4b).",
"While both systems chose three sentences split and aggregated details about birth in one sentence and details about the occupation in another, StrongNeural also expressed the information in chronological order.",
"However, StrongNeural failed to generate facts 3 and 5. BestPlan made a lexicalization mistake in the third sentence by expressing October before the actual date, which is probably caused by faulty entity matching for one of the references, and (by design) did not generate any referring expression, which we leave for future work.",
"Fluency Next, we assess whether our systems succeed at maintaining the high-quality fluency of the neural systems.",
"We perform pairwise evaluation via Amazon Mechanical Turk wherein each task the worker is presented with an RDF set (both in a graph form, and textually), and two texts in random order, one from BestPlan , the other from a competing system.",
"We compare BestPlan against a strong end-to-end neural system ( StrongNeural ), a grammar-based system which StrongNeural Reference UPF-FORGe BestPlan -0.6% -5.4% +5.1% Table 3: MTurk average worker score for BestPlan compared to each system.",
"is the state-of-the-art in human evaluation ( UPF-FORGe ), and the human-supplied WebNLG references ( Reference ).",
"The workers were presented with three possible answers: BestPlan text is better (scored as 1), the other text is better (scored as -1), and both texts are equally fluent (scored as 0).",
"Table 3 shows the average worker score given to each pair divided by the number of texts compared.",
"BestPlan performed on-par with StrongNeural , and surpassed the previous state-of-the-art UPF-FORGe .",
"It, however, scored worse than the reference texts, which is expected given that it does not produce referring expressions.",
"Our approach manages to keep the same fluency level typical to end-to-end neural systems, thanks to the NMT realization component.",
"We test the extent to which the realizer generates texts that are consistent with the plans.",
"For several subsets of ranked plans (best plan, top 1% , and top 10% ) for the seen and unseen test sets separately, we realize up to 100 randomly selected text-plans per input.",
"We realize each sentence plan and evaluate using two criteria: (1) Do all entities from the plan appear in the realization; (2) Like the consis-Best Plan Top 1% Plans Top 10% Plans Entities Order Entities Order Entities Order Seen 98.9% 100% 95.9% 99.9% 93.6% 100% Unseen 66.7% 100% 45.3% 100% 41.3% 100% Table 4: Surface realizer performance.",
"the same order in the plan and the realization.",
"Table 4 indicates that for decreasingly probable plans our realizer does worse in the first criterion.",
"However, for both parts of the test set, if the realizer managed to express all of the entities, it expressed them in the requested order, meaning the outputs are consistent with plans.",
"This opens up a potential for user control and diverse outputs, by choosing different plans for realization.",
"Finally, we verify that the realization of potentially diverse plans is not only consistent with each given plan but also preserves output quality.",
"For each input, we realize a random plan from the top 10% .",
"We repeat this process three times with different random seeds to generate different outputs, and mark these systems as RandomPlan-1/2/3 .",
"Table 1 shows that these random plans maintain decent quality on the automatic metrics, with a limited performance drop, and the automatic score is stable across random seeds.",
"17 7 Related Work Text planning is a major component in classic NLG.",
"For example, Stent et al. (2004) shows a method of producing coherent sentence plans by exhaustively generating as many as 20 sentence plan trees for each document plan, manually tagging them, and learning to rank them using the RankBoost algorithm (Schapire, 1999).",
"Our planning approach is similar, but we only have a set of good reference plans without internal ranks.",
"While the sentence planning decides on the aggregation, one crucial decision left is sentence order.",
"We currently determine order based on a splitting heuristic which relies on the number of facts in every sentence, not on the content.",
"Lapata (2003) devised a probabilistic model for sentence ordering which correlated well with human ordering.",
"Our 17 While the scores for the different sets are very similar, the plans are very different from each other.",
"See for examples the plans in Figure 3. plan selection procedure is admittedly simple, and can be improved by integrating insights from previous text planning works (Barzilay and Lapata, 2006; Konstas and Lapata, 2012, 2013).",
"Many generation systems (Gardent et al., 2017; Dusek et al., 2018) are based on a black-box NMT component, with various pre-processing transformation of the inputs (such as delexicalization) and outputs to aid the generation process.",
"Generation from structured data often requires referring to a knowledge base (Mei et al., 2015; Kiddon et al., 2016; Wen et al., 2015).",
"This led to input-coverage tracking neural components such as the checklist model (Kiddon et al., 2016) and copy-mechanism (Gulcehre et al., 2016).",
"Such methods are effective for ensuring coverage and reducing the number of over-generated facts and are in some ways orthogonal to our approach.",
"While our explicit planning stage reduces the amount of over-generation, our realizer may be further improved by using a checklist model.",
"More complex tasks, like RotoWire (Wiseman et al., 2017) require modeling also document-level planning.",
"Puduppully et al. (2018) explored a method to explicitly model document planning using the attention mechanism.",
"The neural text generation community has also recently been interested in controllable text generation (Hu et al., 2017), where various aspects of the text (often sentiment) are manipulated (Ficler and Goldberg, 2017) or transferred (Shen et al., 2017; Zhao et al., 2017; Li et al., 2018).",
"In contrast, like in (Wiseman et al., 2018), here we focused on controlling either the content of a generation or the way it is expressed by manipulating the sentence plan used in realizing the generation.",
"We proposed adding an explicit symbolic planning component to a neural data-to-text NLG system, which eases the burden on the neural component concerning text structuring and fact tracking.",
"Consequently, while the plan-based system performs on par with a strong end-to-end neural system regarding automatic evaluation metrics and human fluency evaluation, it substantially outperforms the end-to-end system regarding faithfulness to the input.",
"Additionally, the planning stage allows explicit user-control and generating diverse sentences, to be pursued in future work."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain"
] |
[
"Robustness of machine learning models on ever-changing real-world data is critical, especially for applications affecting human wellbeing such as content moderation.",
"New kinds of abusive language continually emerge in online discussions in response to current events (e.g., COVID-19), and the deployed abuse detection systems should be updated regularly to remain accurate.",
"In this paper, we show that general abusive language classifiers tend to be fairly reliable in detecting out-of-domain explicitly abusive utterances but fail to detect new types of more subtle, implicit abuse.",
"Next, we propose an interpretability technique, based on the Testing Concept Activation Vector (TCAV) method from computer vision, to quantify the sensitivity of a trained model to the human-defined concepts of explicit and implicit abusive language, and use that to explain the generalizability of the model on new data, in this case, COVID-related anti-Asian hate speech.",
"Extending this technique, we introduce a novel metric, Degree of Explicitness , for a single instance and show that the new metric is beneficial in suggesting out-of-domain unlabeled examples to effectively enrich the training data with informative, implicitly abusive texts.",
"When machine learning models are deployed in the real world, they must be constantly monitored for their robustness to new and changing input data.",
"One area where this is particularly important is in abusive language detection (Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018; Nakov et al., 2021; Vidgen and Derczynski, 2020).",
"The content of online conversation is constantly changing in response to political and social events.",
"New categories of abusive language emerge, encompassing topics and vocabularies unknown to previously trained classifiers.",
"Here, we tackle three main questions: How can a human user formalize new, relevant topics or concepts in text?",
"How do we quantify the sensitivity of a trained classifier to these new concepts as they emerge?",
"And how do we update the classifier so that it remains reliable?",
"As a case study, we consider the rise of COVID-related anti-Asian racism on social media.",
"The COVID-19 pandemic represented an entirely new and unexpected situation, generating new vocabulary ( COVID-19 , coronavirus , social distancing , masking ), new topics of conversation (dealing with isolation, working from home), and unfortunately new and renewed instances of hate speech directed towards Asian communities.",
"We imagine the case of an abusive language detection algorithm which had been deployed prior to the pandemic: what are the new types of abusive language that have emerged with the recent pandemic?",
"To what extent can deployed classifiers generalize to this new data, and how can they be adapted?",
"Although social events can spark off a specific type of hate speech, they are rarely the root cause of the issue.",
"Often such hateful beliefs existed before the event, and are only magnified because of it (Chou and Fea-gin, 2015).",
"Therefore, we expect that the classifier should detect this new variety of hate speech to some extent.",
"An important factor in this study is whether the text expresses explicit or implicit abuse (Waseem et al., 2017; Caselli et al., 2020; Wiegand et al., 2021).",
"Explicit abuse refers to utterances that include direct insults or strong rudeness, often involving profanities, whereas implicit abuse involves more indirect and nuanced language.",
"Since understanding the offensive aspects of implicit abuse in our case study may require some knowledge of the context (i.e., the pandemic), we expect that the pretrained classifier will find these data especially difficult to handle.",
"To examine a classifier's ability to handle new type of abusive text (without access to extensive labeled data), we propose a technique based on the Testing Concept Activation Vector (TCAV) method 5517 from the interpretability literature in computer vision (Kim et al., 2018).",
"TCAV is used to explain whether a classifier associates a specific concept to a class label (e.g., the concept of stripes is associated with class zebra in image classification).",
"Similarly, we define implicit and explicit COVID-related anti-Asian racism with a small set of human-chosen textual examples, and ask whether the pretrained classifier associates these concepts with the positive (abusive) class label.",
"Further, we ask whether sensitivity to human-defined concepts can direct data augmentation 1 to improve generalizations.",
"Intuitively, when updating a classifier, data enrichment should focus on adding examples of concepts to which the classifier is not yet sensitive.",
"Conventional active learning frameworks suggest examples with the lowest classification confidence as the most informative augmentation samples (Zhu et al., 2008; Chen et al., 2019).",
"However, deep neural networks' inability to provide reliable uncertainty estimates is one of the main barriers to adopting confidence-based sampling techniques (Schrder and Niekler, 2020).",
"We suggest that, in the case of abuse detection, implicitly abusive examples are most informative for updating a general classifier.",
"However, to the best of our knowledge, there is no quantitative metric that can measure the degree of explicitness of a candidate example, given a trained classifier.",
"We extend the TCAV technique to provide a degree of explicitness measure at the utterance level and use that for efficient data augmentation.",
"The contributions of this work are as follows: We implement a variation of the TCAV framework for a RoBERTa-based classifier and show that it can be used to quantify the sensitivity of a trained classifier to a human-understandable concept, defined through examples, without access to the training dataset of the classifier or a large annotated dataset for the new category.",
"We analyse the performance of two abusive language classifiers and observe that they generalize well to explicit COVID-related anti-Asian racism, but are unable to generalize to implicit racism of this type.",
"We show that sensitivities to the concepts of implicit and explicit abuse can explain the observed discrepancies.",
"We adjust the TCAV method to compute the degree of explicitness , for an unlabeled instance, as a metric to guide data augmentation when updating a general abusive language classifier to include a new kind of abuse.",
"We test this method against confidence-based augmentation and show that it is able to reach higher accuracy with fewer training examples, while maintaining the accuracy on the original data.",
"The implementation code and data for the experiments are available at https://github.com/ IsarNejad/TCAV-for-Text-Classifiers .",
"We consider the following four English datasets, summarized in Table 1: Founta 2 and Wiki 3 are large, commonly-used datasets for general abusive language detection, while EA and CH specifically target COVID-related anti-Asian racism.",
"We bi-narize all datasets to two classes: positive (i.e., abusive or hateful) and negative.",
"For Founta , this means combining Abusive and Hateful texts into a single positive class; for EA , Hostility against an East-Asian entity is considered positive, and all other classes are negative; and for CH , all hate speech is classed as positive, while counter-hate and hate-neutral texts are classed as negative.",
"Central to our research question is the issue of vocabulary change as a new abusive topic emerges.",
"As the Wiki and Founta datasets were collected before the COVID-19 pandemic, they do not contain novel vocabulary such as chinavirus or wuhan-flu, and the contexts and frequencies for words like China and pandemic may have changed.",
"As a demonstration of the differences in vocabulary across the different datasets, we compute the top 100 most frequent words in the positive class of each dataset (after removing stop words 4 ), and then calculate the overlap between each pair of datasets.",
"We categorize the shared words into three categories: 1) generically profane and hateful words, 2) COVID-related words, and 3) all other words.",
"use the train-dev-test split as provided by Zhou et al. (2021).",
"3 We used a smaller version of the Wiki dataset as provided in Nejadgholi and Kiritchenko (2020).",
"In that work, we removed 54% of Wikipedia-specific non-toxic instances from the training set to mitigate the topic bias, and reported improvements in both the classification performance and the execution time.",
"Here, we found similar benefits.",
"Table 2 shows the three categories of shared words among the 100 most frequent words of the positive classes in our datasets.",
"This analysis reveals that the two COVID-related datasets share more words in common: 50 out of the 100 most frequent words are common between the two datasets.",
"As expected, a large portion of their shared vocabulary (32%) is specific to the pandemic, has been used more frequently during the pandemic or has found new connotations because of the pandemic.",
"For all other datasets, fewer words are shared, and the shared words are either related to profanity and violence or are merely commonly used terms.",
"Profanity and strongly negative words such as hate make up 30% of the shared vocabulary between the Wiki and Founta datasets.",
"Interestingly, CH has a set of profane words in common with both Wiki and Founta ( 25% of shared words), while the words shared between EA and the general datasets are simply common words in the English language, such as people, want, and need.",
"We expect that this vocabulary shift between the different datasets will have a considerable impact on the generalizability.",
"Another important factor in our study is generalization with respect to explicit and implicit types of abusive language.",
"Above, we observed that CH shares many profane words with the general datasets and, therefore, we anticipate it contains more explicitly abusive texts than EA does.",
"Unfortunately, neither of the datasets has originally been annotated for explicitness of abuse .",
"We manually annotate instances from the positive class in the CH dataset and the EA dev set using the following rule: instances that include profanity, insult or rudeness that could be correctly identified as abusive without general knowledge about the COVID-19 pandemic are labeled as explicitly abusive; the remaining instances (e.g., it is not covid 19 but wuhanvirus' ) are labeled as implicitly abusive.",
"We find that 85% of the CH-positive class is categorized as explicit, whereas only 8% of the EA-positive class in the EA dev set is labeled as explicit.",
"Thus, CH and EA share COVID-related vocabulary, but are very different in terms of explicitness of abuse ( CH containing mostly explicit abuse while EA containing mostly implicit abuse), which makes them suitable test beds for assessing the generalizability of classifiers to a new type of abusive language and the impact of new vocabulary on the classification of implicit and explicit abuse.",
"We start by assessing the robustness of a general-purpose abusive language classifier on a new domain of abusive language.",
"Specifically, we analyze the performance of classifiers trained on the Wiki and Founta datasets (expected to detect general toxicity and abuse) on COVID-related anti-Asian racism data.",
"In addition, we want to assess the impact of the change of vocabulary on the generalizibility of the classifiers to implicit and explicit abuse in the new domain.",
"We train binary RoBERTa-based classifiers on the Wiki , Founta , EA and CH datasets (referred to hereafter as the Wiki , Founta , EA and CH classifiers), and test them on the EA as the mostly implicit COVID-related dataset and CH as the mostly explicit COVID-5519 Datasets Count Shared Words EA CH 50 COVID-related (32%): ccp, 19, communist, pandemic, coronavirus, covid19, chinesevirus, infected, covid, chinese, chinavirus, corona, wuhanvirus, wuhan, china, virus Hateful (0%) Other (68%): racist, came, want, country, calling, come, does, spread, like, amp, media, eating, did, human, world, know, government, say, started, think, need, blame, evil, time, people, don, new, let, news, stop, countries, just, spreading, make Wiki Founta 37 COVID-related (0%) Hateful (30%): *ss, b*tch, id*ot, n*ggas, d*ck, f*cking, f*ck, sh*t, hell, hate, stupid Other (70%): oh, dont, want, way, going, come, does, like, look, life, did eat, sex, know, say, think, man, need, time, people, said, stop, really, just, make, tell Founta EA 19 COVID-related (0%) Hateful (0%) Other (100%): racist, want, calling, come, does, like, did, world, know, say, think, need, time, people, trying, let, stop, just, make Wiki EA 15 COVID-related (0%) Hateful (0%) Other (100%): people, want, did, say, think, good, need, come, does, stop, just, know, like, make, time Founta CH 35 COVID-related (0%) Hateful (23%): *ss, b*tch, f*cking, f*ck, sh*t, hate, stupid, f*cked Other (77%): racist, want, way, going, calling, come, does, like, got, look, did, eat, world, know, say, think, man, trump, need, time, people, said, let, stop, really, just, make Wiki CH 33 COVID-related (0%) Hateful (27%): *ss, b*tch, f*cking, f*ck, sh*t, hate, stupid, shut, kill Other (73%): want, way, going, come, does, like, look, did, eat, right, know, die, say, think, man, need, time, people, don, said, stop, really, just, make Table 2: Shared words among 100 most frequent words of the positive classes in the datasets.",
"related dataset.",
"(The training details are provided in Appendix A.)",
"Note that CH is too small to be broken into train/test/dev sets, so it is used either as a training dataset when testing on EA or a test dataset for all other classifiers.",
"Here, while the classifier makes a binary positive/negative decision, we are really assessing its ability to generalize to the new task of identifying anti-Asian hate.",
"For comparison, we also train an explicit general abuse classifier with only explicit examples of the Wiki dataset and the class balance similar to the original Wiki dataset.",
"This classifier is referred to as Wiki-exp .",
"5 Table 3 presents the Area Under the ROC Curve (AUC) and F1-scores for all the classifiers; precision, recall, and average precision scores are provided in Appendix B. We first consider whether class imbalances can explain our results.",
"Note that while abusive language is a relatively rare phenomenon in online communications, most abusive language datasets are collected through boosted sampling and therefore are not subject to extreme class imbalances.",
"The percentage of positive instances in our datasets ranges from 9% to 43% 5 For Wiki-exp , the examples of the positive class are taken from the explicit abuse' topic, which contains texts with explicitly toxic words, from (Nejadgholi and Kiritchenko, 2020), and negative examples are randomly sampled from the Wiki-Normal class.",
"(Table 1).",
"We observe similar performances for the Wiki and Founta classifiers despite different class ratios in their training sets, and different performances for Wiki and EA classifiers despite their similar training class ratios.",
"We also observe better performance from the CH classifier (on the EA test set), compared to the Wiki or Founta classifiers, despite the very small size of the CH dataset.",
"Based on previous research, we argue that cross-dataset generalization in abusive language detection is often governed by the compatibility of the definitions and sampling strategies of training and test labels rather than class sizes (Yin and Zubiaga, 2021).",
"Instead, we explain the results presented in Table 3 in terms of implicit/explicit types of abuse and the change of vocabulary.",
"than all the classifiers trained on the pre-COVID datasets ( Wiki and Founta ).",
"Interestingly, the performance of the CH classifier on the EA dataset is higher than the performance of all the general classifiers, despite the CH dataset being very small and containing mostly explicit abuse.",
"This observation confirms that general classifiers need to be updated to learn the new vocabulary.",
"General-purpose classifiers generalize better to explicit than implicit examples in the new domain.",
"The Wiki and Founta classifiers, which have been exposed to large amounts of generally explicit abuse, perform well on the mostly explicit CH dataset, but experience difficulty with the COVID-specific implicit abuse in the EA dataset.",
"For example, the tweet the chinavirus is a biological attack initiated by china' is misclassified as non-abusive.",
"We observe that Wiki-exp performs relatively similar to the Wiki classifier on CH , despite its small size (only 1,294 positive examples) but is worse than Wiki classifier on EA .",
"This means that the additional 35K instances (of which, 9K are positive examples) of the Wiki compared to the Wiki-exp , only moderately improve the classification of the implicit examples in the new domain.",
"This observation indicates that generalization mostly occurs between the explicit type of the pre-COVID abuse and the explicit type of the COVID-related abuse.",
"Therefore, a general-purpose classifier should be specifically updated to learn implicit abuse in the new domain.",
"In Section 3, we showed that when a new domain emerges, the change in vocabulary mostly affects the classification of implicitly expressed abuse.",
"This observation is in line with findings by Fortuna et al. (2021), and suggests that generalization should be evaluated on implicit and explicit abuse separately.",
"However, due to complexities of annotation of abusive content, curating separate implicit and explicit test sets is too costly (Wiegand et al., 2021).",
"Instead, we propose to adapt the Testing Concept Activation Vector (TCAV) algorithm, originally developed for image classification (Kim et al., 2018), to calculate the classifiers' sensitivity to explicit and implicit COVID-related racism, using only a small set of examples.",
"Then, we show how these sensitivities can explain the generalizations observed in Table",
"3. 4.1 TCAV background and implementation TCAV is a post-training interpretability method to measure how important a user-chosen concept is for a prediction, even if the concept was not directly used as a feature during the training.",
"The concept is defined with a set of concept examples .",
"To illustrate, Kim et al. (2018) suggest stripes as a visual concept relevant to the class zebra, and then operationally define the stripes concept by collecting examples of images containing stripes.",
"In our language-based TCAV method, a concept is defined by a set of manually chosen textual examples.",
"We collect examples from held-out subsets or other available data sources and manually annotate them for the concept of interest (for example, explicit anti-Asian abuse).",
"Then, we represent the concept by averaging the representations of the examples that convey that concept, similarly to how the stripes concept is represented by several images that include stripes.",
"Here, we consider concepts such as COVID-19, hate speech, and anti-Asian abuse, but the approach generalizes to any concept that can be defined through a set of example texts.",
"Using these examples, a Concept Activation Vector (CAV) is learned to represent the concept in the activation space of the classifier.",
"Then, directional derivatives are used to calculate the sensitivity of predictions to changes in inputs towards the direction of the concept, at the neural activation layer.",
"We adapt the TCAV procedure for a binary RoBERTa-based classifier to measure the importance of a concept to the positive class.",
"For any input text, x R k n , with k words in the n dimensional input space, we consider the RoBERTa encoder of the classifier as f emb : R k n R m , which maps the input text to its RoBERTa representation (the representation for [CLS] token), r R m .",
"For each concept, C , we collect NC concept examples, and map them to RoBERTa representations r jC , j = 1 , ..., NC .",
"To represent C in the activation space, we calculate P number of CAVs, pC , by averaging 6 the RoBERTa representations of N randomly chosen concept examples: 6 In the original TCAV algorithm, a linear classifier is trained to separate representations of concept examples and random examples.",
"Then, the vector orthogonal to the decision boundary of this classifier is used as the CAV.",
"We experimented with training a linear classifier and found that the choice of random utterances has a huge impact on the results to the point that the results are not reproducible.",
"More stable results are obtained when CAVs are produced by averaging the RoBERTa representations.",
"where h : R m R is the function that maps the RoBERTa representation to the logit value of the positive class.",
"In Equation 2, S C,p ( x ) measures the changes in class logit, if a small vector in the direction of C is added to the input example, in the RoBERTa-embedding space.",
"For a set of input examples X , we calculate the TCAV score as the fraction of inputs for which small changes in the direction of C increase the logit: T CAV C,p = | x X : S C,p ( x ) > 0 | | X | (3) A TCAV score close to one indicates that for the majority of input examples the logit value increases.",
"Equation 3 defines a distribution of scores for the concept C ; we compute the mean and standard deviation of this distribution to determine the overall sensitivity of the classifier to the concept C .",
"We define each concept C with NC = 100 manually chosen examples, and experiment with six concepts described in Table",
"4. To set a baseline, we start with a set of random examples to form a noncoherent concept.",
"Next, we define a non-hateful COVID-related concept using random tweets with COVID-related keywords covid, corona, covid-19, pandemic .",
"For the explicit anti-Asian abuse concept, we include all 14 explicitly abusive examples from the EA dev set and 86 explicitly abusive examples from CH class.",
"We define two implicit anti-Asian concepts with examples from EA and CH , to assess whether selecting the examples from two different datasets affects the sensitivities.",
"We also define the generic hate concept with examples of pre-COVID general hateful utterances, not directed at Asian people or entities, from the Founta dev set.",
"We calculate P = 1000 CAVs for each concept, where each CAV is the average of N = 5 randomly chosen concept examples.",
"We use 2000 Non-coherent concept: random tweets collected with stop words as queries COVID-19: tweets collected with words covid, corona, covid-19, pandemic as query words Explicit anti-Asian abuse: tweets labeled as explicit from EA dev and CH Implicit abuse (EA): tweets labeled as implicit from EA dev Implicit abuse (CH): tweets labeled as implicit from CH Generic hate: tweets from the Hateful class of Founta dev Table 4: Human-defined concepts and the sources of the tweets used as concept examples.",
"random tweets collected with stopwords as input examples X (see Equation 3).",
"7 Table 5 presents the means and standard deviations of the TCAV score distributions for the classifiers trained on Wiki , Founta , EA , and CH datasets, respectively.",
"First, we observe that all TCAV scores calculated for a random, non-coherent set of examples are zero; i.e., as expected, the TCAV scores do not indicate any association between a non-coherent concept and the positive class.",
"Also, as expected, none of the classifiers associate the non-hateful COVID-related concept to the positive class.",
"Note that a zero TCAV score can be due to the absence of that concept in the training data (e.g., the COVID concept for the Wiki and Founta classifiers), insignificance of the topic for predicting the positive label (e.g., the COVID concept for the EA classi-fier), or the lack of coherence among the concept examples (such as the concept defined by random examples).",
"A TCAV score close to 1, on the other hand, indicates the importance of a concept for positive prediction.",
"These observations set a solid baseline for interpreting the TCAV scores, calculated for other concepts.",
"Here we ask whether the generated TCAV scores can explain the generalization performances observed in Table",
"3. We consider a classifier to be sensitive to a concept if its average TCAV score is significantly different (according to the t-test with p < 0.001) from the average TCAV score of a non-coherent random concept.",
"First, we observe that the general classifiers are only sensitive to the explicit type of COVID-related abusive language.",
"This confirms that the classifiers generalize better to the explicit type of an emerging domain of abusive language.",
"7 Unlike the original TCAV algorithm, we do not restrict the input examples to the target class.",
"In our experiments, we observed that, for this binary classification set-up, the choice of input examples has little impact on the TCAV scores.",
"Intuitively, we assess whether adding the concept vector to a random input would increase the likelihood of it being assigned to the positive class.",
"We also note that Wiki-exp , is sensitive to the explicit anti-Asian concept.",
"Second, the classifier trained with mostly explicit COVID-related data ( CH ) is not sensitive to the implicit abuse concept.",
"8 The only classifier that is sensitive to the explicit and both implicit COVID-related abusive concepts is the EA classifier.",
"Classifiers trained on the COVID datasets are also not sensitive to the generic hate concept, which encompasses a much broader range of target groups.",
"Overall, these findings stress the importance of including implicitly abusive examples in the training data for better generalizability within and across domains.",
"Here, we suggest that implicit examples are more informative (less redundant) for updating a general classifier and provide a quantitative metric to guide the data augmentation process.",
"We extend the TCAV methodology to estimate the Degree of Explicitness or DoE of an utterance.",
"We showed that the average TCAV score of the positive class for the explicit concept is close to 1.",
"DoE is based on the idea that adding one example to an explicit concept will not affect its average TCAV score (i.e., it will still be close to 1), if the added example is explicitly abusive.",
"However, adding an implicit example presumably will change the direction of all CAVs and reduce the sensitivity of the classifier to this modified concept.",
"Here, we modify Equation 1 and calculate each CAV by averaging the RoBERTa representations of N 1 explicit concept examples, and the new utterance for which we want the degree of explicitness, x new , with representation r new .",
"Thus, pnew = 1 N ( N 1 (cid:88) j =1 r jC + r new ) , p = 1 ,",
"We then calculate the average TCAV score for each x new as its DoE score.",
"If the new utterance, x new , is explicitly abusive, pnew will represent an explicit concept, and the average TCAV score, i.e., mean ( T CAV C,p ) will remain close to 1.",
"However, the less explicit the new example is, the more pnew will diverge from representations of explicit abuse, and the average score will drop.",
"We use N = 3 in the following experiments.",
"DoE analysis on COVID-related abusive data: We validate the utility of DoE in terms of separating implicit and explicit abusive examples.",
"For the Wiki and Founta classifiers, we calculate the DoE score of the implicit and explicit examples from CH and the EA dev set (described in Section 3), excluding the examples used to define the Explicit anti-Asian abuse concept.",
"Given that low classification confidence could indicate that the model struggles to predict an example correctly, one might expect that implicit examples are classified with less classification confidence than explicit examples.",
"Figure 1 shows the comparison of DoE with classification confidence in distinguishing between implicit and explicit examples.",
"We observe that for both classifiers, the distribution of DoE scores of implicit examples is different from the distribution of DoE scores of explicit examples, but the distributions of their classification confidences are indistinguishable.",
"Therefore, we conclude that DoE is more effective at separating implicit abuse from explicit abuse than classification confidence.",
"We further analyze DoE scores for the positive and negative classes separately in Appendix C. 6 Data Augmentation with DoE score We now use the DoE score to direct data augmentation.",
"We consider a scenario where a general classifier should be re-trained with an augmented dataset to include emerging types of abusive language.",
"As we showed, general classifiers are already sensitive to explicit abuse.",
"Therefore, we hypothesize that implicit examples are more benefi-5523 Figure 1: Comparison of classification confidence and DoE score for distinguishing between implicit and explicit abusive utterances.",
"cial for updating the classifier.",
"Here, we describe a novel DoE-based augmentation approach and contrast it with the conventional process of choosing augmentation examples based on the classification confidence (Zhu et al., 2008; Chen et al., 2019).",
"We consider the general Wiki classifier.",
"Our goal is to find a small but sufficient portion of the EA train set to augment the original Wiki train set, so that the classifier is able to handle COVID-related anti-Asian hate speech.",
"We calculate the DoE and confidence scores for all the examples in the EA train set and add the N examples with the lowest scores to the original Wiki train set.",
"We vary N from 1K to 6K, with a 1K step.",
"After the augmentation data size reaches 6K, the classifier performance on the original Wiki test set drops substantially for both techniques.",
"Also, note that as the size of the augmentation dataset increases, the two methods converge to the same performance.",
"Figure 2 shows the F1-score of the classifiers updated using the DoE and confidence-based augmentation methods on the original test set ( Wiki ) and the new test set ( EA ) for different augmentation sizes.",
"(Precision and recall figures are provided in Appendix D.)",
"Since only EA is used for augmentation, we evaluate the classifiers on this dataset to find the optimum size for the augmented training set and only evaluate the best performing classifiers on CH .",
"We expect that an efficient augmentation should maintain the performance on Wiki and reach acceptable results on EA test set.",
"the confidence-based augmentation method for all augmentation sizes, except for N= 5K, where the performances of the two methods are comparable.",
"DoE is better at maintaining performance on the original dataset: DoE outperforms the confidence-based method on the Wiki dataset.",
"For all augmentation sizes, the performance of the DoE-augmented classifier on this class stays within 2% of the baseline (the F1-score of the classifier trained just on the Wiki data), whereas for the confidence-based augmentation, we observe up to 6% drop depending on the size of the added data.",
"DoE is better overall: Table 6 presents the best results achieved by the two augmentation methods on the EA test set: AUC score of 0.81 for the DoE-based augmentation obtained with 3K added examples, and AUC score of 0.69 for the confidence-based augmentation obtained with 4K added examples.",
"For comparison, we also show the baseline results for the original Wiki classifier and the classifier trained on the combined Wiki and full EA train sets.",
"Although we did not optimize the augmentation for the CH dataset, our evaluation shows that DoE performs favourably on this dataset, as well.",
"We conclude that the new DoE-based augmentation method maintains the classification performance on the original dataset, while outperforming the other method on the new data.",
"We also qualitatively assess the classifier's output before and after data augmentation with DoE.",
"While explicitly abusive utterances (e.g., f*ck you china and your chinese virus) are often correctly classified both before and after re-training, many implicitly abusive examples (e.g., it is not covid 19 but wuhanvirus) are handled correctly by the classifier only after re-training.",
"Generalizability has been an active research area in NLP (Ettinger et al., 2017; Hendrycks et al., 2020).",
"In a recent review, Yin and Zubiaga (2021) discussed the challenges for building generalizable 5524 F1-score AUC Method Aug. set EA CH Wiki EA CH Wiki DoE 3K EA 0.61 0.73 0.82 0.81 0.78 0.96 Conf.",
"hate speech detection systems and recommended possible future directions, including improving data quality and reducing overfitting through transfer learning.",
"Several studies evaluated generalizability in abuse detection through cross-dataset evaluation (Swamy et al., 2019; Wiegand et al., 2019), direct dataset analysis (Fortuna et al., 2020) or topic modeling on the training data (Nejadgholi and Kiritchenko, 2020).",
"Fortuna et al. (2021) showed that the lack of generalizability is rooted in the imbalances between implicit and explicit examples in training data.",
"The distinction between explicit and implicit abuse has been recognized as an important factor in abuse detection (Waseem et al., 2017; Caselli et al., 2020).",
"Wiegand et al. (2019) showed that lexicon-based sampling strategies fail to collect implicit abuse and most of the annotated datasets are overwhelmed with explicit examples.",
"Breitfeller et al. (2019) showed that inter-annotation agreement is low when labeling the implicit abuse utterances, as sometimes specific knowledge is required in order to understand the implicit statements.",
"For better detection of implicitly stated abuse, large annotated datasets with hierarchical annotations are needed (Sap et al., 2020), so that automatic detection systems can learn from a wide variety of such training examples.",
"Field and Tsvetkov (2020) proposed propensity matching and adversarial learning to force the model to focus on signs of implicit bias.",
"Wiegand et al. (2021) created a novel dataset for studying implicit abuse and presented a range of linguistic features for contrastive analysis of abusive content.",
"We define explicitness as obvious rudeness and hateful language regardless of the social context and introduce a quantitative measure of explicitness from a trained classifier's point of view.",
"Data augmentation has been used to improve the robustness of abuse detection classifiers.",
"To mitigate biases towards specific terms (e.g., identity terms), one strategy is to add benign examples containing the biased terms to the training data (Dixon et al., 2018; Badjatiya et al., 2019).",
"Other works combined multiple datasets to achieve better generalizations, using a set of probing instances (Han and Tsvetkov, 2020), multi-task training (Waseem et al., 2018), and domain adaptation (Karan and najder, 2018).",
"In contrast to these works, we take an interpretability-based approach and guide the data collection process by mapping the new data on the implicit vs. explicit spectrum.",
"As real-world data evolves, we would like to be able to query a trained model to determine whether it generalizes to the new data, without the need for a large, annotated test set.",
"We adopted the TCAV algorithm to quantify the sensitivity of text classifiers to human-chosen concepts, defined with a small set of examples.",
"We used this technique to compare the generalizations of abusive language classifiers, trained with pre-pandemic data, to explicit and implicit COVID-related anti-Asian racism.",
"We then proposed a sensitivity-based data augmentation approach, to improve generalizability to emerging categories.",
"We showed that in the case of abuse detection, the most informative examples are implicitly abusive utterances from the new category.",
"Our approach collects implicit augmentation examples and achieves higher generalization to the new category compared to confidence-based sampling.",
"Strategies for choosing the optimal set of concept examples should be explored in the future.",
"While we examined abusive language detection as a case study, similar techniques can be applied to different NLP applications.",
"For example, the TCAV method could be used to measure the sensitivity of a sentiment analysis system to a new product, or a stance detection algorithm's sensitivity to an important new societal issue.",
"As language evolves, methods of monitoring and explaining classifier behaviour over time will be essential.",
"Content moderation is a critical application with potential of significant benefits, but also harms to human well-being.",
"Therefore, ethics-related issues in content moderation have been actively studied in NLP and other disciplines (Vidgen et al., 2019; Wiegand et al., 2019; Kiritchenko et al., 2021; Vidgen and Derczynski, 2020).",
"These include sampling and annotation biases in data collection, al-5525 gorithmic bias amplification, user privacy, system safety and security, and human control of technology, among others.",
"Our work aims to address the aspects of system safety and fairness by adapting the model to newly emerged or not previously covered types of online abuse, often directed against marginalized communities.",
"We employ existing datasets (with all their limitations) and use them only for illustration purposes and preliminary evaluation of the proposed methodology.",
"When deploying the technology care should be taken to adequately address other ethics-related issues."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"result",
"result",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"objective",
"method",
"method",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"We present a thorough comparison of two principal approaches to Cross-Lingual Information Retrieval: document translation (DT) and query translation (QT).",
"Our experiments are conducted using the cross-lingual test collection produced within the CLEF eHealth information retrieval tasks in 20132015 containing English documents and queries in several European languages.",
"We exploit the Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) paradigms and train several domain-specific and task-specific machine translation systems to translate the non-English queries into English (for the QT approach) and the English documents to all the query languages (for the DT approach).",
"The results show that the quality of QT by SMT is sufficient enough to outperform the retrieval results of the DT approach for all the languages.",
"NMT then further boosts translation quality and retrieval quality for both QT and DT for most languages, but still, QT provides generally better retrieval results than DT.",
"Multilingual content has been growing significantly in the last few years simultaneously with rapid internet access growth all over the world.",
"Monolingual information retrieval task allows users to find information in documents that are written in the language that they use to write their queries.",
"This ignores a vast amount of information that is represented in other languages.",
"Cross-Lingual Information Retrieval (CLIR) breaks this language barrier by allowing users to look up information that is represented in documents written in languages different from the language of the query.",
"We reinvestigate the effectiveness of two principal approaches to CLIR: document translation (DT) and query translation (QT).",
"The existing comparison studies of the two approaches are outdated (e.g. Oard, 1998) and do not reflect the current advances in Machine Translation (MT).",
"Even in very recent works, the authors have blindly assumed that DT is superior to QT (Khiroun et al., 2018), giving the argument that in DT, the text is translated in a larger context compared to the translation of short isolated queries in QT.",
"The larger context should help in translation disambiguation and better lexical selection during translation, which should subsequently lead to better retrieval results.",
"This hypothesis needs to be revised, taking into consideration the significant improvement of machine translation quality in recent years, despite the strong practical disadvantages of DT over QT: DT is computationally expensive and hard to scale (every document needs to be translated into each supported language and then indexed) while QT is performed in query time and only a short text (the query) is translated into the document language.",
"In this work, state-of-the-art Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) systems are deployed for document translation and query translation to investigate their effect on retrieval quality in the cross-lingual setting.",
"The experiments are conducted using the cross-lingual test collection produced within the CLEF eHealth tasks on patient-centered information retrieval in 20132015 extended with additional relevance assessments and manual query translations (Saleh and Pecina, 2019).",
"Though this is a very specific domain and the results cannot be thoughtlessly generalized to other domains, the choice of this test collection was motivated by two facts: First, it provides resources for large-scale experimentation (1 million in-domain documents, 166 queries in 8 languages, thorough relevance assessment).",
"Second, the medical domain in MT has been well studied (Jimeno Yepes et al., 2017; Dusek et al., 2014), and there are enough resources to develop well-performing MT systems for multiple languages.",
"In CLIR, documents and queries are written in different languages.",
"The traditional term-matching retrieval methods require both documents and queries to be represented in the same language.",
"In practice, either the queries need to be translated into the document language (QT), or the documents need to be translated into the query language (DT).",
"Not many studies and experiments have been conducted in order to compare these two approaches.",
"Oard (1998) investigated the performance of DT, QT, and a hybrid system combining both.",
"They found that the system translating English queries into German (the document language) outperformed the system translating the documents from German into English (the query language).",
"They hypothesized that documents, which are typically longer than queries, provide more contextual and linguistic information that helps reduce translation ambiguity and thus improves translation quality.",
"McCarley (1999) presented a hybrid DT/QT system, which averaged the retrieved document scores from DT and QT systems and thus outperformed both of them.",
"Fujii and Ishikawa (2000) employed a two-step method where QT was first used to retrieve a limited number of documents that were translated into the query language and reranked by their DT retrieval scores.",
"Pirkola (1998) presented a new method for CLIR, which was referred to as structured queries .",
"The idea was that a document containing one possible translation candidate of a query term is more relevant than a document that contains multiple translations of that term.",
"This probabilistic structured queries approach was also applied to Cross-Language Speech Retrieval (Nair et al., 2020).",
"Darwish and Oard (2003) also exploited alternative translations of query terms.",
"Their experiments showed that combining multiple translations outperformed the selection of one best translation.",
"Nikoulina et al. (2012) investigated reranking SMT translation hypotheses towards better CLIR performance and showed that SMT systems are usually trained to give the best results in terms of translation accuracy, adequacy, and fluency.",
"However, an improvement will be achieved when they are optimized towards retrieval quality.",
"We followed this approach in our previous work and introduced a richer set of features and adopted the hypothesis reranker for multiple languages in the medical domain (Saleh and Pecina, 2016b,a).",
"Several recent papers employed methods based on Deep Learning.",
"Litschko et al. (2018) presented an unsupervised CLIR approach employing shared cross-lingual word embedding model, which was trained using monolingual data only.",
"They used those embeddings to translate query terms word by word into the document language.",
"Ruckle et al. (2019) trained NMT model for CLIR using out-domain data and synthetic data (created by translating in-domain monolingual English into German) to retrieve answers to German questions from English collection in the technical domain (AskUbuntu and StackOverflow).",
"CLIR in the medical domain has been investigated within the series of CLEF ShARe/eHealth labs since 2013 which focused on improving access of laypeople (non-medical experts) to reliable medical information (Goeuriot et al., 2013, 2014; Palotti et al., 2015; Kelly et al., 2016; Palotti et al., 2017; Jimmy et al., 2018; Kelly et al., 2019).",
"In this paper, we compare the performance of both QT and DT using the traditional SMT and state-of-the-art NMT methods trained on the same data to make the comparison as fair as possible.",
"We present a novel approach for NMT model selection that is optimized towards CLIR performance and investigate the effect of morphological preand post-processing on the performance on CLIR.",
"Two types of data were used in our experiments: The data for training, tuning, and testing MT (Sec-tion 3.1) and the CLIR test collection (Section 3.2).",
"Parallel data is essential for training both SMT and NMT systems.",
"We exploited the UFAL Medical Corpus 1 which was assembled during the course of several EU projects aiming at more reliable machine translation of medical texts and used for the purposes of WMT Biomedical Translation Task (Bojar et al., 2014).",
"It mainly includes the EMEA corpus by Tiedemann (2009), UMLS metathesaurus (Humphreys et al., 1998), titles from Wikipedia articles in the medical categories mapped to other languages using Wikipedia Interlingual links, medical domain patent applications (Waschle and Riezler, 2012; Pouliquen and Mazenc, 2011), and various web-crawled data.",
"Monolingual data is used to build a language model during the development of SMT systems.",
"The language model helps select a candidate translation that is as coherent and fluent as possible in the target language (which is certainly important for document translation, but less important for query translation).",
"Our procedure of data selection (both parallel and monolingual data) follows the work of Pecina et al. (2014), where two language models are trained on in-domain and general-domain data respectively, then each sentence from the corpus is scored by its cross-perplexity between the two models.",
"Finally, the top 10 million scored sentences are chosen.",
"In NMT training, the monolingual data is used to enlarge the parallel data training data by back-translation, where target language monolingual data is machine translated to the source language and added to parallel data for training.",
"The monolingual data used in our experiments includes multiple resources such as the CLEF eHealth 2014 English document collection (Goeuriot et al., 2014), Genia corpus (Ohta et al., 2002), and medical Wikipedia articles in English.",
"MT development and test data: used for tuning and evaluating our MT systems consists of the Khresmoi Summary Translation Test Data 2 used by the DT models and Khresmoi Query Translation Test Data 2.0 3 used by the QT models.",
"Both were developed within the Khresmoi project 4 and later extended within the KConnect 5 and HimL 6 projects.",
"The summary test data includes sentences (1,000 for testing and 500 for development) from summaries of English medical articles manually translated from English to all relevant languages.",
"The query test data includes English queries (1,000 for testing and 500 for tuning) sampled from a query log of a medical search engine and manually translated to the same set of languages.",
"For CLIR experiments, we use the CLIR test collection 7 .",
"that we developed in our previous work (Saleh and Pecina, 2019).",
"It is based on the data used within the CLEF eHealth lab IR tasks in 2013 2015 (Suominen et al., 2013; Goeuriot et al., 2014; 2 http://hdl.handle.net/11234/1-2122 3 http://hdl.handle.net/11234/1-2121 4 http://khresmoi.eu/ 5 http://www.kconnect.eu/ 6 http://www.himl.eu/ 7 http://hdl.handle.net/11234/1-2925 Palotti et al., 2015).",
"It contains about 1 .",
"1 million web pages that were crawled automatically from various trusted medical websites (Goeuriot et al., 2015).",
"There are 166 queries in total ( 100 for training and 66 for testing) originally formulated in English (to mimic real patient queries) and then manually translated by medical experts into seven European languages (Czech, French, German, Spanish, Swedish, Polish, and Hungarian).",
"The relevance judgments consist of the official relevance assessments provided by the task organizers and additional assessments, as described in (Saleh and Pecina, 2019).",
"We clean the document collection by removing HTML tags and other scripts in the documents.",
"All the lemmatization experiments in our work are done using UDPipe (Straka and Strakova, 2017), while for stemming, we use the Snowball algorithm (Moral et al., 2014).",
"The document collection is indexed using Terrier (Ounis et al., 2005), an open-source tool for information retrieval experiments.",
"For retrieval, we use Terrier's implementation of the language model with Bayesian smoothing and Dirichlet prior (Smucker and Allan, 2005) with the default value of the smoothing parameter.",
"In this section, we provide details on training the SMT and NMT systems used in the CLIR experiments.",
"The SMT systems fully replicate the work by Dusek et al. (2014); we only provide the most important information.",
"The NMT systems are described in full detail.",
"The SMT systems are based on the phrase-based SMT paradigm implemented in Moses (Koehn et al., 2007).",
"The system for the QT experiments was developed within the Khresmoi project (Dusek et al., 2014).",
"The system was tuned to translate medical search queries (using the Khresmoi Query development set) and optimized towards PER (Position-independent word Error Rate, Till-mann et al., 1997) instead of the traditionally preferred BLEU (Papineni et al., 2002) as this was shown to be more effective for tuning SMT parameters for translating search queries (Pecina et al., 2014).",
"The system is denoted as QT-SMT-form .",
"For the DT experiments, we train two SMT systems: DT-SMT-form , which is a replication of the SMT system that translates standard sentences by Dusek et al. (2014), and our own system DT-SMT-pre-lem that translates English sentences into lemmatized sentences in the target language.",
"This is done by lemmatizing the monolingual data and the target side of the parallel data prior to training.",
"In both the systems, we use fast align (Dyer et al., 2013) to train word alignment model on the lowercased word forms between English and the target language, then we replace the word forms in the target language with word lemmas.",
"Moses (with its default settings) is used to train a phrase-table model using the tokenized and lowercased English word forms, and the tokenized and lemmatized data in the target language plus a 5-gram language model.",
"Minimum Error Rate Training (MERT, Och, 2003) is used to tune the model parameter weights using the development data sets.",
"We also experiment with another system ( DT-SMT-post-lem ), which produces lemmatized output but obtained as post-lemmatization of the output of the DT-SMT-form system, and a system ( DT-SMT-post-stem ) which produces stemmed output obtained by the Snowball stemmer applied again to the output of DT-SMT-form .",
"This is to allow better comparison of the DT and QT approaches.",
"Translating documents into a morphologically richer language enlarge the vocabulary (term diversity) and thus make retrieval more difficult.",
"The three systems produce morphologically reduced translations of documents and thus make them comparable to the English ones (in terms of vocabulary size).",
"Neural Machine Translation (NMT) has become the state-of-the-art approach in MT and recently achieved superior results and lead to a significant improvement over the SMT systems (Jean et al., 2015).",
"We implement two types of NMT systems: one for query translation (denoted as QT-NMT-form ) and one for document translation (denoted as DT-NMT-form ).",
"Both produce standard (non-lemmatized) output.",
"The systems are based on the Marian (Junczys-Dowmunt et al., 2018) implementation of the Transformer (Vaswani et al., 2017) model with back-translation (Edunov et al., 2018).",
"SMT has an advantage over NMT in employing monolingual data in its language model.",
"This gap can be bridged Parallel Corpus (Authentic) CLEF eHealth Collection (EN) NMT Model Source -> EN NMT Model EN -> Target Initial Training Parallel Corpus (Target Side) NMT Model EN -> Target NMT Model Source -> EN Parallel Corpus (Synthetic) Parallel Corpus (Synthetic) NMT Model Source -> EN NMT Model EN -> Target Iterative Training Parallel Corpus (English Side) Figure 1: A schema of the iterative back-translation mechanism for NMT training.",
"by back-translation, a technique that exploits another MT model to translate monolingual data from the target language into the source language and adds this synthetic data to the original parallel data(Sennrich et al., 2016a).",
"This approach also helps for domain adaption of NMT when the monolingual data is taken from a specific domain.",
"We follow the back-translation approach in this work iteratively.",
"The NMT systems are trained using the same training data as the SMT systems.",
"However, in NMT, all data sets (monolingual and parallel) are encoded into Byte-Pair Encoding (BPE), which helps reduce the out-of-vocabulary problem in NMT by encoding rare words as sequences of subword units (Sennrich et al., 2016b).",
"We train the Transformer model using the same parameters as reported by Vaswani et al. (2017).",
"Figure 1 shows the architecture of the proposed iterative back-translation NMT model, inspired by the work of Hoang et al. (2018): for each language pair, we first train initial models for both directions, English to target, and source to English.",
"We use the authentic (non-synthetic) parallel data that is presented in Section 3.1 for training the initial models.",
"During training the Transformer models, multiple epochs (iterations through the entire training data) are needed.",
"It is known that too many training epochs can cause over-fitting of the model, and a few iterations might cause under-fitting (Popel and Bojar, 2018).",
"To avoid this, the early-stopping of the training is employed to terminate the process when the intermediate model satisfies some stopping criteria (training objective).",
"We stop training when there are three consecutive checkpoints without any improvement in the translation performance of the validation data.",
"Then, we use the initial model to translate monolingual text in the target language coming from two resources: MT parallel training corpus: the target side of the parallel training data (Section 3.1) is translated into English using the SRC EN NMT model to create the synthetic data for the models that are used in DT experiments.",
"The English side of the parallel corpus is translated using the EN TGT model for the QT experiment.",
"This is done to investigate the effect of the source of the monolingual data on the CLIR performance.",
"We randomly select 2 million sentences in each iteration.",
"CLIR test collection: we select randomly 2 million sentences from the test collection (Section 3.2) (after filtering sentences that are longer than 80 words), then we use EN TGT model to translate them into the target language.",
"This is done for models that are used for the query translation approach.",
"The motivation of choosing the collection is to make the model adapted to translate the medical queries into English (the document language).",
"After translating this monolingual data, we create the synthetic data by adding the monolingual data and their translations to the authentic parallel data.",
"Then we continue training of the models in both directions.",
"We conduct back-translation three times, and in each iteration, we use the updated models from the previous one.",
"We setup Marian to save the intermediate models (checkpoints) after every 5,000 iterations where each iteration is a batch sized of instances from the training data.",
"This is done instead of saving each epoch to avoid loosing effective intermediate models in between.",
"The model selection is based on evaluating each checkpoint by BLEU (Papineni et al., 2002) and PER (Tillmann et al., 1997) using the Khresmoi Summary development set (DT) and Khresmoi Query development set (QT).",
"Figure 2 shows the evaluation results of the intermediate models using the two MT metrics and how they correlate with P@10 (IR metric).",
"P@10 is calculated by query translation of the Czech training queries into English using the corresponding NMT model, and then conducting retrieval as we describe in Section 4.",
"Choosing the model that gives the best BLEU scores (iteration 400,000) does not cor-0 200000 400000 600000 800000 Iteration 0 10 20 30 40 50 P10 BLEU (1-PER) Figure 2: Performance comparison of the intermediate QT-NMT-form models at each checkpoint (after each 5,000 iterations) in terms of BLEU, 1-PER, and P@10 when employed in the Czech QT CLIR system.",
"relate with the best value for P@10, nor the best score for PER (500,000).",
"This is understandable because these metrics evaluate translation quality.",
"In order to select the best checkpoint that guarantees the advantages of both metrics (BLEU, which penalizes word order and PER which does not), we ensemble the two models together (best BLEU and best PER) during decoding by setting up the weights for both models equally.",
"Marian decoder supports model ensembling since they share the same vocabularies.",
"For the document translation experiments, we select the NMT models with the highest BLEU scores.",
"In this section, we present intrinsic evaluation of the MT systems.",
"We evaluate how well the systems translate sentence/queries given their reference translations in the test data.",
"We present both BLEU and PER scores (all as percentages).",
"The higher the BLEU score, the better the translation quality is.",
"BLEU is based on measuring the similarity of n-grams counts between a translation hypothesis and its reference translation(s), and as such is sensitive to word order.",
"PER, on the other hand, does not penalize word order between a translation hypothesis and its reference translation as BLEU does.",
"Instead, it considers both as a bag of words.",
"PER captures all words that appear in a translation hypothesis but do not exist in the reference.",
"These words are known as PER errors; thus, the higher the PER value, the lower the translation quality.",
"ENCS ENFR ENDE ENHU ENES ENSV ENPL MT System BLEU PER BLEU PER BLEU PER BLEU PER BLEU PER BLEU PER BLEU PER DT-SMT-form 19.0 51.1 37.8 68.3 18.7 53.4 10.5 41.6 25.7 63.2 33.6 64.6 11.5 41.3 DT-NMT-form 25.9 56.5 38.8 66.5 19.8 51.4 8.2 39.5 23.2 55.2 35.1 64.4 10.2 35.9 DT-SMT-post-lem 30.9 65.6 43.5 74.7 23.6 60.4 13.2 48.6 35.4 72.3 40.9 69.9 16.1 50.5 DT-SMT-pre-lem 28.7 64.2 41.2 72.6 13.0 48.0 14.3 51.9 28.4 65.7 39.1 70.0 12.5 46.9 Table 1: Intrinsic evaluation of MT systems for document translation using the Khresmoi Summary Test set.",
"CSEN FREN DEEN HUEN ESEN SVEN PLEN MT System BLEU PER BLEU PER BLEU PER BLEU PER BLEU PER BLEU PER BLEU PER QT-SMT-form 36.4 70.2 38.7 75.9 37.0 65.2 39.7 67.3 31.2 73.7 39.2 62.7 26.0 58.6 QT-NMT-form 22.5 48.9 30.6 65.4 28.7 58.1 36.7 63.2 17.8 45.5 40.9 63.0 18.7 47.9 Table 2: Intrinsic evaluation of MT systems for query translation using the Khresmoi Query Test set.",
"The MT evaluation scores cannot be directly compared across language pairs, and for the *-form and *-lem systems (since the test sets differ), but they indicate to what extent the translated queries differ from the reference translations, which in term-matching IR is important.",
"Also, the results of the two systems producing lemmas instead of the word forms are indicative only.",
"They cannot be directly compared to those producing word forms.",
"Table 1 displays the (intrinsic) evaluation of the MT systems for document translation using the Khresmoi Summary test set (in terms of BLEU and PER).",
"The results are not very consistent: For six out of the seven translation directions, DT-NMT-form outperforms DT-SMT-form in terms of PER.",
"In terms of BLEU, DT-NMT-form wins for four language pairs.",
"The effect of lemmatization on the scores is not surprising.",
"Naturally, lemmatization reduces the vocabulary size in the target language; thus, the BLEU scores are higher for the systems which employ lemmatization in either way.",
"However, post-lemmatization is constantly better (with the exception of Hungarian, which is a very specific language, and its scores are generally much lower than for other languages).",
"In terms of PER, the situation is different, and despite the fact that lemmatization reduces the target language, the systems without lemmatization often achieve better scores (except in German and Spanish).",
"Table 2 presents the (intrinsic) evaluation of the MT systems for QT using the Khresmoi Query test set.",
"QT-SMT-form outperforms QT-NMT-form in terms of BLEU in all the languages except Swedish.",
"However, in terms of PER (which is preferred), QT-NMT-form is always better.",
"This can be partially explained because of the way we ensembled NMT models towards better CLIR performance.",
"The bold font indicates which of the two *-form systems is better (for each language pair and each measure).",
"Table 3 presents the results of the CLIR experiments altogether.",
"Motivated by the organization of the CLEF eHealth CLIR tasks, we adopt P @ 10 (the percentage of relevant documents among the top ten retrieved ones) as the main evaluation measure.",
"In all the experiments, all the top 10 ranked documents for each query are assessed for relevance.",
"We also report MAP (Mean Average Precision) as a secondary evaluation measure.",
"The *-SMT-form systems are treated as baselines.",
"The figures in bold denote results better than the baseline.",
"Those, which are statistically significantly better are in bold and also in italics.",
"The significance tests were performed using the paired Wilcoxon signed-rank test (Hull, 1993) with = 0 .",
"05 , and no correction was applied.",
"First, we conduct monolingual experiments using the English queries and the English document collection to set a reference (oracle) system for our CLIR task, that is why all the results of monolingual systems are the same for all the languages.",
"We report the following: Mono-form system uses the original English queries and the English collection (no morphological processing applied).",
"Mono-lem and Mono-stem report the results after performing lemmatization and stemming of the document collection and the English queries, respectively.",
"The purpose of these systems is to study the effect of the morphological processing of the English documents on retrieval performance.",
"The QT experiments are done using the SMT and NMT systems, both translating into word forms ( QT-SMT-form and QT-NMT-form ).",
"We want to stress here that the used MT systems for QT are different from the MT systems for DT, not only in the translation direction but also in the way that they were trained and tuned.",
"Details are presented in Section 5.1 and Section 5.2.",
"In DT experiments, we exploit several configura-tions of the MT systems.",
"DT-SMT-form translates the collection from English into the target language by the SMT system for document translation (no morphological processing applied).",
"DT-SMT-post-stem refers to the results obtained by stemming the output of the the DT-SMT-form system.",
"DT-SMT-post-lem lemmatizes the output of the DT-SMT-form , while DT-SMT-pre-lem lemmatizes the training data prior SMT training.",
"(i.e., the translated documents in this system are already lemma-tized).",
"To compare the performance of DT when employing the NMT model, we report DT-NMT-form , which uses the presented NMT models to translate the collection into all the languages.",
"In this work, we are mainly interested in comparing NMT vs. SMT employed in both the CLIR approaches (DT and QT), comparing the two approaches as such and analyzing the effect of morphological normalization in DT.",
"NMT versus SMT: For the QT approach, we can conclude that in terms of P @ 10, the NMT-based CLIR systems (using the QT-NMT-form MT systems) significantly outperform the SMT-based ones.",
"Moreover, QT-NMT-form in Czech outperforms not only all other QT systems but also outperforms the monolingual system, which means that the NMT translations are on average better than the reference ones.",
"This situation is illustrated in Table 4 which provides several examples of queries in which NMT not only provides translations which are better (in terms of P @ 10) than the ones provided by SMT but also better than the reference translations (for each translation, the P @ 10 score is in parentheses).",
"This can be explained by the fact that the NMT models in our work are adapted to translate medical content by employing the collection itself in the back-translation process.",
"This gives the model access to the collection vocabularies that are frequent in the retrieval collection, and in the relevant documents eventually.",
"To investigate this hypothesis.",
"We train another QT-NMT-form system (for CS EN only) using a different source of the back-translation data, namely the English side of the MT parallel text, which is also from the medical domain but different from the CLIR collection (the other settings of the system remain the same).",
"The performance of this system decreased (as expected) from 57.2 % to 54.2 % (statistically significant).",
"This shows that employing the document collection in back-Query: 2013.38 (Czech) SRC: IM a dedicny REF: mi and hereditary (0.0) SMT: mi and hereditary (0.0) NMT: hereditary myocardial infarction (10.0) Query: 2015.61 (French) SRC: hematomes sous les ongles REF: fingernail bruises (40.0) SMT: bruising under the nail (10.0) NMT: nail hematoma (60.0) Query: 2014.19 (Swedish) SRC: L aneurysm i halspulsader REF: l common carotid aneurysm (60.0) SMT: l aneurysm in halspulsader (0.0) NMT: carotid artery aneurysm (100.0) Query: 2015.61 (Spanish) SRC: hematomas en la una del dedo REF: fingernail bruises (40.0) SMT: bruising in toe nail (20.0) NMT: nail hematoma (60.0) Table 4: Comparison of query translations by two systems ( QT-SMT-form and QT-NMT-form ) and reference translations and their effect on retrieval quality.",
"translation indeed helps produce translations that are more adapted to the collection domain.",
"NMT also helps deal with out-of-vocabulary (OOV) words (i.e., words do not appear in the training data), which is a common problem in SMT.",
"For instance, the translations of Swedish queries produced by QT-SMT-form contain 40 untranslated terms.",
"However, in QT-NMT-form translations, due to BPE, there are no OOVs at all (all words get translated, though the correct translation is not guar-anteed).",
"Very likely, this has a positive effect on the CLIR performance too.",
"QT versus DT: The most surprising observation in this work is the predominance of QT over DT in our experiments.",
"In terms of P @ 10, for all the languages, QT-SMT-form provides significantly better translations than DT-SMT-form .",
"For German and Spanish, the systems based on the translation of documents into morphologically normalized forms (lemmas, stems) perform on par with the systems based on QT-SMT-form , but for the other languages, the baseline QT-SMT-form is the best performing SMT option.",
"The NMT models unsurprisingly boost translation quality for both QT and DT, but QT unexpectedly stays superior to DT, and the results get very close to the monolingual performance (and even higher for the Czech system, see above).",
"This can be explained by a simple hypothesis that a well-trained MT system based on the state-of-the-art techniques and sufficient amounts of training data is good enough to provide query translations of sufficient quality and does not require to see any larger context.",
"The translation quality may not be perfect, but still sufficient for retrieval.",
"For example, the Czech query clef2015.test.33 , which is bla infekce hltanu , is translated into English as white infection of pharynx . The reference translation for that query is white infection in pharynx . We can see that the CS EN SMT system fails in translating prepositions ( of instead of in ), but this does not affect the CLIR performance. However, we should keep in mind that our experiments are carried out in a very specific domain. This means that the queries are short, and often include symptoms and health conditions in which linguistics and contextual information may not play a significant role in solving the translation ambiguity . Morphological normalization: Producing document translations (lemmatized or stemmed) reduces collection vocabularies and improves term matching. However, in our experiments, none of the DT-SMT systems employing morphologically normalized translations of documents outperforms (in terms of P @ 10) the QT-SMT-form systems. An example of a query where morphological normalization improved retrieval is the Czech query clef2013.test.18 : aspira cn pneumonie a dysf agie hltanu ( aspiration pneumonia and pharyngeal dysphagia in English). The word hltanu , which means pharyngeal is lemmatized in the training data of the SMT system and the Czech query into hltan , which means pharynx . When translating the English documents into Czech, pharynx and pharyngeal are translated back into hltan . This helps retrieve more relevant documents, increasing P@10 to 0.9 in DT-SMT-pre-lem from 0.7 in the monolingual systems ( Mono , Mono-lem and Mono-stem ), 0.6 in QT-SMT-form and 0.0 in DT-SMT-form . In comparison of pre-lemmatization and-post lemmatization, there is no clear winner. In the intrinsic MT evaluation, DT-SMT-post-lem outperforms DT-SMT-pre-lem for most languages.But in the extrinsic CLIR evaluation, DT-SMT-pre-lem is better for four languages and worse for Hungarian, Swedish, and Polish. DT-SMT-pre-lem in Spanish is the only DT system that outperforms the QT system. No clear conclusion can be done regarding the DT-SMT-post-stem models. Finally, it is important to give insights about the cost-oriented comparison of the two approaches in terms of time complexity. The training time of our MT systems (both NMT and SMT) for both the approaches (QT and DT) is almost the same. The major difference was in the translation process. In the DT approach, translating the document collection using SMT took on average around three days using 200 CPU cores (each has 20 GB of RAM) for each language, which means it took us 21 days to translate 1.1 mil English documents into seven languages. While NMT translation was around ten times faster, using 20 GPUs only (GeForce RTX 2080Ti and Quadro P5000) with 10 GB of GPU RAM took around 20 days to translate the documents into the target languages. While for the QT approach, the translation process was pretty fast, where it took around 15 minutes to translate 66 queries from seven languages into English using SMT systems and around 3 minutes to do the same using NMT. 7 Conclusions We presented a comparative study between query-translation (QT), and document translation (DT) approaches in the Cross-Lingual Information Retrieval (CLIR) task. 
To conduct this study, we investigated various MT systems and their configura-tions and performed a thorough large-scale evaluation based on the test collection produced within the CLEF eHealth tasks on patient-centered information retrieval during 20132015, and extended with additional relevance assessments. We experimented with both statistical and neural MT paradigms. The SMT systems for QT were specifically trained and tuned to translate medical search queries. For DT, we trained two SMT systems: the first one was built to produce word forms, and the second one to produce word lemmas. We then used these two systems to translate the test collection into seven European languages. Furthermore, we performed lemmatization and stemming on the collection that was translated using the SMT system that produces word forms. The results showed that a well-tuned QT system outperforms DT, which is a positive result with an important impact on practical applications. So far, the QT approach has been preferred mainly for efficiency reasons (less space and computation needed). Our experiments suggest that this approach is even more effective (better retrieval results). We also investigated the effect of using neural machine translation, which is now considered the state-of-the-art in many domains. This completely new paradigm in machine translation tends to improve the fluency of generated output (which is appreciated by humans), but often mismatches content and adequacy (which might hurt the performance in IR). In our experiments, NMT improved retrieval results in both QT and DT, but the QT approach is still superior, so the results are consistent with the findings from the SMT experiments. However, we emphasize that the way we trained our MT systems is very domain-specific (medi-cal domain), and we made use of a vast amount of medical data (monolingual and parallel). This makes our comparative study very task-oriented. When dealing with general domain test collection, some search terms might have a different meaning in different domains. For example, the word development probably in most cases means in medicine the growth or spread of a disease (or a tumor), while in the general domain we can not say without a context, and in that case, the need for linguistics information in the queries will be more important to solve the translation ambiguity.",
"This should be considered when comparing QT and DT approaches; thus, the reader should be careful when drawing the same conclusion of this work while working on a different domain.",
"This work was supported by the Czech Science Foundation (grant n. 19-26934X) and data and tools provided by the LINDAT/CLARIAH-CZ Research Infrastructure (https://lindat.cz), supported by the Ministry of Education, Youth and Sports of the Czech Republic (Project No. LM2018101).",
"We would like to thank everyone who helped us with this work, including Ondrej Bojar for his feedback on the design of the MT systems and Zain Baaity for this effort in the annotation of our system results."
] | [
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other"
] |
[
"The goal of this work is to build conversational Question Answering (QA) interfaces for the large body of domain-specific information available in FAQ sites.",
"We present DoQA, a dataset with 2,437 dialogues and 10,917 QA pairs.",
"The dialogues are collected from three Stack Exchange sites using the Wizard of Oz method with crowdsourcing.",
"Compared to previous work, DoQA comprises well-defined information needs, leading to more coherent and natural conversations with less factoid questions and is multi-domain.",
"In addition, we introduce a more realistic information retrieval (IR) scenario where the system needs to find the answer in any of the FAQ documents.",
"The results of an existing, strong, system show that, thanks to transfer learning from a Wikipedia QA dataset and fine tuning on a single FAQ domain, it is possible to build high quality conversational QA systems for FAQs without in-domain training data.",
"The good results carry over into the more challenging IR scenario.",
"In both cases, there is still ample room for improvement, as indicated by the higher human upperbound.",
"The overarching objective of our work is to access the large body of domain-specific information available in Frequently Asked Question sites (FAQ for short) via conversational Question Answering (QA) systems.",
"In particular, we want to know whether current techniques are able to work with limited training data, and without needing to gather data for each target FAQ domain.",
"In this paper we present DoQA , a task and associated dataset for accessing domain-specific FAQs via conversational QA 1 .",
"The dataset contains 2,437 information-seeking ques-tion/answer dialogues on three different domains 1 The DoQA dataset is available here: http://ixa.",
"(10,917 questions in total).",
"These dialogues are created using the Wizard of Oz technique by crowdworkers that play the following two roles: the user asks questions about a given topic posted in Stack Exchange 2 , and the domain expert replies to the questions by selecting a short span of text from the long textual reply in the original post.",
"The first question is prompted by the real FAQ question, which sets the topic of interest driving the user questions.",
"In addition to the extractive span, we also allow experts to rephrase it, in order to provide an abstractive, more natural, answer.",
"The dataset covers unanswerable questions and some relevant dialogue acts.",
"We focused on three different domains: Cooking, Travel and Movies.",
"These forums 2 https://stackexchange.com/ 7303 are some of the most active ones and contain knowledge of general interest, making it easily accessible for crowdworkers.",
"DoQA contains two scenarios: in the standard scenario the test data comprises the questions and the target document from which the answers need to be extracted; in the information retrieval (IR) scenario the test data contains the questions, but the target document is unknown, and the system needs to select the documents which contain the answers among all documents in the collection.",
"Previous work on conversational QA datasets include CoQA (Reddy et al., 2018) and QuAC (Choi et al., 2018).",
"The main focus of CoQA are reading comprehension questions, which are produced with access to the target paragraph.",
"The topic of the questions are delimited by the paragraph, which leads to specific questions about details in the paragraph.",
"Choi et al. (2018) observed that a large percentage of CoQA answers are named entities or short noun phrases.",
"In QuAC, the topic of the conversation is set by a title and first paragraph of a Wikipedia article about people.",
"The user makes up questions about the person of interest.",
"Note that, contrary to our setting, there is no real information need in any of those datasets, which can lead to less coherent conversations: any question about the paragraph or person of interest is valid, respectively.",
"DoQA makes the following contributions .",
"Firstly, contrary to made-up reading comprehension tasks, DoQA reflects real user needs, as defined by a topic in an existing FAQ.",
"Good results on DoQA are of practical interest, as they would show that effective conversational QA interfaces to FAQs can be built.",
"Secondly, for the same reason, the conversations in DoQA are more coherent, natural and contain less factoids than other datasets, as shown by our analysis.",
"Thirdly, the IR scenario and the multiple domains make DoQA more challenging and realistic.",
"Table 1 summarizes the characteristics of DoQA.",
"Although one could question the small size of our dataset, our goal is to test whether current techniques are able to work with limited training data, and without needing to gather data for each target FAQ domain.",
"We thus present results of an existing strong conversational QA model with limited and out-of-domain data.",
"The system trained on Wikipedia data (QuAC) provides some weak results which are improved when fine-tuning on DoQA QuAC CoQA Real information need (cid:75) Naturalness (cid:75) Dialogue coherence (cid:75) Non-factoid questions (cid:75) Unanswerable questions (cid:75) (cid:75) Dialogue acts (cid:75) (cid:75) Multi-domain (cid:75) (cid:75) IR scenario (cid:75) Table 1: Summary of the characteristics of DoQA compared to QuAC and CoQA.",
"(cid:75) for positive.",
"the FAQ dataset.",
"Our empirical contribution is to show that a relatively low amount of training in one FAQ dataset (1000 dialogues on Cooking) is suf-ficient for strong results on Cooking (comparable to those obtained in the QuAC dataset with larger amounts of training data), but also on two other totally different domains with no in-domain training data (Movies and Travel).",
"In all cases scores over 50 F1 are reported.",
"Regarding the IR scenario, an IR module complements the conversational system, with a relatively modest drop in performance.",
"The gap with respect to human performance is over 30 points, showing that there is still ample room for system improvement.",
"Conversational QA systems stem from the body of work on Reading Comprehension, whose goal is to test the capacity of a system to understand a document by answering any question posed over its content.",
"Recent work on the field has resulted in the creation of multiple datasets (Rajpurkar et al., 2016; Trischler et al., 2017; Nguyen et al., 2016; Kocisky et al., 2018; Dunn et al., 2017).",
"These datasets are typically composed of multiple ques-tion/answer pairs, often along with a reference passage from which the answer is curated.",
"Whereas the questions are always in free text form, some datasets represent the answers as a contiguous span in the reference passage, while others contain free form answers.",
"The former are usually referred as extractive , whereas the latter are called abstractive .",
"All in all, in these QA datasets the queries are unrelated to each other, and thus there is no dialogue structure involved.",
"Iyyer et al. (2017) propose to answer complex queries by decomposing them into sequences of single, co-referent queries.",
"The question sequence can be seen as different turns in a dialogue, and each question refers and refines previous ones.",
"The 7304 authors present the SequentialQA dataset, which comprises 6K question sequences posed over the content of Wikipedia tables.",
"In the case of our task, it is the user who makes several questions in sequence.",
"More similar to our work, CoQA (Reddy et al., 2018) and QuAC (Choi et al., 2018) are two conversational QA datasets comprising QA dialogues that fulfill the information need of a user by answering questions about different topics.",
"Similarly to our, both datasets are built by crowdsourcing, where one person (the questioner) is presented with a topic and has to pose free-form questions about it.",
"Another person (the answerer) has to select an answer to the question by choosing an excerpt from the relevant passage describing the topic.",
"Some of the questions in both datasets are unanswerable, and access to previous questions and answers are needed in order to answer some of the questions.",
"CoQA contains 127k questions with answers, obtained from 8k conversations about passages from broad domains, ranging from children stories to science.",
"The answers are also excerpts from the relevant passage, but answerers have the choice of reformulating them.",
"The authors report that 78% of the answers had at least one edit.",
"Although reformulating answers can yield to more natural dialogues, Yatskar (2018) showed that span based systems can in principle obtain a performance up to 97 .",
"8 points F1, showing that editing the answers does not yield to systems with better quality.",
"In CoQA, both questioner and answerer have access to the full passage, which guides the conversation towards the specific information conveyed in it.",
"QuAC is a dataset that contains 14k information-seeking question answering dialogues.",
"The dialogues in QuAC are about a specific section in Wikipedia articles about people.",
"The answerer has access to the full section text, whereas the questioner only sees the section's title and the first paragraph of the main article, which serves as inspiration when formulating the queries.",
"QuAC also contains dialogue acts in each turn, which are useful when collecting the dialogues, as they can be used by the answerer to indicate to questioner whether to continue making questions about the last answer or drift to other aspects of the topic.",
"We will compare CoQA and QuAC in more detail in Section",
"4. Previous conversational QA datasets provide the relevant document or passage that contain the answer of a query.",
"However, in many real world scenarios such as FAQs, the answers need to be searched over the whole document collection.",
"In related question answering research, Chen et al. (2017) and Watanabe et al. (2017) combine retrieval and answer extraction on a large set of documents.",
"In (Talmor and Berant, 2018) the authors propose decomposing complex questions into a sequence of simple questions, and using search engines to answer those single questions, from which the final answer is computed.",
"We find that requiring the system to search for relevant documents and passages is more realistic, and DoQA is the first conversational QA task incorporating this scenario.",
"In contemporary work, Castelli et al. (2019) present a question answering dataset for the technical support domain which focuses on actual questions posed by users and has a real-world size with only 600 training instances.",
"It also requires systems to examine 50 documents per query.",
"Our work has similar motivations for setting up more realistic tasks, and is complementary in the sense that we cover non-technical domains and conversatioal QA.",
"Community Question Answering has been also the focus of two related tasks (Nakov et al., 2016, 2017), where, given a new question and a collection of pre-existing questions and answers, the systems need to rank the answers that are most useful for answering the new question.",
"This section describes our conversational QA dataset collection process which consists of an interactive task designed for two crowdworkers in Amazon Mechanical Turk (AMT).",
"We collected topic-answer pairs for the three different domains from the Stack Exchange data dumps.",
"We focused on the Cooking 3 , Travel 4 and Movies 5 domains, as they are active forums and contain knowledge of general interest, making it easily accessible and attractive for crowdworkers.",
"Note that the posts in Stack Exchange (as in most FAQ sites) comprise broad questions which often require lengthy answers.",
"We refer to the question in the post as topic and to the long answer in the post as passage (not to be confused with the actual ques-3 https://cooking.stackexchange.com/ 4 https://travel.stackexchange.com/ 5 https://movies.stackexchange.com/ 7305 tions/answers in the collected dialogues).",
"Figure 1 shows an example of a topic and its corresponding passage for the Cooking domain.",
"More details on post filtering and selection can be found in Appendix A. 3.2 Crowdsourcing Task For the annotation process, we defined a HIT in AMT as the task of generating a dialogue about a specific topic between two workers (the specifications of the defined HIT can be found in Appendix B).",
"One of the workers (the user) asks questions to the second one (the domain expert) about a certain topic from a Stack Exchange Cooking, Travel or Movies thread.",
"The worker who adopts the user role has access to a small paragraph that introduces the topic.",
"Having this information, he must ask free text questions.",
"The first question of every dialogue must be the title of the topic that appears in the title of the Stack Exchange thread.",
"The domain expert has access to the whole answer passage and he/she answers the query by selecting a span of text from it.",
"In order to make the dialogue look more natural, the domain expert has the opportunity to edit the answer, but note that if he does so the answer will not match the content of the text span anymore.",
"Therefore, and following Yatskar (2018), we motivate minimal modifications by copying the selected text span directly into the answer field in the web application.",
"In addition to the span of text, the expert has to give feedback to the user with one of the following dialogue acts: an affirmation act, which is is required when the question is a Yes/No question ( yes , no or neither ); an answerability act, which defines if the question has an answer or not ( answerable or no answer ).",
"When no answer is selected, the returned string is I don't know; and a continuation dialogue act, which is used for leading the user to the most interesting topics ( follow up or don't follow up ).",
"The last dialogue act is used to minimally guide the user in his/her questions, where the expert can encourage (or dicourage) the user to continue with questions related to his last questions using follow up (or alternatively don't follow up ).",
"These dialogue acts are the same as in QuAC, but we discarded the maybe follow up act from the continuation act because we felt it was not intuitive enough.",
"Dialogues are ended when a maximum of 8 question and answer pairs is reached, when 3 unanswerable questions have been asked, or when 10 min-Cooking Travel Movies Train Dev.",
"utes time limit is reached.",
"The purpose of these limits is to avoid long and repetitive dialogues, because real threads of the selected domains are very focused on a certain topic.",
"Dialogues are only accepted if they have a minimum length of 2 question and answer pairs and if they have at least one answer that is not I don't know sorry.",
"The data collection interface is based on CoCoA 6 , which we modified.",
"The interfaces for the user and expert are shown in Appendix C. 3.3 Dataset Details Following usual practice, we divided the main Cooking dataset into a train, development and test splits.",
"For the other two domains, Travel and Movies, we only have the test split.",
"Statistics for all the domains and splits are shown in Table 2.",
"The splits of the Cooking dataset have very similar characteristics, so we can expect them to be valid representatives of the whole Cooking dataset.",
"In the test splits we do not allow more than one dialogue about the same section, as it can end up producing inaccurate evaluation of the models.",
"In order to estimate the performance of a human in the task, we collected additional answers for the test splits for the three domains in a second round, after having completed the dialogues.",
"For each question in the dialogues collected in the first round, we show to the worker the previous questions and answers in the dialogue (if available), and he has to provide an answer span.",
"The interface for the collection of multiple answers can be seen in Appendix D. 6 https://github.com/stanfordnlp/cocoa (He et al., 2017) 7306 3.5 Information Retrieval Scenario In the usual setting for this kind of tasks, the system is given the question and the passage where the answer is to be extracted from.",
"In a realistic scenario, however, relevant answer passages that may contain the answer will need to be retrieved first.",
"More specifically, if a user has an information need and asks a question to a conversational QA system on a FAQ, the system can search for similar questions which have already been answered, or the system can directly search in existing answer passages.",
"In other words, there are two ways to check automatically if the forum contains a relevant answer passage to a new question: (1) question retrieval, where relevant or similar questions are searched (and thus, the answer for this relevant question is taken as a relevant answer), and (2) answer retrieval, where relevant answers are searched directly among existing answers.",
"We added information about both relevant cases to the main Cooking dataset, in the form of the 20 most relevant answer passages for each dialogue in the dataset.",
"We followed a basic approach to get these relevant answer passages.",
"We created two separate indexes using an IR system 7 for the two mentioned approaches, question and answer retrieval.",
"For the former, we indexed the original topics posted in the forum; and for the latter, we indexed the answer passages for each post in the forum.",
"Then, for each dialogue in the development and test splits, the top 20 documents were retrieved using the first question of the dialogue.",
"Given that the dialogues are about a single topic, we only use the first question in the dialogue, and then use the retrieved passages for the rest of questions in the dialogue as well.",
"The question retrieval approach yields very good results (0.94 precision at one), as expected, as the crowdworker doing the questions has access to the topic when asking the first question and usually did minor edits.",
"The results for answer retrieval are more modest, 0.54 precision at one.",
"The results section shows the results of the conversational QA system when relying on the passages returned by the IR module.",
"Overall statistics In this section we present an quantitative and qualitative analysis of DoQA and we compare them to similar conversational datasets 7",
"Solr https://lucene.apache.org/solr/",
"like QuAC and CoQA, stressing its similarities and differences.",
"Table 3 shows the overall statistics of DoQA, together with the statistics of QuAC and CoQA.",
"As can be seen, DoQA has the smallest amount of questions and dialogues.",
"However, other features makes it very interesting for the research of conversational QA.",
"For instance, the average tokens per questions and answers ( 10 . 43 and 12 . 99 , respectively) are closer to real dialogues if we compare to the other datasets.",
"Specially CoQA has very short questions and answers on average, suggesting that CoQA is closer to factoid QA than dialogue, as human dialogues tend to be longer and convoluted, not just short answers.",
"DoQA has the lower ratio of questions per dialogue, which is expected, as most of the dialogues are about a very specific topic and the user is satisfied and gets the answer without the need of long dialogues.",
"CoQA ends up on having almost all of its questions answerable, facing the same issues as SQuAD 1.0 (Rajpurkar et al., 2016) that motivated the addition of unanswerable questions in SQuAD 2.0 (Rajpurkar et al., 2018).",
"We also have the results of a short survey that workers had to respond to at the end of each HIT.",
"On the one hand, the user had to give feedback on how satisfied was with the answers of the expert in a scale of 1-5.",
"The average satisfaction was 3.9.",
"On the other hand, the expert had to give feedback on how sensible were the questions and the helpfulness of the answers.",
"The average scores obtained were 4.27 and 4.10, respectively, which makes the AMT task satisfactory.",
"Naturalness One of the main positive aspects of our dataset is the naturalness of the dialogues that other similar datasets like QuAC do not have.",
"The answers of DoQA come from a forum where the answer text is directed to a person who posted the 7307 question, and does not come from a much formal text like Wikipedia, as it is the case of QuAC.",
"The naturalness and casual register of the former it is more adequate than the formal register of the latter for a conversational QA system.",
"The dialogue in Figure 1 is a clear example of such naturalness, where the expert answers to the user with casual and directed expressions like You may want and you may be having .",
"To verify whether dialogues in DoQA are more natural than the ones in QuAC, we sample randomly 50 dialogues in DoQA Cooking domain and QuAC and performed A/B testing to determine which of the two dialogues is more natural.",
"This test showed that 84% of the times DoQA dialogues are more natural.",
"This naturalness is probably caused because a dialogue in DoQA is started by a user with a very specific aim or topic to solve in mind, and thus, follow-up questions are very related to previous answers, and all the questions are set within a context.",
"In contrast, dialogues in QuAC do not show so clear objective and questions seem to be asked randomly.",
"Dialogues in DoQA are ended when the initial information need of the user is satisfied and this adds naturalness to dialogues.",
"Further analysis of the samples showed that answers in DoQA seem to be more spontaneous because they have more orality aspects, such as higher level of expressivity ( Normally when I try they end up burned not crispy! , My biggest worry here would be... , hey let's not be hasty ), opinions ( I came across a suggestion to cover the lid... , I'd recommend simply adding... , It sounds like fermentation to me ) and humor ( well yeah but booze is booze ).",
"Contrarily, answers in QuAC are more hermetic and do not show any features of orality or spontaneity that a dialogue should have.",
"All these features make DoQA dialogues look more natural.",
"We also analyzed the remaining 16% cases where DoQA dialogues appear less natural.",
"In most of these dialogues there were responses that did not really answer the question.",
"The following question (Q) and answer (A) pairs are good examples of it: (Q) Is the taste going to be significantly different?",
"(A)",
"there is cornstarch in confectioner's sugar ;",
"(Q)",
"how about reheating?",
"(A)",
"When you defrost it, do so in your fridge leaving it overnight so that it defrosts gradually ;",
"(Q)",
"Can I use my potatoes or carrots if they already have some roots?",
"(A)",
"The green portions of a potato are toxic .",
"In some of these cases the correct answer for the respective question is not in the answer text provided to the expert.",
"If this was the case, the expert should answer I don't know, instead of giving a nonsense answer.",
"Question types Table 4 includes the most frequent two initial words of the questions in the Cooking dataset along with their percentages of occurrences and some examples.",
"Most of the questions start with what and how ( 16 . 6% and 15 . 1% of the questions, respectively), which are also the most frequent in QuAC and CoQA.",
"Contrary to them, the questions in the Cooking dataset do not refer to factoids, with the exception of How long questions.",
"The questions in DoQA require long and complex answers.",
"In contrast to this, in CoQA and QuAC many of the most frequent initial words such as who , where , and when indicate factoid questions.",
"In order to confirm this fact, we manually inspected 50 random questions from the Cooking domain and QuAC datasets.",
"This analysis revealed that 66% of the questions are non-factoid in the DoQA Cooking domain, showing that most of the questions are open-ended.",
"These amount is larger than in QuAC, as in our analysis for QuAC we found that only 36% of the questions are non-factoid.",
"These values differ slightly from those reported by Choi et al. (2018), as they say that about half of questions are non-factoid.",
"Context or history dependence The manual analysis also shows that 61% of the questions are dependent on the conversation history, as many questions have coreferences to previous questions or answers in the dialogue.",
"For example, What are other methods to sharpen a knife? , How long should I cook it in the microwave? , Can you explain the science behind this cooking procedure? .",
"Moreover, we could note that less than 1% ask further advice or tips about the current topic, con-firming that these conversations are about specific topics where the user is satisfied with the expert answers after a few questions.",
"Dialogue coherence Related to the just mentioned fact that the user does not usually ask any other tips, users in DoQA do not tend to switch topics in a dialogue.",
"In order to confirm it, we performed another A/B testing to the same 50 dialogues samples of the DoQA Cooking domain and QuAC to determine which of the two dialogues is more coherent, that is, which dialogue has a smoother flow.",
"This test revealed that in 64% of 7308 Bigram prefix % Example What is 30.8 What is the purpose of adding water to an egg wash?",
"the cases dialogues of DoQA are more coherent than QuAC.",
"Only in 10% of the cases dialogues of DoQA are less coherent, with the remaining 26% equally coherent.",
"We analyzed the 10% and saw that they contain similar questions one after the other, or repeated answers in the same dialogue.",
"Summary Table 1 summarizes the positive characteristics of DoQA compared to the similar datasets like QuAC and CoQA.",
"Given a textual passage and a question, traditional QA systems find an answer to the question within the passage.",
"Conversational QA systems are more complex, as they need to deal with a sequence of possibly inter-dependent questions.",
"That is, the meaning of the current question may depend on the dialogue history.",
"For this reason, a dialogue history comprised by previous question/answer pairs is also provided to the system.",
"In addition, some dialogue acts have to be predicted as an output: yes/no answers, which are required for affirmation questions, and continuation feedback, which might be useful for information-seeking dialogues.",
"We denote the answer passage as p , the dialogue history of questions and respective ground truth answers as { q 1 , a 1 , ...q k 1 , a k 1 } , current question as q k , the answer span a k which is delimited by its starting index i and ending index j in the passage p , and dialogue act list v .",
"The dialogue act list contains { yes,no,} values for predicting affirmation and { follow-up,don't follow-up } for continuation feedback.",
"We present two strong baseline models to address our task.",
"Although the state-of-the-art evolves quickly, our choice has the benefit of simplicity and strong performance.",
"BERT We took the fine-tuning approach for QA of BERT, which predicts the indexes i and j of the a k answer span given p and q k as input.",
"This baseline has shown strong performance on QA datasets such as SQuAD (Devlin et al., 2018).",
"BERT+HAE The previous baseline does not model dialogue history.",
"We used BERT with History Answer Embedding (HAE) as proposed by Qu et al. (2019) as a baseline that deals with the multi-turn problem, as this is the publicly available system that performs best in the QuAC leader-board 8 .",
"The system introduces dialogue history { q 1 , a 1 , ...q k 1 , a k 1 } to BERT by adding a history answer embedding layer, which learns whether a token is part of history or not.",
"Evaluation metrics Given the similarity between QuAC and DoQA, we use the same evaluation metrics and criteria used in QuAC.",
"F1 is the main evaluation metric and is computed by the overlap at word level of the prediction and reference answers.",
"As the test set contains multiple answers for each question we take the maximum F1 among them.",
"Note that when computing F1 QuAC filters 8 accessed on August 20, 2019 7309 Cooking Travel Movies Setting Model F1 HEQ-Q F1all F1 HEQ-Q F1all F1 HEQ-Q F1all Native BERT 40.1 35.1 38.3 36.2 34.8 34.8 36.1 33.5 35.0 BERT+HAE 47.8 43.0 45.9 44.0 37.4 42.9 42.8 37.1 41.9 Zero-shot BERT 40.2 34.7 38.9 34.0 30.1 33.1 38.2 33.2 37.4 BERT+HAE 46.2 42.0 44.5 42.7 37.1 42.3 45.4 41.4 44.8 Transfer BERT 43.3 37.8 42.4 40.6 33.6 40.1 41.8 36.3 41.3 BERT+HAE 53.2 48.3 51.4 50.8 42.1 50.6 51.6 44.3 51.5 Transfer BERT 43.1 37.0 42.0 40.6 33.4 40.5 42.0 34.5 41.6 all BERT+HAE 53.4 46.9 52.7 51.6 43.3 50.9 52.1 45.2 51.7 Human -100.0 86.6 -100.0 87.4 -100.0 88.8 Table 5: Results of the baseline systems in the three DoQA domains (columns) in all four settings (rows).",
"out answers with low agreement among human annotators.",
"An additional F1-all is provided for the whole set.",
"We also report HEQ-Q (human equivalence score on a question level) which measures the percentage of questions for which system F1 exceeds or matches human F1.",
"Experimental Setup We carried out experiments using the extractive information in DoQA, leaving the abstractive information for the future.",
"The parameters we used to train the baseline models are the ones proposed in the original papers.",
"We tested the models in four settings.",
"In the native setting the Cooking DoQA train and dev data are used, the first for training and the second for early stopping.",
"In the zero-shot setting we use QuAC training data for training and early stopping.",
"In the transfer setting we use QuAC and Cooking DoQA for training.",
"Finally, in the transfer all setting we additionally use the test data from the other two domains for training.",
"We also experimented on the IR scenario, using the provided IR rankings (see Section 3.5), which contain the top 20 passages for each dialogue.",
"In the first experiment, Top-1 , we just use the top 1 passage and apply the baseline BERT model.",
"In a second experiment, Top-20:BERT , the passages are fed to the BERT model and the passage that contains the answer with highest confidence score is selected.",
"Note that we discard passages that produced I don't know type of answers.",
"In a third experiment, Top-20:BERT*IR , we select the passage with highest combined score according to BERT and the search engine.",
"All the reported results have been achieved using the BERT Base Uncased model.",
"Results Table 5 summarizes our results.",
"In the bottom row we give the human upperbound.",
"The three metrics used for evaluation behave similarly, so we focus on one (e.g. F1) for easier discussion.",
"We report all three for completion and easier comparison with related datasets.",
"In all settings and domains the BERT+HAE model yields better results than BERT, showing that DoQA is indeed a conversational dataset , where question and answer history needs to be modelled.",
"Regarding the different settings, we first focus on the Cooking dataset.",
"The native scenario and the zero-shot settings yield similar results, showing that the 1000 dialogues on Cooking provide the same performance as 13000 dialogues on Wikipedia from QuAC 9 .",
"The combination of both improves performance by 7 points (Trans-fer row), with small additional gains when adding Movies and Travel dialogues for further fine-tuning (Transfer all row).",
"Note that the performance obtained for Cooking in the Transfer or Transfer all setting is comparable to the one reported for QuAC , where the training and test are from the same domain 10 .",
"Yet, the most interesting results are those for the Travel and Movies domains, which do not have access to in-domain training data on Travel or Movies.",
"In this case, the native and transfer results with 9 When randomly subsampling QuAC to the same size as DoQA the results on the cooking domain fall down to 36.5.",
"10 BERT+HAE obtains 62.4 in QuAC (Qu et al., 2019), 9 points higher than in DoQA Cooking, but note that QuAC contains more reference answers per question than DoQA, and thus the resulting F1 scores are higher.",
"When evaluating BERT+HAE using a single reference answer in both datasets, the score is 45.9 on QuAC and 47.8 on the Cooking dataset of DoQA.",
"These results show that it is not necessary to train for each domain in a FAQ, and that training data from other FAQ domains is highly reusable.",
"The results obtained on out-of-domain test conversations (Movie and Travel) when trained on Wikipedia and Cooking are striking, as they are comparable to the in-domain results obtained for the Cooking test conversations.",
"We hypothesize that when people write the answer documents in FAQ websites such as Stackexchange, they tend to use linguistic patterns that are common across domains such as Travel, Cooking or Movies.",
"This is in contrast to Wikipedia text, which is produced with a different purpose, and might contain different linguistic patterns.",
"As an example, in contrast to FAQ text, Wikipedia text does not contain first-person and second-person pronouns.",
"We leave an analysis of this hypothesis for the future.",
"Table 6 presents the results of the experiments on the IR scenario .",
"The simplest Top-1 approach is the best performing for both question and answer retrieval strategies.",
"We leave the exploration of more sophisticated techniques for future work.",
"The results using question retrieval are very close to those in Table 5.",
"Given the large gap in the IR results in Section 3.5 for answer retrieval, it is a surprise to see a small 5 point decrease with respect to question retrieval.",
"We found that there is a high correlation between the errors of the dialogue system and the answer retrieval system, which explains the smaller difference.",
"In both retrieval strategies the results are close to the performance obtaining when having access to the reference target passage.",
"The goal of this work is to access the large body of domain-specific information in the form of Frequently Asked Question sites via conversational QA systems.",
"We have presented DoQA, a dataset for accessing Domain specific FAQs via conversational QA that contains 2,437 information-seeking dialogues on the Cooking, Travel and Movies domain (10,917 questions in total).",
"These dialogues are created by crowdworkers that play the following two roles: the user asks questions about a certain topic posted in Stack Exchange, and the domain expert who replies to the questions by selecting a short span of text from the long textual reply in the original post.",
"The expert can rephrase the selected span, in order to make it look more natural.",
"In contrast to previous conversational QA datasets, our dataset responds to a real information need, is multi-domain, more natural and coherent.",
"DoQA introduces a more realistic scenario where the passage with the answer needs to be retrieved.",
"Together with the dataset, we presented results of a strong conversational model, including transfer learning from Wikipedia QA datasets to our FAQ dataset.",
"Our dataset and experiments show that it is possible to access domain-specific FAQs using conversational QA systems with little or no in-domain training data, yielding quality which is comparable to those reported in QuAC.",
"For the future, we would like to exploit the abstractive answers in our dataset, explore more sophisticated systems in both scenarios and perform user studies to study how real users interact with a conversational QA system when accessing FAQs.",
"This research was partially supported by a Google Faculty Award, EU ERA-Net CHIST-ERA LIH-LITH funded by the Agencia Estatal de Investigacion (AEI, Spain) project PCIN-2017-118 and the Swiss National Science Foundation (SNF, Switzerland) project 20CH21 174237, project DeepReading (RTI2018-096846-BC21) supported by the Ministry of Science, Innovation and Universities of the Spanish Government, the Basque Government (DL4NLP KK-2019/00045 and excel-lence research group), project BigKnowledge (Ayu-das Fundacion BBVA a Equipos de Investigacion Cientfica 2018) and the NVIDIA GPU grant program.",
"Jon Ander Campos enjoys a doctoral grant from the Spanish MECD."
] | [
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"other",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"objective",
"other",
"other"
] |
[
"Following each patient visit, physicians draft long semi-structured clinical summaries called SOAP notes.",
"While invaluable to clinicians and researchers, creating digital SOAP notes is burdensome, contributing to physician burnout.",
"In this paper, we introduce the first complete pipelines to leverage deep summarization models to generate these notes based on transcripts of conversations between physicians and patients.",
"After exploring a spectrum of methods across the extractive-abstractive spectrum, we propose CLUSTER 2S ENT , an algorithm that",
"(i) extracts important utterances relevant to each summary section;",
"(ii) clusters together related utterances; and then",
"(iii) generates one summary sentence per cluster.",
"CLUSTER 2S ENT outperforms its purely abstractive counterpart by 8 ROUGE-1 points, and produces significantly more factual and coherent sentences as assessed by expert human evaluators.",
"For reproducibility, we demonstrate similar benefits on the publicly available AMI dataset.",
"Our results speak to the benefits of structuring summaries into sections and annotating supporting evidence when constructing summarization corpora.",
"Electronic health records (EHR) play a crucial role in patient care.",
"However, populating them can take as much time as attending to patients (Sinsky et al., 2016) and constitutes a major cause of physician burnout (Kumar and Mezoff, 2020).",
"In particular, doctors document patient encounters with SOAP notes, semi-structured written accounts containing four sections: (S)ubjective information reported by the patient; (O)bjective observations, e.g., lab results; (A)ssessments made by the doctor (typically, the diagnosis); and a (P)lan for future care, including diagnostic tests, medications, and treatments.",
"Sections can be subdivided into 15 subsections.",
"In a parallel development, patients increasingly record their doctor's visits, either in lieu of taking notes or to share with a family member.",
"A budding line of research has sought to leverage transcripts of these clinical conversations both to provide insights to patients and to extract structured data to be entered into EHRs (Liu et al., 2019b; Schloss and Konam, 2020; Krishna et al., 2021).",
"In this paper, we introduce the first end-to-end methods for generating whole SOAP notes based on clinical conversations.",
"Our work builds on a unique corpus, developed in collaboration with Abridge AI, Inc. 1 ), that consists of thousands of transcripts of recorded clinical conversations together with associated SOAP notes drafted by a work force trained in the official style of SOAP note documentation.",
"On one hand, this task is much harder than traditional summarization benchmarks, in part, because SOAP notes are longer (320 words on average) than summaries in popular datasets like CNN/Dailymail (Nallapati et al., 2016), Newsroom (Grusky et al., 2018), and SamSum (Gliwa et al., 2019) (55, 27, and 24 words on average).",
"On the other hand, our dataset offers useful structure:",
"(i) segmentation of each SOAP note into subsections; and",
"(ii) a set of supporting utterances that provide evidence for each sentence in the SOAP note.",
"Exploiting this structure, our methods outperform appropriate baselines.",
"Our first methodological contribution is to propose a spectrum of methods, for decomposing sum-marizaton tasks into extractive and abstractive subtasks.",
"Starting from a straightforward sequence-to-sequence model, our methods shift progressively more work from the abstractive to the extractive component:",
"(i) CONV 2N OTE : the extractive module does nothing, placing the full burden of summarization on an end-to-end abstractive module.",
"(ii) 1 http://abridge.com ...DR: So are you taking the Monteluekast",
"EXT 2N OTE : the extractive module selects all utterances that are noteworthy (i.e., likely to be marked as supporting utterances for at least one SOAP note sentence), and the decoder is conditioned only on these utterances;",
"(iii) EXT 2S EC : the extractive module extracts per-subsection noteworthy utterances and the decoder generates each subsection, conditioned only on the corresponding utterances;",
"(iv) CLUSTER 2S ENT : the extractive module not only extracts per-subsection noteworthy utterances but clusters together those likely to support the same SOAP sentencehere, the decoder produces a single sentence at a time, each conditioned upon a single cluster of utterances and a token indicating the SOAP subsection.",
"We see consistent benefits as we move from approach",
"(i) through",
"(iv).",
"Both to demonstrate the generality of our methods and to provide a reproducible benchmark, we conduct parallel experiments on the (publicly available) AMI corpus (Carletta, 2007) 2 Like our medical conversations dataset, the AMI corpus exhibits section-structured summaries and contains annotations that link summary sentences to corresponding supporting utterances.",
"Our experiments with AMI data show the same trends, favoring pipelines that demand more from the extractive component.",
"These results speak to the wider usefulness of our proposed approaches, EXT 2S EC and CLUSTER 2S ENT , whenever section-structured summaries and annotated evidence utterances are available.",
"Our best performing model, CLUSTER 2S ENT (Figure 1), demands the most of the extractive module, requiring that it both select and group each subsection's noteworthy utterances.",
"Interestingly, we observe that given oracle (per-subsection) noteworthy utterances, a simple proximity-based 2 Our code and trained models for the AMI dataset: https://github.com/acmi-lab/ modular-summarization clustering heuristic leads to similar performance on SOAP note generation as we obtain when using ground-truth clusterseven though the ground truth noteworthy utterances are not always localized.",
"Applied with predicted noteworthy utterances and clusters, this approach achieves the highest ROUGE scores and produces the most useful (fac-tual, coherent, and non-repetitive) sentences as rated by human experts.",
"As an additional benefit of this approach, due to the smaller lengths of the input and output sequences involved, we can feasibly train large transformer-based abstractive summarization models (e.g., T5), whose memory requirements grow quadratically with sequence length.",
"Additionally, our approach localizes the precise utterances upon which each SOAP note sentence depends, enabling physicians to verify the correctness of each sentence and potentially to improve the draft by highlighting the correct noteworthy utterances (versus revising the text directly).",
"In summary, we contribute the following: The first pipeline for drafting entire SOAP notes from doctor-patient conversations.",
"A detailed human study to evaluate the factuality and quality of generated SOAP notes, and qualitative error analysis.",
"A new collection of extractive-abstractive approaches for generating long section-segmented summaries of conversations, including new methods that leverage annotations attributing summary sentences to conversation utterances.",
"A rigorous quantitative evaluation of our proposed models and appropriate baselines for both the extractive and abstractive components, including sensitivity of the pipeline to simulated ASR errors.",
"Summarization is a well-studied problem in NLP (Nenkova et al., 2011).",
"While early works focused on simply extracting important content from a document (Erkan and Radev, 2004; Wong et al., 2008), later approaches attempted to paraphrase the content into new sentences (abstractive summarization) (Filippova, 2010; Berg-Kirkpatrick et al., 2011; Wang and Cardie, 2013).",
"Following the development of neural sequence models (Sutskever et al., 2014), more research focuses on neural generation of abstractive summaries (Nallapati et al., 2016; See et al., 2017; Celikyilmaz et al., 2018).",
"While many papers summarize news articles, others summarize conversations, in business meetings (Wang and Cardie, 2013; Zhu et al., 2020), customer service (Liu et al., 2019a), and tourist information center (Yuan and Yu, 2019) contexts.",
"In the space of two-step extractive-abstractive summarization approaches, Subramanian et al. (2019) summarize scientific papers by first extracting sentences from it and then abstractively summarizing them.",
"Chen and Bansal (2018) extract important sentences from the input and then paraphrase each of them to generate the abstractive summary.",
"While they assume that each summary sentence is supported by exactly one source sentence, in our medical conversations, many summary sentences synthesize content spread across multiple dialogue turns (e.g., a series of questions and answers).",
"Past work on abstractive summarization of medical conversations has focused on summarizing patient-nurse conversations with goals including capturing symptoms of interest (Liu et al., 2019c) and past medical history (Joshi et al., 2020).",
"These tasks are respectively similar to generating the review of systems and past medical history subsections of a SOAP note.",
"In contrast, we aim to generate a full-length SOAP note containing up to 15 subsections, and propose methods to address this challenge by extracting supporting context for smaller parts and generating them independently.",
"We use two different datasets in this work.",
"The primary medical dataset, developed through a collaboration with Abridge AI, consists of doctor-patient conversations with annotated SOAP notes.",
"Additionally, we evaluate our summarization methods on the AMI dataset (Carletta, 2007), comprised of business meeting transcripts and their summaries.",
"Our work builds on a unique resource: a corpus consisting of thousands of recorded English-language clinical conversations, with associated SOAP notes created by a work force trained in SOAP note documentation standards.",
"Our dataset consists of transcripts from real-life patient-physician visits from which sensitive information such as names have been de-identified.",
"The full medical dataset consists of 6862 visits consisting of 2732 cardiologist visits, 2731 visits for family medicine, 989 interventional cardiologist visits, and 410 internist visits.",
"Owing to the sensitive nature of the data, we cannot share it publicly (an occupational hazard of research on machine learning for healthcare).",
"For each visit, our dataset contains a human-generated transcript of the conversation.",
"The transcript is segmented into utterances, each annotated with a timestamp and speaker ID.",
"The average conversation lasts 9 .",
"43 minutes and consists of around 1 .",
"5 k words (Appendix Figure A1).",
"Associated with each conversation, we have a human-drafted SOAP note created by trained, professional annotators.",
"The annotators who created the SOAP notes worked in either clinical transcription, billing, or related documentation-related departments, but were not necessarily professional medical scribes.",
"The dataset is divided into train, validation and test splits of size 5770 , 500 and 592 , respectively.",
"Our annotated SOAP notes contain (up to) 15 subsections, each of which may contain multiple sentences.",
"The subsections vary in length.",
"The Allergies subsections is most often empty, while the Assessment subsection contains 5 .",
"16 sentences on average (Table 1).",
"The average SOAP note contains 27 .",
"47 sentences.",
"The different subsections also differ in the style of writing.",
"The Medications subsection usually consists of bulleted names of medicines and their dosages, while the Assessment subsection typically contains full sentences.",
"On average, the fraction of novel (i.e., not present in the conversation) unigrams, bigrams, and trigrams, in each SOAP note are 24 .",
"09% , 67 .",
"79% and 85 .",
"22% , respectively.",
"Each SOAP note sentence is also annotated with utterances from the conversation which provide evidence for that sentence.",
"A SOAP note sentence can have one or more supporting utterances.",
"On average, each SOAP sentence has 3 .",
"84 supporting utterances, but the mode is 1 (Appendix Figure A1).",
"We refer to these utterances as noteworthy utterances Subsection Mean length Family Medical History 0.23 Past Surgical History 0.58 Review of Systems 3.65 Chief Complaint 2.17 Miscellaneous 2.81 Allergies 0.06 Past Medical History 2.93 Social History 0.27 Medications 3.74 Immunizations 0.11 Laboratory and Imaging Results 2.27 Assessment 5.16 Diagnostics and Appointments 1.65 Prescriptions and Therapeutics 1.75 Healthcare Complaints 0.09 Table 1: Average number of sentences in different SOAP note subsections grouped by parent sections ( Subjective , Objective , Assessment , Plan , Others resp.) throughout this paper.",
"Throughout this work, we deal with the 15 more granular subsections rather than the 4 coarse sections of SOAP notes, and thus for convenience, all further mentions of section technically denote a SOAP subsection .",
"The AMI dataset is a collection of 138 business meetings, each with 4 participants with various roles (e.g., marketing expert, product manager, etc.).",
"Each meeting transcript comes with an associated abstractive summary that is divided into four sections abstract , decisions , actions , and problems .",
"Each conversation also has an associated extractive summary, and there are additional annotations linking the utterances in the extractive summary to sentences in the abstractive summary.",
"For any given sentence in the abstractive summary, we refer to the linked set of utterances in the extractive summary as its noteworthy utterances .",
"We note that 7.9% of the abstractive summary sentences have no annotated noteworthy utterances.",
"To simplify the analysis, we remove these sentences from summaries in the training, validation, and test splits.",
"We investigate the following four decompositions of the summarization problem into extractive and abstractive phases, ordered from abstraction-heavy",
"to extraction-heavy: CONV 2N OTE takes an end-to-end approach, generating the entire SOAP note from the entire conversation in one shot.",
"EXT 2N OTE first predicts all of the noteworthy utterances in the conversation (without regard to the associated section) and then generates the entire SOAP note in one shot from only those utterances.",
"EXT 2S EC extracts noteworthy utterances, while also predicting the section(s) for which they are relevant, and then generates each SOAP section separately using only that section's predicted noteworthy utterances.",
"CLUSTER 2S ENT attempts to group together the set of noteworthy utterances associated with each summary sentence.",
"Here, we cluster separately among each set of section-specific noteworthy utterances and then generate each section one sentence at a time, conditioning each on the associated cluster of utterances.",
"Each of these pipelines leaves open many choices for specific models to employ for each subtask.",
"For the abstractive modules of CONV 2N OTE and EXT 2N OTE , we use a pointer-generator network.",
"The abstractive modules of EXT 2S EC and CLUSTER 2S ENT , which require conditioning on section are modeled using conditioned pointer-generator networks (described in Section 5), and fine-tuned T5 models which condition on the section being generated by means of prepending it to the input.",
"T5 models could not be used in the CONV 2N OTE and EXT 2N OTE settings because their high memory requirement for long inputs could not be accommodated even with 48GB of GPU memory.",
"For noteworthy utterance extraction, we primarily use a hierarchical LSTM model and a BERT-LSTM model as described in the next section.",
"All models are configured to have a scalar output for binary classification in EXT 2N OTE , whereas for EXT 2S EC and CLUSTER 2S ENT , they have multilabel output separately predicting noteworthiness for each section.",
"Note that the same utterance can be noteworthy with respect to multiple sections.",
"We use the same trained utterance extraction models for both EXT 2S EC and CLUSTER 2S ENT .",
"For the clustering module in CLUSTER 2S ENT , we propose a heuristic that groups together any two supporting utterances that are close , meaning they have less than or equal to utterances separating them, where is a hyperparameter.",
"This process is iterated, with the clusters growing in size by merging with other singletons or clusters, until every pair of close utterances have the same cluster membership.",
"The value of is tuned on the validation set.",
"Since each cluster necessarily produces one sentence in the SOAP note, having too many or too few clusters can make the SOAP note too long or too short, respectively.",
"Therefore, for any given value of the hyper-parameter and any given section, the prediction thresholds of the extractor are tuned on the validation set to produce approximately the same number of clusters over the entire validation set as present in the ground truth for that section.",
"Among ground truth clusters containing multiple noteworthy utterances, 82% are contiguous.",
"In an experiment where the heuristic is used to cluster the oracle noteworthy utterances for each section, and summaries are subsequently generated via the abstractive modules from CLUSTER 2S ENT , ROUGE-1 and ROUGE-2 metrics deteriorate by less than 1 point as compared to oracle clusterings (Appendix Table A3), demonstrating our heuristic's effectiveness.",
"Pointer-Generator Network We use the pointer-generator network introduced by See et al. (2017) for CONV 2N OTE and EXT 2N OTE .",
"The model is a bidirectional LSTM-based encoder-decoder model with attention.",
"It employs a pointer mechanism to copy tokens directly from the input in addition to generating them by predicting generation probabilities for the entire vocabulary.",
"The model also computes the weights that govern copying versus generating at each decoding timestep.",
"Section-conditioned Pointer-Generator Network We modify the pointer-generator network for algorithms EXT 2S EC and CLUSTER 2S ENT , to condition on the (sub)section of the summary to be generated.",
"The network uses a new lookup table to embed the section z into an embedding e z .",
"The section embedding is concatenated to each input word embedding fed into the encoder.",
"The section embedding is also appended to the inputs of the decoder LSTM in the same fashion.",
"T5 We use the recently released T5 model (Raf-fel et al., 2020) as an abstractive module.",
"It is an encoder-decoder model, where both encoder and decoder consist of a stack of transformer layers.",
"The T5 model is pre-trained on 5 tasks, including summarization, translation etc.",
"We use the pre-trained T5 model parameters and fine-tune it on our task dataset.",
"For introducing the section-conditioning in EXT 2S EC and CLUSTER 2S ENT , we simply add the name of the section being generated to the beginning of the input.",
"Hierarchical LSTM classifier(H-LSTM) In this model, we first encode each utterance u i independently by passing its tokens through a bidirectional LSTM and mean-pooling their encoded representations to get the utterance representation h i .",
"We pass the sequence of utterance representations { h 1 , h 2 , ..., h n } through another bidirectional LSTM to get new utterance representations which incorporate neighboring contexts.",
"These are then passed through a sigmoid activated linear layer to predict each utterance's probability of noteworthiness with respect to each section.",
"BERT-LSTM classifier(B-LSTM) In this model, tokens in the utterance u i are passed through a BERT encoder to obtain their contextual-ized representations, which are mean-pooled to get the utterance representation h i .",
"The subsequent architecture exactly mirrors hierarchical LSTM, and involves passing utterance representations through a bidirectional LSTM and linear layer to get output probabilities.",
"BERT-LSTM is fine-tuned in an end-to-end manner.",
"We first establish two baselines.",
"RANDOMNOTE randomly and uniformly samples a SOAP note from the training set and outputs it as the summary for any input conversation.",
"ORACLEEXT presents all the ground truth noteworthy utterances (evidence) from the conversation as the SOAP note without any abstractive summarization.",
"Thus, the ORACLEEXT baseline has the advantage of containing all the desired information (e.g., names of medicines) from the conversation, but the disadvantage of not being expressed in the linguistic style of a SOAP note which leads to lower n-gram overlap.",
"The opposite is true for the RANDOMNOTE baseline.",
"Both baselines give similar performance and are outperformed by the simple CONV 2N OTE approach (Table 2).",
"We train the abstractive modules for the 4 approaches described in Section 4 with the ground truth noteworthy utterances as inputs.",
"To estimate an upper bound on the performance we can reasonably hope to achieve by improving our noteworthy utterance extractors, we test our models with oracle noteworthy utterances in the test set.",
"All algorithms relying on oracle noteworthy utterances outperform CONV 2N OTE , and exhibit a monotonic and signifi-cant rise in ROUGE scores as we move towards the extraction-heavy end of the spectrum (Table 3) 3 .",
"For predicting noteworthy utterances, we use two baselines:",
"(i) logistic regression on TF-IDF utterance representations; and",
"(ii) a model with a bidirectional LSTM to compute token-averaged utterance representations, followed by a linear classification layer.",
"These two models make the predictions for each utterance independent of others.",
"In contrast, we also use models which incorporate context from neighboring utterances:",
"(a) a hierarchical LSTM; and",
"(b) a BERT-LSTM model as described in Section 5.",
"The latter two methods perform much better (Table 5), demonstrating the benefit of incorporating neighboring context, with BERT-LSTM performing the best (see Appendix Table A6 for section-wise performance).",
"Using predicted noteworthy utterances and clusters instead of oracle ones leads to a drop in ROUGE scores, but the performance of EXT 2S EC and CLUSTER 2S ENT is still better than CONV 2N OTE (Table 2).",
"For the medical dataset, using a BERT-LSTM extractor leads to the best performance, with CLUSTER 2S ENT outperforming CONV 2N OTE by about 8 points in ROUGE-1 (see Appendix Table A5 for section-wise performance).",
"Interestingly, the T5-Small variant achieves similar performance to T5-Base, despite being only about a quarter of the latter's size.",
"Performance on AMI dataset We see a similar trend in the ROUGE scores when applying these methods on the AMI dataset.",
"One exception is the poor performance of pointer-generator based EXT 2N OTE , which excessively repeated sentences despite using a high coverage loss coef-ficient.",
"There is a larger gap between the performance of the T5-Small and T5-Base abstractive models on this dataset.",
"As an extractor, the performance of BERT-LSTM is again better than HLSTM (Table 5), but when used in tandem with the abstractive module, ROUGE scores achieved by the overall pipeline do not always follow the same order.",
"We also observe that the clustering heuristic does not work as well on this dataset.",
"Specifically, tuning the thresholds of the extractive model, while fixing the clustering threshold gave worse results on this dataset.",
"Tuning the thresholds independent 3 The character -' represents GPU memory overflow of the clusters performed better.",
"Performance with ASR errors In the absence of human-generated transcripts of conversations, Automatic Speech Recognition (ASR) techniques can be used to transcribe the conversations for use by our models.",
"To account for ASR errors, we ar-tificially added errors in transcripts of the medical dataset by randomly selecting some percentage of the words and replacing them with phonetically similar words using RefinedSoundEx (Commons) (details in the Appendix).",
"Models trained on clean dataset perform worse on a 10% corrupted test dataset (Table 4).",
"Since ASR errors lead to replacement of a correct word by only a small set of phonetically similar words, there is still some information indicating the original word that can be used by the models.",
"When we train our models on data corrupted at the 10% ASR error rate, our models recover much of the performance drop (Table 4).",
"Notably when simulated ASR errors are dialed up to a 30% error rate, (both at train and test time) we see a smaller performance drop for CLUSTER 2S ENT as compared to CONV 2N OTE .",
"The conditioned pointer-generator and T5 models used in CLUSTER 2S ENT learn to place information regarding different topics in appropriate sections.",
"Hence, given a cluster of supporting utterances, the models can generate different summaries for multiple sections (Figure 2).",
"For example, given the same supporting utterances discussing the pa-tient's usage of lisinopril for low blood pressure, a model generates low blood pressure in the review of systems section, and lisinopril in medications section.",
"We direct the reader to the appendix for examples of full-length generated SOAP notes.",
"Interestingly, when the abstractive model is given a cluster of utterances that are not relevant to the section being generated, the model sometimes outputs fabricated information relevant to that section such as saying the patient is a non-smoker in social history , or that the patient has taken a flu shot in immunizations .",
"Hence, the quality of produced summaries heavily depends on the ability of the extractive step to classify the extracted utterances to the correct section.",
"Another cause of false information is the usage of pronouns in clusters without a mention of the referred entity.",
"In such Medical dataset AMI corpus Method R-1 R-2 R-L R-1 R-2 R-L RANDOMNOTE 34.99 12.69 21.37 42.47 11.55 21.47 ORACLEEXT 33.07 12.22 17.42 39.97 11.17 20.91 CONV 2N OTE (PG) 49.56 25.68 32.87 39.62 13.16 23.95 EXT 2N OTE (PG + HLSTM) 49.58 24.91 31.68 21.28 7.06 15.96 EXT 2N OTE (PG + BLSTM) 50.50 25.4 31.93 21.71 6.83 15.69 EXT 2N OTE (T5-Small + HLSTM) --40.48 13.82 24.64 EXT 2N OTE (T5-Small + BLSTM) --40.36 13.73 24.13 EXT 2S EC (PG + HLSTM) 55.23 27.14 35.15 43.75 15.25 23.46 EXT 2S EC (PG + BLSTM) 55.74 27.54 36.09 40.48 15.61 23.31 EXT 2S EC (T5-Small + HLSTM) 55.77 28.64 37.50 42.45 15.20 23.92 EXT 2S EC (T5-Small + BLSTM) 56.00 29.16 38.38 45.44 16.59 26.14 CLUSTER 2S ENT (PG + HLSTM) 55.46 27.41 35.81 46.19 16.64 24.29 CLUSTER 2S ENT (PG + BLSTM) 55.60 27.68 36.29 42.31 15.92 23.51 CLUSTER 2S ENT (T5-Small + HLSTM) 56.88 28.63 36.78 45.10 15.06 23.52 CLUSTER 2S ENT (T5-Small + BLSTM) 57.14 29.11 37.43 42.38 15.36 23.9 CLUSTER 2S ENT (T5-Base + HLSTM) 57.27 29.10 37.38 50.52 17.56 24.89 CLUSTER 2S ENT (T5-Base + BLSTM) 57.51 29.56 38.06 45.91 17.70 25.24 Table 2: ROUGE scores achieved by different methods on the two datasets Cluster of utterances Subsection Summary-PG Summary-T5 DR That one thing that we can do to reduce risk with that cholesterol is 100 mg metoprolol.",
"Occasionally, the abstractive module produces new inferred information that is not mentioned explicitly in the conversation.",
"In one instance, the model generated that the patient has a history of heart disease conditioned on a cluster that mentioned he/she takes digoxin , a popular medicine for heart disease.",
"Similarly, the model can infer past medical history of high cholesterol upon seeing pravastatin usage.",
"Such inferences can also lead to incorrect summaries, e.g., when a doctor explained that a patient has leaky heart valves, a model added a sentence to the diagnostics and appointments section saying check valves.",
"CLUSTER 2S ENT summarizes localized regions Medical dataset AMI corpus Method PG T5-Small PG T5-Small EXT 2N OTE 52.95 -20.44 41.10 EXT 2S EC 61.00 62.37 43.32 46.85 CLUSTER 2S ENT 63.63 66.50 51.86 54.23 Table 3: ROUGE-1 achieved on test set when using the abstractive models with oracle noteworthy utterances and clusters (more results with oracle in the Appendix) Method R-1 R-2 R-L Train on clean data + Test on data with 10% error rate CONV 2N OTE (PG) 46.52 22.60 30.45 CLUSTER 2S ENT (PG + BLS) 51.84 23.74 32.94 CLUSTER 2S ENT (T5-Base+ BLS) 54.88 26.65 35.88 Train and test on data with 10% error rate CONV 2N OTE (PG) 48.85 24.85 31.27 CLUSTER 2S ENT (PG + BLS) 54.68 26.59 35.70 CLUSTER 2S ENT (T5-Base+ BLS) 56.35 28.50 37.04 Train and test on data with 30% error rate CONV 2N OTE (PG) 45.16 22.26 30.14 CLUSTER 2S ENT (PG + BLS) 53.69 25.88 35.12 CLUSTER 2S ENT (T5-Base+ BLS) 55.90 27.73 36.06 Table 4: Performance of models trained and tested on data with different simulated ASR error rates.",
"In one visit, the patient was asked about chest pain twiceonce in the beginning to get to know his/her current state, and once as a question about how he/she felt just before experiencing a fall in the past.",
"This led to the model generating both that the patient denied chest pain as well as confirmed chest pain, without clarifying that one statement was for the present and another for the past.",
"We asked trained human annotators to evaluate generated SOAP notes for 45 conversations.",
"Every sentence in each SOAP note was labeled according to various quality dimensions such whether it was factually correct, incoherent, irrelevant, redundant, or placed under an inappropriate section.",
"The detailed statistics of annotations received for each quality dimension are provided in the Appendix.",
"We also collected aggregate annotations for the comprehensiveness of each SOAP note and the extent to which it verbatim copied the transcript on a 5-point Likert scale.",
"Human raters were presented with a web in-Medical conversations AMI corpus Metric LR LS HLS BLS HLS BLS Accuracy 96.0 96.1 96.5 96.5 93.77 94.16 Ma-AUC 78.1 79.3 90.0 90.5 83.81 90.76 Ma-F1 29.5 31.0 38.6 40.9 19.95 33.08 Mi-AUC 87.3 87.6 92.7 93.3 93.21 94.90 Mi-F1 31.2 32.9 39.6 41.1 43.76 49.93 Table 5: Performance on multilabel classification of noteworthy utterances with logistic regres-sion(LR), LSTM(LS), Hierarchical-LSTM(HLS) and BERT-LSTM(BLS).",
"The summaries generated by three methods (CONV 2N OTE (pointer-generator), CLUSTER 2S ENT (pointer-generator) and CLUSTER 2S ENT (T5-base)) were presented in random order to hide their identities.",
"For each sentence, we asked for",
"(i) Factual correctness of the sentence;",
"(ii) If the statement is simply repeating what has already been mentioned before;",
"(iii) If the statement is clinically irrelevant;",
"(iv) If the statement is incoherent (not understandable due to grammatical or semantic errors); and",
"(v) If the state-ment's topic does not match the section in which it is placed.",
"In addition, we asked two separate questions for rating the overall summary on a scale of 1-5 for its",
"(i) comprehensiveness and",
"(ii) extent of verbatim copying from conversation.",
"The human evaluation of the SOAP notes was done by workers who had also participated in the creation of the dataset of SOAP notes.",
"Hence, they had already been extensively trained in the task of SOAP note creation, which gave them appropriate knowledge to judge the SOAP notes.",
"To quantify the performance among different methods, we consider a scenario where each generated SOAP note has to be post-edited by discarding undesirable sentences.",
"For a generated SOAP note, we define its yield as the fraction of its total sentences that are not discarded.",
"The sentences that are retained are those that are both factually correct and were not labeled as either repetitive or incoherent.",
"The human annotations show that both CLUSTER 2S ENT -based methods tested produced a higher yield than the CONV 2N OTE baseline (p < 0 . 02 ).",
"T5-base performs better than conditioned pointer-generator as the abstractive module in CLUSTER 2S ENT setting, producing significantly more yield (Table 6).",
"T5 also produces fewer inco-Medical conversations AMI corpus Metric C2N C2S-P C2S-T C2N C2S-P C2S-T Length 21.2 28.2 28.4 20.7 17.9 19.05 %Yield 62.0 69.0 74.7 27.22 30.22 59.45 Comp 2.44 2.42 2.76 2.30 2.55 3.75 Copy 2.18 2.64 2.76 1.80 1.80 1.90 Table 6: Averages of different metrics for CONV 2N OTE (C2N), CLUSTER 2S ENT with pointer-generator (C2S-P) and T5-base (C2S-T).",
"herent sentences (Appendix Table A4) likely due to its exposure to a large number of well-formed coherent sentences during pretraining.",
"We conducted an analogous human evaluation of summaries generated for all 20 conversations in the test set of the AMI corpus, and saw a similar trend in the expected yield for different methods.",
"Notably, for the AMI corpus, CONV 2N OTE produced a very high proportion of redundant sentences ( > 0 . 5 ) despite using the coverage loss, while the pointer-generator based CLUSTER 2S ENT produced a high proportion of incoherent sentences (Appendix Table A4).",
"This paper represents the first attempt at generating full-length SOAP notes by summarizing transcripts of doctor-patient conversations.",
"We proposed a spectrum of extractive-abstractive summarization methods that leverage:",
"(i) section-structured form of the SOAP notes and",
"(ii) linked conversation utterances associated with every SOAP note sentence.",
"The proposed methods perform better than a fully abstractive approach and standard extractive-abstractive approaches that do not take advantage of these annotations.",
"We demonstrate the wider applicability of proposed approaches by showing similar results on the public AMI corpus which has similar annotations and structure.",
"Our work demonstrates the benefits of creating section-structured summaries (when feasible) and collecting evidence for each summary sentence when creating any new summarization dataset.",
"The methods proposed in this work to generate SOAP notes involve neural models that sometimes generate factually incorrect text (Maynez et al., 2020).",
"The detection and correction of such factual errors in automatically generated summaries is an active area of research (Cao et al., 2018; Zhang et al., 2020; Dong et al., 2020).",
"We emphasize that the methods are intended to be used with supervision from a medical practitioner who can check for factual errors and edit the the generated SOAP note if needed.",
"We have estimated the frequency of such factual errors (Appendix Table A4) and characterized multiple types of errors seen in generated SOAP notes in Section 7, for which the medical practitioners should remain vigilant.",
"For example, there is a bias to incorrectly generate information that occur frequently in specific sections (e.g. pa-tient took flu shot), and to replace pronouns with frequently seen entities (such as lisinopril for references to medicine).",
"All data used in this study was manually de-identified before we accessed it.",
"Deploying the proposed methods does not require long-term storage of conversations.",
"After the corresponding SOAP notes are generated, conversations can be discarded.",
"Hence, we do not anticipate any additional privacy risks from using the proposed methods.",
"This work was funded by the Center for Machine Learning and Health in a joint venture between UPMC and Carnegie Mellon University.",
"We gratefully acknowledge support from Abridge AI, Inc. for creating the dataset of SOAP notes and providing human resources for evaluation."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"other",
"abstain",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech.",
"One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as recent past frames.",
"The basic hypothesis of these approaches is that hidden states that can accurately predict future frames are a useful representation for many downstream tasks.",
"In this paper we extend this hypothesis and aim to enrich the information encoded in the hidden states by training the model to make more accurate future predictions.",
"We propose an auxiliary objective that serves as a regularization to improve generalization of the future frame prediction task.",
"Experimental results on phonetic classification, speech recognition, and speech translation not only support the hypothesis, but also demonstrate the effectiveness of our approach in learning representations that contain richer phonetic content.",
"Unsupervised speech representation learning, which aims to learn a function that transforms surface features, such as audio waveforms or spectrograms, to higher-level representations using only unlabeled speech, has received great attention recently (Baevski et al., 2019, 2020; Liu et al., 2020; Song et al., 2019; Jiang et al., 2019; Schneider et al., 2019; Chorowski et al., 2019; Pascual et al., 2019; Oord et al., 2018; Kamper, 2019; Chen et al., 2018; Chung and Glass, 2018; Chung et al., 2018; Milde and Biemann, 2018; Chung et al., 2016; Hsu et al., 2017).",
"A large portion of these approaches leverage self-supervised training, where the learning target is generated from the input itself, and thus can train a model in a supervised manner.",
"Chung et al. (2019) propose a method called Autoregressive Predictive Coding (APC), which trains an RNN to predict a future frame that is n steps ahead of the current position given a context such as the past frames.",
"The training target can be easily generated by right-shifting the input by n steps.",
"Their intuition is that the model is required to produce a good summarization of the past and encode such knowledge in the hidden states so as to accomplish the objective.",
"After training, the RNN hidden states are taken as the learned representations, and are shown to contain speech information such as phonetic and speaker content that are useful in a variety of speech tasks (Chung and Glass, 2020).",
"Following their intuition, in this work we aim to improve the generalization of the future frame prediction task by adding an auxiliary objective that serves as a regularization.",
"We empirically demonstrate the effectiveness of our approach in making more accurate future predictions, and confirm such improvement leads to a representation that contains richer phonetic content.",
"The rest of the paper is organized as follows.",
"We start with a brief review of APC in Section",
"2. We then introduce our approach in Section",
"3. Experiments and analysis are presented in Section 4, followed by our conclusions in Section 5.",
"Given a context of a speech signal represented as a sequence of acoustic feature vectors ( x 1 , x 2 , . . . , x t ) , the objective of Autoregressive Predictive Coding (APC) is to use the context to infer a future frame x t + n that is n steps ahead of x t .",
"Let x = ( x 1 , x 2 , . . . , x N ) denote a full utterance, where N is the sequence length, APC incorporates an RNN to process each frame x t sequentially and update its hidden state h t accordingly.",
"For t = 1 , . . . , N n , the RNN produces Figure 1: Overview of our method.",
"an output y t = W h t , where W is an affin-ity matrix that maps h t back to the dimensionality of x t .",
"The model is trained by minimizing the frame-wise L1 loss between the predicted sequence ( y 1 , y 2 , . . . , y N n ) and the target sequence ( x 1+ n , x 2+ n , . . . , x N ) : L f ( x ) = N n (cid:88) t =1 | x t + n y t | .",
"When n = 1 , one can view APC as an acoustic version of neural LM (NLM) (Mikolov et al., 2010) by thinking of each acoustic frame as a token embedding, as they both use a recurrent encoder and aim to predict information about the future.",
"A major difference between NLM and APC is that NLM infers tokens from a closed set, while APC predicts frames of real values.",
"Once an APC model is trained, given an utterance ( x 1 , x 2 , . . . , x N ) , we follow Chung et al. (2019) and take the output of the last RNN layer ( h 1 , h 2 , . . . , h N ) as its extracted features.",
"Our goal is to make APC's prediction of x t + n given h t more accurate.",
"In Section 4 we will show this leads to a representation that contains richer phonetic content.",
"An overview of our method is depicted in Figure 1.",
"We propose an auxiliary loss L r to improve the generalization of the main objective L f (Equation 1).",
"The idea of L r is to refresh the current hidden state h t with the knowledge learned in the past.",
"At time step t , we first sample a past sequence p t = ( x t s , x t s +1 , . . . , x t s + (cid:96) 1 ) , where s is how far the start of this sequence is from t and (cid:96) is the length of p t .",
"We then employ an auxiliary RNN, denoted as RNN aux , to perform predictive coding defined in Equation 1 conditioning on h t .",
"Specifically, we initialize the hidden state of RNN aux with h t , and optimize it along with the corresponding W aux using L f ( p t ) , which equals to (cid:80) t s + (cid:96) 1 t (cid:48) = t s | x t (cid:48) + n y t (cid:48) | .",
"Such a process reminds h t of what has been learned in h t s , h t s +1 , . . . , h t s + (cid:96) 1 .",
"For a training utterance x = ( x 1 , x 2 , . . . , x N ) , we select each frame with probability P as an anchor position.",
"Assume we end up with M anchor positions: a 1 , a 2 , . . . , a M .",
"Each a m defines a sequence p a m = ( x a m s , x a m s +1 , . . . , x a m s + (cid:96) 1 ) before x a m , which we use to compute L f ( p a m ) .",
"Averaging over all anchor positions gives the final auxiliary loss L r : L r ( x ) = 1 MM (cid:88) m =1 L f ( p a m ) .",
"The final APC objective combines Equations 1 and 2 with a balancing coefficient :",
"We re-sample the anchor positions for each x during each training iteration, while they all share the same RNN aux and W aux .",
"We demonstrate the effectiveness of L r in helping optimize L f , and investigate how the improvement is reflected in the learned representations.",
"We follow Chung et al. (2019) and use the audio portion of the LibriSpeech (Panayotov et al., 2015) train-clean-360 subset, which contains 360 hours of read speech produced by 921 speakers, for training APC.",
"The input features are 80-dimensional log Mel spectrograms, i.e., x t R 80 .",
"Both RNN and RNN aux are a 3-layer, 512-dim unidirectional GRU (Cho et al., 2014) network with residual connections between two consecutive layers (Wu et al., 2016).",
"Therefore, W , W aux R 512 80 .",
"is set to 0.1 and the sampling probability P is set to 0.15, that is, each frame has a 15% of chance to be selected as an anchor position.",
"P and are selected based on the validation loss of L f on a small data split.",
"All models are trained for 100 epochs using Adam (Kingma and Ba, 2015) with a batch size of 32 and a learning rate of 10 3 .",
"We first validate whether augmenting L r improves L f .",
"As a recap, n is the number of time steps ahead of the current position t in L f , and s and (cid:96) denote the start and length, respectively, of a past sequence before t to build L r .",
"We consider ( n, s, (cid:96) ) { 1 , 3 , 5 , 7 , 9 } { 7 , 14 , 20 } { 3 , 7 } .",
"Note that each phone has an average duration of about 7 frames.",
"Figures 2a and 2b present L r (before multiplying ) and L f of the considered APC variants on the LibriSpeech dev-clean subset, respectively.",
"Each bar of the same color represents one ( s, (cid:96) ) combination.",
"We use ( , ) to denote an APC optimized only with L f .",
"Bars are grouped by their n 's with different ( s, (cid:96) ) combinations within each group.",
"We start with analyzing Figure 2a.",
"Note that L r does not exist for ( , ) and is set to 0 in the figure.",
"We see that under the same n , the performance of L r is mainly decided by how far ( s ) the past sequence is from the current position rather than the length ( (cid:96) ) to generate: when we keep (cid:96) fixed and increase s from 7 (red), 14 (green), to 20 (blue), we observe the loss surges as well.",
"For a small n , the improvement in L f brought by L r is minor.",
"By comparing ( , ) with other bars, we see that when n 3 , which is smaller than half of the average phone duration (7 frames), adding L r does not lower L f by much.",
"We speculate that when n 3 , x t + n to be inferred is usually within the same phone as x t , making the task not challenging enough to force the model to leverage more past information.",
"L r becomes useful when n gets larger.",
"We see that when n is close to or exceeds the average phone duration ( n 5 ), an evident reduction in L f after adding L r is observed, which validates the effectiveness of L r in assisting with the optimization of L f .",
"When n = 9 , the improvement is not as large as when n = 5 or 7 .",
"One possible explanation is that x t +9 has become almost independent from the previous context h t and hence is less predictable.",
"By observing the validation loss, we have shown that L r indeed helps generalize L f .",
"Next, we want to examine whether an improvement in L f leads to a representation that encodes more useful information.",
"Speech signals encompass a rich set of acoustic and linguistic properties.",
"Here Feature Time shift -15 -10 -5 0 +5 +10 +15 log Mel 83.3 80.3 67.6 49.9 65.5 77.9 82.7 APC trained with L f (Equation 1) n = 1 56.1 45.8 36.1 33.7 56.5 73.7 81.6 n = 3 50.8 41.8 34.8 33.4 56.0 73.5 81.1 n = 5 48.7 38.2 32.5 31.9 54.8 73.0 80.5 n = 7 44.6 38.6 32.9 32.1 56.3 73.8 80.4 n = 9 51.0 41.8 35.7 36.9 58.4 74.6 81.0 APC trained with L m (Equation 3) n = 1 50.6 42.2 35.1 33.1 54.4 73.4 81.4 n = 3 46.4 38.0 34.1 32.4 54.1 71.4 80.5 n = 5 41.8 35.1 29.8 28.1 49.6 64.6 76.8 n = 7 39.8 33.8 28.7 27.8 46.8 60.6 74.4 n = 9 42.3 35.3 30.3 29.7 50.0 63.3 76.6 Table 1: Phonetic classification results using different types of features as input to a linear logistic regression classifier.",
"we will only focus on analyzing the phonetic content contained in a representation, and leave other properties such as speaker for future work.",
"We use phonetic classification on TIMIT (Garo-folo et al., 1993) as the probing task to analyze the learned representations.",
"The corpus contains 3696, 400, and 192 utterances in the train, validation, and test sets, respectively.",
"For each n { 1 , 3 , 5 , 7 , 9 } , we pick the ( s, (cid:96) ) combination that has the lowest validation loss.",
"As described in Section 2, we take the output of the last RNN layer as the extracted features, and provide them to a linear logistic regression classifier that aims to correctly classify each frame into one of the 48 phone categories.",
"During evaluation, we follow the protocol (Lee and Hon, 1989) and collapse the prediction to 39 categories.",
"We report frame error rate (FER) on the test set, which indicates how much phonetic content is contained in the representations.",
"We also conduct experiments for the task of predicting x t w and x t + w given x t for w { 5 , 10 , 15 } .",
"This examines how contextualized h t is, that is, how much information about the past and future is encoded in the current feature h t .",
"We simply shift the labels in the dataset by { 5 , 10 , 15 } and retrain the classifier.",
"We keep the pre-trained APC RNN fixed for all runs.",
"Results are shown in Table 1.",
"We emphasize that our hyperparameters are cho-sen based on L f and are never selected based on their performance on any downstream task, including phonetic classification, speech recognition, and speech translation to be presented next.",
"Tuning hyperparameters towards a downstream task defeats the purpose of unsupervised learning.",
"Phonetic classification We first study the standard phonetic classification results, shown in the column where time shift is 0.",
"We see that APC features, regardless of the objective ( L f or L m ), achieve lower FER than log Mel features, showing that the phonetic information contained in the surface features has been transformed into a more accessible form (defined as how linearly separable they are).",
"Additionally, we see that APC features learned by L m outperform those learned by L f across all n .",
"For n 5 where there is a noticeable improvement in future prediction after adding L r as shown in Figure 2b, their improvement in phonetic classification is also larger than when n 3 .",
"Such an outcome suggests that APC models that are better at predicting the future do learn representations that contain richer phonetic content.",
"It is also interesting that when using L f , the best result occurs at n = 5 (31.9); while with L m , it is when n = 7 that achieves the lowest FER (27.8).",
"Predicting the past or future We see that it is harder to predict the nearby phone identities from a log Mel frame, and the FER gets higher further away from the center frame.",
"An APC feature h t contains more information about its past than its future.",
"The result matches our intuition as the RNN generates h t conditioning on h i for i < t and thus their information are naturally encoded in h t .",
"Furthermore, we observe a consistent improvement in both directions by changing L f to L m across all n and time shifts.",
"This confirms the use of L r , which requires the current hidden state h t to recall what has been learned in previous hidden states, so more information about the past is encoded in h t .",
"The improvement also suggests that an RNN can forget the past information when training only with L f , and adding L r alleviates such problem.",
"The above phonetic classification experiments are meant for analyzing the phonetic properties of a representation.",
"Finally, we apply the representations learned by L m to automatic speech recognition (ASR) and speech translation (ST) and show their superiority over those learned by L f .",
"We follow the exact setup in Chung and Glass (2020).",
"For ASR, we use the Wall Street Journal corpus (Paul and Baker, 1992), use si284 for training, and report the word error rate (WER) on dev93 .",
"For ST, we use the LibriSpeech En-Fr corpus (Ko-cabiyikoglu et al., 2018), which aims to translate an English speech to a French text, and report the BLEU score (Papineni et al., 2002).",
"For both tasks, the downstream model is an end-to-end, sequence-to-sequence RNN with attention (Chorowski et al., 2015).",
"We compare different input features to the same model.",
"Results, shown in Table 2, demonstrate that the improvement in predictive coding brought by L r not only provides representations that contain richer phonetic content, but are also useful in real-world speech applications.",
"1 Feature ASR (WER ) ST (BLEU ) log Mel 18.3 12.9 APC w/ L f 15.2 13.8 APC w/ L m 14.2 14.5 Table 2: Automatic speech recognition (ASR) and speech translation (ST) results using different types of features as input to a seq2seq with attention model.",
"We improve the generalization of Autoregressive Predictive Coding by multi-target training of fu-1",
"fu-1 According to Chung and Glass (2020), when using a Transformer architecture (Vaswani et al., 2017; Liu et al., 2018) as the autoregressive model, representations learned with L f can achieve a WER of 13.7 on ASR and a BLEU score of 14.3 on ST.",
"ture prediction L f and past memory reconstruc-tion L r , where the latter serves as a regularization.",
"Through phonetic classification, we find the representations learned with our approach contain richer phonetic content than the original representations, and achieve better performance on speech recognition and speech translation."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result"
] |
[
"Understanding natural language requires common sense, one aspect of which is the ability to discern the plausibility of events.",
"While distributional modelsmost recently pre-trained, Transformer language models have demonstrated improvements in modeling event plausibility, their performance still falls short of humans'.",
"In this work, we show that Transformer-based plausibility models are markedly inconsistent across the conceptual classes of a lexical hierarchy, inferring that a person breathing is plausible while a dentist breathing is not, for example.",
"We find this inconsistency persists even when models are softly injected with lexical knowledge, and we present a simple post-hoc method of forcing model consistency that improves correlation with human plausibility judgements.",
"Of the following events, a human reader can easily discern that (1) and (2) are semantically plausible, while (3) is nonsensical.",
"(1) The person breathes the air.",
"(2) The dentist breathes the helium.",
"(3) The thought breathes the car.",
"This ability is required for understanding natural language: specifically, modeling selectional preference the semantic plausibility of predicate-argument structuresis known to be implicit in discriminative tasks such as coreference resolution (Hobbs, 1978; Dagan and Itai, 1990; Zhang et al., 2019b), word sense disambiguation (Resnik, 1997; McCarthy and Carroll, 2003), textual entailment (Zanzotto et al., 2006; Pantel et al., 2007), and semantic role labeling (Gildea and Jurafsky, 2002; Zapirain et al., 2013).",
"More broadly, modeling semantic plausibility is a necessary component of generative inferences 0.43 0.53 0.53 0.06 0.98 0.42 clothing shirt p e r s on w o r k e r c h e f A chef knits clothing.",
"Is it plausible an [X] knits a [Y]?",
"such as conditional commonsense inference (Gor-don et al., 2011; Zhang et al., 2017), abductive commonsense reasoning (Bhagavatula et al., 2020), and commonsense knowledge acquisition (Zhang et al., 2020a; Hwang et al., 2020).",
"Learning to model semantic plausibility is a difficult problem for several reasons.",
"First, language is sparse, so most events will not be attested even in a large corpus.",
"Second, plausibility relates to likelihood in the world, which is distinct from the likelihood of an event occurring in language.",
"Third, plausibility reflects human intuition, and thus modeling plausibility at its extreme requires the entire representational arsenal that people use in understanding language, ranging from social mores to naive physics (Resnik, 1996).",
"A key property of plausibility is that the plausibility of an event is generally consistent across some appropriate level of abstraction.",
"For example, events of the conceptual form the [ PERSON ] breathes the [ GAS ] are consistently plausible.",
"Plausibility judgments follow this pattern because people understand that similar concept classes share similar affordances.",
"Furthermore, the change in plausibility between levels of abstraction is often consistent.",
"Consider that as we abstract from person breathes to organism breathes to entity breathes, plausibility consistently decreases.",
"In this paper, we investigate whether state-of-the-art plausibility models based on fine-tuning Transformer language models likewise exhibit these types of consistency.",
"As we will show, inconsistency is a significant issue in existing models which results in erroneous predictions (See Figure 1 for an example).",
"To address this issue, we explore two methods that endow Transformer-based plausibility models with knowledge of a lexical hierarchyour hypothesis being that these methods might correct conceptual inconsistency without over-generalizing.",
"The first method makes no a priori assumptions as to how the model should generalize and simply provides lexical knowledge as an additional input to the model.",
"The second explicitly enforces conceptual consistency across a lexical hierarchy by taking the plausibility of an event to be a maximum over the plausibility of all conceptual abstractions of the event.",
"We find that only the second proposed method sufficiently biases the model to more accurately correlate with human plausibility judgments.",
"This finding encourages future work that forces Transformer models to make more discrete abstractions in order to better model plausibility.",
"We focus our analysis on simple events in English represented as subject-verb-object (s-v-o) triples, and we evaluate models by correlation with two datasets of human plausibility judgements.",
"Our models build off of RoBERTa (Liu et al., 2019), a pre-trained Transformer masked language model.",
"1 We use WordNet 3.1 (Miller, 1995) hypernymy relations as a lexical hierarchy.",
"Concretely, our contributions are: We evaluate the state of the art in modeling plausibility, both in terms of correlation with human judgements and consistency across a lexical hierarchy.",
"We propose two measures of the consistency of plausibility estimates across conceptual abstractions.",
"1 Our implementation and data is available at https://github.com/ianporada/modeling_event_plausibility We show that injecting lexical knowledge into a plausibility model does not overcome conceptual inconsistency.",
"We present a post-hoc method of generalizing plausibility estimates over a lexical hierarchy that is necessarily consistent and improves correlation with human plausibility judgements.",
"While plausibility is difficult to define precisely, we adopt the following useful distinctions from the literature:",
"Plausibility is a matter of degree (Wilks, 1975; Resnik, 1993).",
"We therefore evaluate models by their ability to estimate the relative plausibility of events.",
"Plausibility describes non-surprisal conditioned on some context (Resnik, 1993; Gordon et al., 2011).",
"For example, conditioned on the event breathing, it is less surprising to learn that the agent is a dentist than a thought and thus more plausible.",
"Plausibility is dictated by likelihood of occurrence in the world rather than text (Zhang et al., 2017; Wang et al., 2018).",
"This discrepancy is due to reporting biasthe fact that people do not state the obvious (Gordon and Van Durme, 2013; Shwartz and Choi, 2020); e.g., a person dying is more likely to be attested than a person breathing (Figure 2).",
"static word embeddings lack the world knowledge needed for modeling plausibility.",
"The state of the art is to take the conditional probability of co-occurrence as estimated by a distributional model as an approximation of event plausibility (Zhang et al., 2020a).",
"Our fine-tuned RoBERTa baseline follows this approach.",
"Similar in spirit to our work, He et al. (2020) extend this baseline method by creating additional training data using the Probase taxonomy (Wu et al., 2012) in order to improve conceptual generalization; specifically, for each training example they swap the event's arguments with its hypernym or hyponym, and they take this new, perturbed example to be an implausible event.",
"There is also recent work focusing on monotonic inferences in semantic entailment (Yanaka et al., 2019; Goodwin et al., 2020; Geiger et al., 2020).",
"Plausibility contrasts with entailment in that plausibility is not strictly monotonic with respect to hypernymy/hyponymy relations: the plausibility of an entity is not sufficient to infer the plausibility of its hyponyms (i.e., not downward entailing: it is plausible that a person gives birth but not that a man gives birth) nor hypernyms (i.e., not upward entailing: it is plausible that a baby fits inside a shoebox but not that a person does).",
"Non-monotonic inferences have recently been explored in the context of defeasible reasoning (Rudinger et al., 2020): inferences that may be strengthened or weakened given additional evidence.",
"The change in plausibility between an event and its abstraction can be formulated as a type of defeasible inference, and our findings may contribute to future work in this area.",
"Modeling the plausibility of single events is also studied in the context of selectional preference the semantic preference of a predicate for taking an argument as a particular dependency relation (Evens, 1975; Resnik, 1993; Erk et al., 2010); e.g., the relative preference of the verb breathe for the noun dentist as its nominal subject.",
"Models of selectional preference are sometimes evaluated by correlation with human judgements ( Saghdha, 2010; Zhang et al., 2019a).",
"The primary distinction between such evaluations and those of semantic plausibility, as in our work, is that evaluations of semantic plausibility emphasize the importance of correctly modeling atypical yet plausible events (Wang et al., 2018).",
"Closely related to our work are models of selectional preference that use the WordNet hierarchy to generalize co-occurrence probabilities over concepts.",
"These include the work of Resnik (1993), related WordNet-based models (Li and Abe, 1998; Clark and Weir, 2002), and a more recent experiment by Saghdha and Korhonen (2012) to combine distributional models with WordNet.",
"Notably, these methods make a discrete decision as to the right level of abstractionif the most preferred subject of breathe is found to be person, for example, then all hyponyms of person will be assigned the same selectional preference score.",
"Our second proposed method can be thought of as finding the right level of abstraction at which to infer plausibility.",
"This problem has been broadly explored by existing work.",
"Van Durme et al. (2009) extract abstracted commonsense knowledge from text using WordNet, obtaining inferences such as A [ PERSON ] can breathe.",
"They achieve this by first extracting factoids and then greedily taking the WordNet synset that dominates the occurrences of factoids to be the appropriate abstraction.",
"Gong et al. (2016) similarly abstract a verb's arguments into a set of prototypical concepts using Probase and a branch-and-bound algorithm.",
"For a given verb and argument position, their algorithm finds a small set of concepts that has high coverage of all nouns occurring in said position.",
"Conceptual abstractions are captured to some extent in pre-trained language models' representations (Ravichander et al., 2020; Weir et al., 2020).",
"Given a vocabulary of subjects S , verbs V , and objects O , let an event be represented by the s-v-o triple e S V O .",
"We take g to be a ground-truth, total ordering of events expressed by the ordering function g ( e ) > g ( e (cid:48) ) iff e is more plausible than e (cid:48) .",
"Our objective is to learn a model f : S V O R that is monotonic with respect to g , i.e., g ( e ) > g ( e (cid:48) ) = f ( e ) > f ( e (cid:48) ) .",
"This simplification follows from previous work (Wang et al., 2018), and the plausibility score for a given triple can be considered the relative plausibility of the respective event across all contexts and realizations.",
"While meaning is sensitive to small linguistic perturbations, we are interested in cases where one event is more plausible than another marginalized over context.",
"Consider that person-breathe-air is more plausible than thought-breathe-car regardless of the choice of determiners or tense of the verb.",
"In practice, we would like to learn f without supervised training data, as collecting a sufficiently large dataset of human judgements is prohibitively expensive (Zhang et al., 2020b), and supervised models often learn dataset-specific correlations (Levy et al., 2015; Gururangan et al., 2018; Poliak et al., 2018; McCoy et al., 2019).",
"Therefore, we train model f with distant supervision and evaluate by correlation with human ratings of plausibility which represent the ground-truth ordering g .",
"We define C to be the set of concepts in a lexical hierarchy, in our case synsets in WordNet, with some root concept c (1) C .",
"The hypernym chain of concept c ( h ) C at depth h in the lexical hierarchy is defined to be the sequence of concepts ( c ( h ) ) = ( c (1) , c (2) , . . . , c ( h ) ) where i, c ( i ) is a direct hypernym of c ( i +1) .",
"A lexical hierarchy may be an acyclic graph in which case concepts can have multiple hypernyms, and it follows that there may be multiple hypernym chains to the root.",
"In this case, we take the hypernym chain ( c ( h ) ) to be the shortest such chain.",
"Based on our intuition as to how we expect plausibility estimates to be consistent across abstractions in a hypernym chain, we propose two quantitative metrics of inconsistency , Concavity Delta (CC ) and Local Extremum Rate (LER).",
"These metrics provide insight into the degree to which a model's estimates are inconsistent.",
"For a given event, as we traverse up the hypernym chain to higher conceptual abstractions, we expect plausibility to increase until we reach some maximally appropriate level of abstraction, and then decrease thereafter.",
"In other words, we expect that consistent estimates will be concave across a sequence of abstractions.",
"For example, in the sequence of abstractions penguin flies bird flies animal flies, plausibility first increases and then decreases.",
"Our intuition is that plausibility increases as we approach the most appropriate level of abstraction, then decreases beyond this level.",
"A concave sequence is defined to be a sequence ( a 1 , a 2 , a 3 , ... ) where i, 2 a i > a i 1 + a i +1 .",
"Let a i 1 , a i , and a i +1 be the plausibility estimates for three sequential abstractions of an event.",
"We define the divergence from concavity to be = (cid:40) 12 ( a i 1 + a i +1 ) a i 2 a i < a i 1 + a i +1 0 otherwise We then define the Concavity Delta , CC , to be the average across all triples of conceptually sequential estimates.",
"Ideally, a model's estimates should have low CC .",
"A higher CC reflects the extent to which models violate our intuition.",
"LER simply describes how often a conceptual abstraction is a local extremum in terms of its plausibility estimate.",
"Most often, the change in plausibility between sequential abstractions is consistently in the same direction.",
"For example, from bird flies animal flies organism flies, plausibility consistently decreases.",
"The majority of abstractions will not be the most appropriate level of abstraction and therefore not a local extremum.",
"As in 3.2.1, we consider all triples of conceptually sequential estimates of the form a i 1 , a i , and a i +1 .",
"Formally, LER is the number of triples where a i > max ( a i 1 , a i +1 ) or a i < min ( a i 1 , a i +1 ) divided by the total number of triples.",
"A high LER signifies that plausibility estimates have few monotonic subsequences across abstractions.",
"Therefore, a more consistent model should have a lower LER.",
"There are, of course, exceptions to our intuition, and this metric is most insightful when it varies greatly between models.",
"The models that we consider are all of the same general form.",
"They take as input an event and output a relative plausibility score.",
"Our proposed models are structured on top of a RoBERTa baseline.",
"We use RoBERTa in the standard sequence classification framework.",
"We format an event in the raw form as [CLS] subject verb object [SEP]' where the s-v-o triple is tokenized [CLS] dentist [SEP] breathe helium CONCEPTINJECT adult.n.01 person.n.01 gas.n.02 =0 =1 =2 =3 =4 =1 =2 RoBERTa [CLS] dentist breathe helium [SEP] [CLS] adult breathe helium [SEP] [CLS] person breathe gas [SEP] RoBERTa RoBERTa RoBERTa LogSumExp CONCEPTMAX Figure 3: Left: The general formulation of CONCEPTINJECT ; this model takes as input an event and the full hypernym chains of each argument.",
"using a byte pair encoding.",
"2 These tokens are used as input to a pre-trained RoBERTa model, and a linear layer is learned during fine-tuning to project the final-layer [CLS] token representation to a single logit which is passed through a sigmoid to obtain the final output, f ( e ) .",
"CONCEPTINJECTCONCEPTINJECT is an extension of the existing state-of-the-art plausibility models.",
"This model takes as input, in addition to an event, the hypernym chains of the synsets corresponding to each argument in the event.",
"We propose this model to explore how injecting simple awareness of a lexical hierarchy affects estimates.",
"CONCEPTINJECT is similar in principle to Onto-LSTM (Dasigi et al., 2017), which provides the entire hypernym chains of nouns as input to an LSTM for selectional preference, and also similar to K-BERT (Liu et al., 2020), which injects knowledge into BERT during fine-tuning by including relations as additional tokens in the input.",
"K-BERT has demonstrated improved performance over Chinese BERT on several NLP tasks.",
"to RoBERTa for each synset c C .",
"We initialize the embedding of c as the average embedding of the sub-tokens of c 's lemma.",
"3 We refer to RoBERTa's positional embedding matrix as the x -position and randomly initialize a second positional embedding matrix, the y -position.",
"The model input format follows that used for RoBERTa (4.1), with the critical distinction that we also include the tokens for the hypernyms of the subject and object as additional input.",
"For the subject s , we first disambiguate the synset c of s using BERT-WSD (Yap et al., 2020).",
"Then for each hypernym c ( i ) in the hypernym chain ( c ) , the token of c ( i ) is included in the model input: this token takes the same x -position as the first sub-token of s and takes its y -position to be i , the depth in the lexical hierarchy.",
"Finally, the x -position, y -position, and token embedding are summed for each token to compute its initial representation (Figure 3).",
"The hypernyms of the object are included by the same procedure.",
"Non-synset tokens have a y position of zero.",
"CONCEPTINJECT thus sees an event and the full hypernym chains of the arguments when computing a plausibility score.",
"3 We refer to the name of a synset as the synset's lemma, e.g. the lemma of the synset [dog.n.01] is taken to be dog.",
"For synsets that correspond to multiple lemmas, we randomly sample one.",
"CONCEPTMAXCONCEPTMAX is a simple post-hoc addition to the vanilla RoBERTa model (4.1).",
"We compute a score for all abstractions of an event e and take the final plausibility f ( e ) to be a soft maximum of these scores.",
"This method is inspired by that of Resnik (1993) which takes selectional preference to be a hard maximum of some plausibility measure over concepts.",
"Again, we use BERT-WSD to disambiguate the synset of the subject, c ( h ) s , and the synset of the object, c ( l ) o .",
"Using RoBERTa as in 4.1, we then compute a plausibility score for every triple of the form ( c ( i ) s , v, c ( j ) o ) where c ( i ) s and c ( j ) o are hypernyms in the hypernym chains ( c ( h ) s ) and ( c ( l ) o ) , respectively.",
"Synsets are represented by their lemma when used as input to RoBERTa.",
"Finally, we take the LogSumExp, a soft maximum, of these scores to be the ultimate output of the model (Figure 3).",
"During training, we sample only three of the abstractions ( c ( i ) s , v, c ( j ) o ) to reduce time complexity.",
"Thus we only need to compute four total scores instead of h l .",
"At inference time, we calculate plausibility with a hard maximum over all triples.",
"RoBERTa Zero-shot We use MLConjug 4 to realize an s-v-o triple in natural language with the determiner the for both the subject and object, and the verb conjugated in the indicative, third person tense; e.g., person-breathe-air The person breathes the air.",
"We first mask both the subject and object to compute P ( o | v ) , then mask just the subject to compute P ( s | v, o ) .",
"Finally we calculate f ( e ) = P ( s, o | v ) = P ( s | v, o ) P ( o | v ) .",
"In the case that a noun corresponds to multiple tokens, we mask all tokens and take the probability of the noun to be the geometric mean of its token probabilities.",
"GloVe+MLP The selectional preference model of Van de Cruys (2014) initialized with GloVe embeddings (Pennington et al., 2014).",
"n-gram A simple baseline that estimates P ( s, o | v ) by occurrence counts.",
"We use a bigram model as we found trigrams to correlate less with human judgments.",
"Models are all trained with the same objective to discriminate plausible events from less plausible ones.",
"Given a training set D of event pairs ( e, e (cid:48) ) where e is more plausible than e (cid:48) , we minimize the binary cross-entropy loss L = (cid:88) ( e,e (cid:48) ) D log( f ( e )) + log(1 f ( e (cid:48) )) (2) In practice, D is created without supervised labels.",
"For each ( e, e (cid:48) ) D , e is an event attested in a corpus with subject s , verb v , and object o .",
"e (cid:48) is a random perturbation of e uniformly of the form ( s (cid:48) , v, o ) , ( s, v, o (cid:48) ) , or ( s (cid:48) , v, o (cid:48) ) where s (cid:48) and o (cid:48) are arguments randomly sampled from the training corpus by occurrence frequency.",
"This is a standard pseudo-disambiguation objective.",
"Our training procedure follows recent works that learn plausibility models with self-supervised fine-tuning (Kocijan et al., 2019; He et al., 2020; Zhang et al., 2020a).",
"For the models that use WordNet, we use a filtered set of synsets: we remove synsets with a depth less than 4, as these are too broad to provide useful generalizations (Van Durme et al., 2009).",
"We also filter out synsets whose corresponding lemma did not appear in the training corpus.",
"The WordNet models also require sense disambiguation.",
"We use the raw triple as input to BERT-WSD (Yap et al., 2020) which outputs a probability distribution over senses.",
"We take the argmax to be the correct sense.",
"We train all models with gradient descent using an Adam optimizer, a learning rate of 2e-5, and a batch size of 128.",
"We train for two epochs over the entire training set of examples with a linear warm-up of the learning rate over the first 10,000 iterations.",
"Fine-tuning RoBERTa takes five hours on a single Nvidia V100 32GB GPU.",
"Fine-tuning CONCEPTINJECT takes 12 hours and CONCEPTMAX 24 hours.",
"We use English Wikipedia to construct the self-supervised training data.",
"As a relatively clean, definitional corpus, plausibility models trained on Wikipedia have been shown to correlate with human judgements better than those trained on similarly sized corpora (Zhang et al., 2019a; Porada et al., 2019).",
"We parse a dump of English Wikipedia using the Stanford neural dependency parser (Qi et al., 2018).",
"For each sentence with a direct object, no indirect object, and noun arguments (that are not proper nouns), we extract a training example ( s, v, o ) : we take s and o to be the lemma of the head of the respective relations ( nsubj and obj ), and v to be the lemma of the head of the root verb.",
"This results in some false positives such as the sentence The woman eats a hot dog. being extracted to the triple woman-eat-dog (Table 1).",
"We filter out triples that occur less than once and those where a word occurred less than 1,000 times in its respective position.",
"We do not extract the same triple more than 1,000 times so as not to over-sample common events.",
"In total, we extract 3,298,396 triples (representing 538,877 unique events).",
"We evaluate models by their correlation with human plausibility judgements.",
"Each dataset consists of events that have been manually labelled to be plausible or implausible (Table 3).",
"We use AUC (area under the receiver-operating-characteristic curve) as an evaluation metric which intuitively reflects the ability of a model to discriminate a plausible event from an implausible one.",
"These datasets contain plausible events that are both typical and atypical.",
"While a distributional model should be able to discriminate typical events given that they frequently occur in text, discriminating atypical events (such as dentist-breathe-helium ) is more difficult.",
"KPEP -3 K , the crowdsourced P hysical E vent P lausbility ratings of Wang et al. (2018), consists of 3,062 events rated as physically plausible or implausible by five crowdsourced workers.",
"Annotators were instructed to ignore possible metaphorical meanings of an event.",
"We divide the dataset Topic Question Answer cat Does it lay eggs?",
"To evaluate on this dataset, we make the assump-tion that all events labeled physically plausible are necessarily more plausible than all those labeled physically implausible.",
"The 20 Questions commonsense dataset 5 is a collection of 20 Questions style games played by crowdsourced workers.",
"We format this dataset as plausibility judgments of s-v-o triples similar to PEP -3 K .",
"In the game 20 Questions, there are two players one who knows a given topic, and the other who is trying to guess this topic by asking questions that have a discrete answer.",
"The dataset thus consists of triples of topics, questions, and answers where the answer is one of: always, usually, sometimes, rarely, or never (Table 2).",
"We parse the dataset using the Stanford neural dependency parser (Qi et al., 2018).",
"We then extract questions that contain a simple s-v-o triple 5 https://github.com/allenai/ twentyquestions Model PEP -3 K 20Q Avg.",
"with no modifiers where either the subject or object is a third person singular pronoun.",
"We replace this pronoun with the topic, and otherwise replace any occurrence of a personal pronoun with the word person.",
"We filter out examples where only two of three annotators labelled the likelihood as never.",
"Finally, we take events labelled never to be less plausible than all other events.",
"This process results in 5,096 examples equally divided between plausible and implausible.",
"We split examples into equal sized validation and test sets.",
"Despite making a discrete decision about the right level of abstraction, CONCEPTMAX has higher AUC on both evaluation sets as compared to CONCEPTINJECT and the vanilla RoBERTa baseline (Table 4).",
"The fact that the CONCEPTMAX model aligns with human judgments more than the baselines supports the hypothesis that conceptual consistency improves plausibility estimates.",
"CONCEPTINJECT performs similarly to the RoBERTa baseline even though this model is aware of the WordNet hierarchy.",
"We hypothesize that the self-supervised learning signal does not incentivize use of this hierarchical information in a way that would increase correlation with plausibility judgements.",
"We do find that CONCEPTINJECT attends to the hypernym chain, however, by qualitatively observing the self-attention weights.",
"All fine-tuned RoBERTa models correlate better with plausibility judgements than the RoBERTa Zero-shot baseline, and the n-gram baseline performs close to randomthis is perhaps to be expected, as very few of the evaluation triples occur in our Wikipedia training data.",
"To better understand the performance of these models, we manually inspect 100 examples from each dataset.",
"We find that RoBERTa rarely assigns a high score to a nonsensical event (although this does occur in five cases, such as turtle-climb-wind and person-throw-library ).",
"RoBERTa also rarely assigns a low score to a seemingly typical event, although this is somewhat more common (in cases such as kid-use-handbag and basket-hold-clothes , for example).",
"This finding confirms our expectation that discerning the typical and nonsensical should be relatively easy for a distributional model.",
"Examples not at the extremes of plausibility are harder to categorize; however, one common failure seems to be when the plausibility of an event hinges on the relative size of the subject and object, such as in the case of dog-throw-whale .",
"This finding is similar to the limitations of static word embeddings observed by Wang et al. (2018).",
"For every event e in the evaluation sets of human plausibility judgments (6), we disambiguate e using BERT-WSD and then calculate models' estimates for the plausibility of every possible abstraction of e (Figure 4).",
"Based on these estimates, we can analyze the consistency of each model across abstractions.",
"We use our proposed metrics of consistency (3.2) to evaluate the extent to which models' estimates are consistent across a hypernym chain (Table 5).",
"RoBERTa Zero-shot , which correlates with plausibility the least of the RoBERTa models, has by far the highest inconsistency.",
"The fine-tuned RoBERTa and CONCEPTINJECT estimates are also largely inconsistent by our metrics.",
"For these models, half of all estimates are a local extrema in the lexical hierarchy.",
"As shown in Figure 4, the space of plausibility estimates is rigid for these models, and most estimates are a local extremum with respect to the plausibility of the subject or object of the event.",
"CONCEPTMAX is almost entirely consistent by these metrics, which is to be expected as this model makes use of the same WordNet hierarchy that we are using for evaluation.",
"We also evaluated consistency using the longest rather than the shortest hypernym chain in WordNet, but did not find a significant change in results.",
"This is likely because for the consistency evaluation we are using the hypernym chains that have been filtered as described in 3.1.",
"We qualitatively evaluate the consistency of models by observing the matrix of plausibility estimates for all abstractions as show in Figure 4.",
"In agreement with our quantitative metrics, we observe that RoBERTa estimates are often inconsistent in that they vary greatly between two abstractions that have similar plausibility.",
"Surprisingly, however, it is also often the case that RoBERTa estimates are similar or identical between abstractions.",
"In some cases, this may be the result of the model being invariant to the subject or object of a given event.",
"We also observe the individual examples with the highest CC .",
"In these cases, it does appear that the variance of model estimates is unreasonable.",
"In contrast, LER is sometimes high for an example where the estimates are reasonably consistent.",
"This is a limitation of the LER metric not taking into account the degree of change between estimates.",
"Finally, we observe that the BERT-WSD sense is often different from what an annotator primed to rate plausibility would assume.",
"For example, in the case of dog-cook-turkey , BERT-WSD takes dog to be a hyponym of person.",
"While this is reasonable in context, it results in a different plausibility than that annotated.",
"While the state of the art in modeling plausibility has improved in recent years, models still fall short of human ability.",
"We show that model estimates are inconsistent with respect to a lexical hierarchy: they correlate less with human judgments as compared to model estimates that are forced to be consistent, and they do not satisfy our intuitively defined quantitative measures of consistency.",
"In addition, we show that simply injecting lexical knowledge into a model is not sufficient to correct this limitation.",
"Conceptual consistency appears to require a more discrete, hierarchical bias.",
"Interesting questions for future work are: 1) can we design a non-monotonic , consistent model of plausibility that better correlates with human judgements?",
"2) Can we induce a hierarchy of abstractions rather than using a manually created lexical hierarchy?",
"We would like to thank Ali Emami and Abhilasha Ravichander for useful discussions and comments, as well as the anonymous reviewers for their insightful feedback.",
"This work is supported by funding from Microsoft Research and resources from Compute Canada.",
"The last author is supported by the Canada CIFAR AI Chair program."
] | [
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"objective",
"objective",
"other",
"result",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"method",
"other",
"other",
"other"
] |
[
"Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions.",
"A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited.",
"To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM).",
"Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention, which can be used as type labels.",
"Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially.",
"We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.",
"Fine-grained entity typing (Ling and Weld, 2012) has been long studied in the natural language processing community as the extracted type information is useful for downstream tasks such as entity linking (Ling et al., 2015; Onoe and Durrett, 2020), relation extraction (Koch et al., 2014), coreference resolution (Onoe and Durrett, 2020), etc.",
"Recently, ultra-fine entity typing (Choi et al., 2018) extends the effort to using a richer set of types (e.g., person , actor , company , victim ) to label noun phrases including not only named entity mentions, but also pronouns and nominal nouns.",
"This task directly uses type words or phrases as tags.",
"Its tag set can contain more than 10,000 types.",
"A challenge is that with the large type set, it is extremely difficult and time-consuming for humans to annotate samples.",
"As a result, most existing works use weak labels that are automatically generated (Ling and Weld, 2012; Choi et al., 2018; Lee et al., 2020).",
"There are two main approaches to obtaining weakly labeled training examples.",
"One approach is to find the Wikipedia pages that correspond to entity mentions, which can be done by using hyper-links to Wikipedia or applying entity linking.",
"Then the entity types can be obtained from knowledge bases.",
"The other approach is to directly use the head words of nominal mentions as ultra-fine type labels.",
"For example, if a nominal mention is a famous actor, then the head word actor can be used as its type label.",
"Several problems exist when using these weak labels for the ultra-fine typing task.",
"First, in the dataset created by Choi et al. (2018), on average there are fewer than two labels (types) for each sample annotated through either entity linking or head word supervision.",
"On the other hand, a human annotated sample has on average 5.4 labels.",
"As a result, models trained from the automatically obtained labels have a low recall.",
"Second, neither of the above approaches can create a large number of training samples for pronoun mentions.",
"Third, it is difficult to obtain types that are highly dependent on the context.",
"For example, in I met the movie star Leonardo DiCaprio on the plane to L.A., the type passenger is correct for Leonardo DiCaprio.",
"However, this type cannot be obtained by linking to knowledge bases.",
"In this paper, to alleviate the problems above, we propose an approach that combines hypernym extraction patterns (Hearst, 1992; Seitner et al., 2016) with a masked language model (MLM), such as BERT (Devlin et al., 2019), to generate weak labels for ultra-fine entity typing.",
"Given a sentence that contains a mention, our approach adds a short piece of text that contains a [MASK] token into it Input Top Words for [MASK] In late 2015, [MASK] such as Leonardo DiCaprio starred in The Revenant.",
"to construct an input to BERT.",
"Then, the pretrained MLM will predict the hypernyms of the mention as the most probable words for [MASK].",
"These words can then be used as type labels.",
"For example, consider the first example in Table",
"1. The original sentence is In late 2015, Leonardo DiCaprio starred in The Revenant.",
"We construct an input for the BERT MLM by inserting [MASK] such as before the mention Leonardo DiCaprio.",
"With this input, the pretrained BERT MLM predicts actors, stars, actor, directors, and filmmakers as the five most probable words for [MASK].",
"Most of them are correct types for the mention after singularization.",
"This approach can generate labels for different kinds of mentions, including named entity mentions, pronoun mentions, and nominal mentions.",
"Another advantage is that it can produce labels that needs to be inferred from the context.",
"This allows us to generate more context-dependent labels for each mention, such as passenger , patient , etc.",
"Then, we propose a method to select from the results obtained through different hypernym extraction patterns to improve the quality of the weak labels.",
"We also use a weighted loss function to make better use of the generated labels for model training.",
"Finally, we adopt a self-training step to further improve the performance of the model.",
"We evaluate our approach with the dataset created by Choi et al. (2018), which to the best of our knowledge, is the only English ultra-fine entity typing dataset currently available.",
"On this dataset, we achieve more than 4% absolute F1 improvement over the previously reported best result.",
"Additionally, we also apply our approach to a traditional fine-grained entity typing dataset: Ontonotes (Gillick et al., 2014), where it also yields better performance than the state of the art.",
"Our contributions are summarized as follows.",
"We propose a new way to generate weak labels for ultra-fine entity typing.",
"We propose an approach to make use of the newly obtained weak labels to improve entity typing results.",
"We conduct experiments on both an ultra-fine entity typing dataset and a traditional fine-grained entity typing dataset to verify the effectiveness of our method.",
"The ultra-fine entity typing task proposed by Choi et al. (2018) uses a large, open type vocabulary to achieve better type coverage than the traditional fine-grained entity typing task (Ling and Weld, 2012) that uses manually designed entity type ontologies.",
"There are only limited studies on this newly proposed task: A neural model introduced by (Onoe and Durrett, 2019) filters samples that are too noisy to be used and relabels the remaining samples to get cleaner labels.",
"A graph propagation layer is introduced by (Xiong et al., 2019) to impose a label-relational bias on entity typing models, so as to implicitly capture type dependencies.",
"Onoe et al. (2021) use box embeddings to capture latent type hierarchies.",
"There is also some work on the applications of ultra-fine entity typing: Onoe and Durrett (2020) apply ultra-fine entity typing to learn entity representations for two downstream tasks: coreference arc prediction and named entity disambiguation.",
"The traditional fine-grained entity typing task (Ling and Weld, 2012; Yosef et al., 2012) is closely related to ultra-fine entity typing.",
"Automatic annotation (Ling and Weld, 2012; Gillick et al., 2014; Dai et al., 2020) is also commonly used in the studies of this task to produce large size training data.",
"Many different approaches have been proposed to improve fine-grained entity typing performance.",
"For example, denoising the automatically generated labels (Ren et al., 2016), taking advantage of the entity type hierarchies or type inter-dependencies (Chen et al., 2020; Murty et al., 2018; Lin and Ji, 2019), exploiting external resources such as the information of entities provided in knowledge bases (Jin et al., 2019; Dai et al., 2019; Xin et al., 2018), etc.",
"Our work is also related to recent studies (Petroni et al., 2019; Jiang et al., 2020; Zhang et al., 2020) that probe pretrained language models to obtain knowledge or results for target tasks.",
"Different from them, we use the predictions produced by BERT as intermediate results that are regarded as weak supervision to train better models.",
"(Zhang et al., 2020) also uses Hearst patterns to probe masked language models.",
"However, they target at the entity set expansion task.",
"Our methodology consists of two main steps.",
"First, we obtain weak ultra-fine entity typing labels from a BERT masked language model.",
"Second, we use the generated labels in model training to learn better ultra-fine entity typing models.",
"Given a sentence and a mention of interest in the sentence, our goal is to derive the hypernym or the type of the mention using a BERT MLM.",
"To do this, we insert into the sentence a few tokens to create an artificial Hearst pattern (Hearst, 1992).",
"One of the inserted tokens is a special [MASK] token, which serves as the placeholder of the hypernym of the mention.",
"As the BERT MLM predicts the [MASK] token, we derive the hypernyms of the mention.",
"Consider the first sentence in Table 1 as an example: In late 2015, Leonardo DiCaprio starred in The Revenant.",
"To find the hypernym or the type of Leonardo DiCaprio, we insert three tokens to create a such as pattern: In late 2015, [MASK] such as Leonardo DiCaprio starred in The Revenant.",
"Applying the BERT MLM on the sentence, we derive hypernyms such as actors, Pattern F1 M and any other H 25.3 M and some other H 24.8 H such as M 20.7 such H as M 18.1 H including M 17.4 H especially M 11.5 Table 2: Hypernym extraction patterns.",
"We consider the 63 Hearst-like patterns (Hearst, 1992) presented in (Seitner et al., 2016) that express a hypernym-hypnonym relationship between two terms.",
"Table 2 lists some of the patterns, wherein H and M denote a hypernym and a hyponym, respectively.",
"For example, M and some other H can be used to match Microsoft and some other companies.",
"The general procedure to use these patterns to create input samples for BERT MLM and obtain labels from its predictions is as follows.",
"We first regard the mention as M .",
"Then, we insert the rest of the pattern either before or after the mention, and we replace H with the special [MASK] token.",
"After applying the BERT MLM on sentences with artificial Hearst patterns, we derive top k type labels from the prediction for [MASK].",
"To drive these k labels, we first sigularize the most probable words that are plural.",
"Then, remove those that are not in the type vocabulary of the dataset.",
"Finally, use the most probable k different words as k labels.",
"For example, if we want to obtain 3 labels, and the most probable words are people, actors, celebrities, famous, actor, etc.",
"Then the 3 labels should be person , actor , celebrity .",
"Because actor is the singluar form of actors, and famous is not in the type vocabulary.",
"We show the performance of our method for obtaining 10 type labels for each mention with different patterns in Table",
"2. A pre-trained BERT-Base-Cased MLM is used to obtain the results 1 .",
"For nominal mentions, directly applying the patterns that starts with M with the above procedure 1 We use the pretrained model provided in the Transformers library.",
"We also tried using BERT-Large and RoBERTa models.",
"However, they do not yield better performance.",
"may sometimes be problematic.",
"For example, consider the noun phrase the factory in Thailand as a mention.",
"If we use the M and some other H pattern and insert and other [MASK] after the mention, the BERT MLM will predict the type country for Thailand instead of for the entire mention.",
"To avoid such errors, while applying patterns that starts with M for nominal mentions, we regard the head word of the mention as M instead.",
"A more subtle and challenging problem is that the quality of the type labels derived from different patterns for different mentions can be very different.",
"For example, for the mention He in sentence He has won some very tough elections and he's governor of the largest state, the pattern H such as M leads to person , election , time , thing , leader as the top five types.",
"But using the pattern M and any other H , we get candidate , politician , man , person , governor .",
"On the other hand, for mention the Al Merreikh Stadium in It was only Chaouchi's third cap during that unforgettable night in the Al Merreikh Stadium, the results of using H such as M (the top five types are venue , place , facility , location , area ) is better than using M and any other H (the top five types are venue , stadium , game , match , time ).",
"To address the above problem, we do not use a same pattern for all the mentions.",
"Instead, for each mention, we try to select the best pattern to apply from a list of patterns.",
"This is achieved by using a baseline ultra-fine entity typing model, BERT-Ultra-Pre , which is trained beforehand without using labels generated with our BERT MLM based approach.",
"Details of BERT-Ultra-Pre can be found in Section 5.2.",
"Denote the pattern list as L .",
"With each pattern in L , we can apply it on the given mention to derive a set of labels from the BERT MLM.",
"Then, we find the set of labels that have the most overlap with the labels predicted by BERT-Ultra-Pre .",
"Finally, the given mention is annotated with this set of labels.",
"It is not necessary to use all the patterns in (Seit-ner et al., 2016).",
"To construct L , the list of patterns used for annotation, we perform the following procedure.",
"Step 1: Initialize L to contain the best performing pattern (i.e., M and any other H ) only.",
"Step 2: From all the patterns not in L , find the one that may bring the greatest improvement in F1 score if it is added to L .",
"Step 3: Add the pattern found in Step 2 to the L if the improvement brought by it is larger than a threshold.",
"Step 4: Repeat steps 2-3 until no patterns can be added.",
"Discussion on Type Coverage Since we only use one [MASK] token to generate labels, the model cannot produce multi-word types (e.g., football player ) or single word types that are not present in the BERT MLM vocabulary.",
"The BERT MLM vocabulary covers about 92% of the labels in the human annotated dataset constructed by Choi et al. (2018).",
"Type coverage is a known issue with weak supervision, and is tolerable if the generated labels can be used to achieve our final goal: improving the performance of the ultra-fine entity typing model.",
"Our approach generates type labels for all three types of mentions: named entity mentions, pronoun mentions, and nominal mentions.",
"For named entity mentions and nominal mentions, existing automatic annotation approaches can already provide some labels for them by using the entity types in knowledge bases or using the head words as types (Ling and Weld, 2012; Choi et al., 2018).",
"Thus, we combine these labels with the labels generated by us.",
"For pronoun mentions, no other labels are used.",
"Besides the automatically annotated samples, we can also use a small amount of human annotated samples provided by the dataset for model training.",
"Our ultra-fine entity typing model follows the BERT-based model in (Onoe and Durrett, 2019).",
"Given a sentence that contains an entity mention, we form the sequence [CLS] sentence [SEP] mention string [SEP] as the input to BERT.",
"Then, de-noting the final hidden vector of the [CLS] token as u , we add a linear classification layer on top of u to model the probability of each type: p = ( W u ) , (1) where is the sigmoid function, W is a trainable weight matrix.",
"p R d , where d is the number of types used by the dataset.",
"We assign a type t to the mention if p t , its corresponding element in p , is larger than 0.5.",
"If no such types are found, we assign the one with the largest predicted probability to the mention.",
"To make use of the automatically labeled samples, some existing approaches mix them with high quality human annotated samples while training models (Choi et al., 2018; Onoe and Durrett, 2019).",
"However, we find that better performance can be obtained by pretraining the model on automatically labeled samples, then fine-tuning it on human annotated samples.",
"Following (Choi et al., 2018), we partition the whole type vocabulary used by the dataset into three non-overlapping sets: general, fine, and ultrafine types, denoted with T g , T f and T u , respectively.",
"Then, we use the following objective for training: J ( x ) = L ( x, T g ) 1 ( L , T g ) + L ( x, T f ) 1 ( L , T f ) + L ( x, T u ) 1 ( L , T u ) , (2) where x is a training sample; L denotes the set of type labels assigned to x through either human or automatic annotation.",
"The function 1 ( L , T ) equals 1 when a type in L is in set T and 0 otherwise.",
"This loss can avoid penalizing some false negative labels.",
"Unlike existing studies, we define the function L differently for human annotated samples and automatically labeled samples.",
"While pretraining with automatically labeled samples, the labels obtained through entity linking and head word supervision are usually of higher precision than those obtained through BERT MLM.",
"Thus, we propose to assign different weights in the training objective to the labels generated with different methods: L ( x, T ) = (cid:88) t T ( t )[ y t log( p t ) + (1 y t ) log(1 p t )] , (3) where y t equals to 1 if t is annotated as a type for x and 0 otherwise; p t is the probability of whether t should be assigned to x predicted by the model.",
"The value of ( t ) indicates how confident we are about the label t for x .",
"Specifically, it equals to a predefined constant value larger than 1 when t is a positive type for x obtained through entity linking or head word supervision, otherwise, it equals to",
"1. While fine-tuning with human annotated samples, we directly use the binary cross entropy loss: L ( x, T ) = (cid:88) t T [ y t log( p t )+(1 y t ) log(1 p t )] .",
"Denote the ultra-fine entity typing model obtained after pretraining on the automatically labeled data as h , and the model obtained after fine-tuning h with human annotated data as m .",
"A weakness of m is that at the fine-tuning stage, it is trained with only a small number of samples.",
"Thus, we employ self-training to remedy this problem.",
"By using m as a teacher model, our self-training step fine-tunes the model h again with a mixture of the samples from the automatically labeled data and the human annotated data.",
"This time, for the automatically annotated samples, we use pseudo labels generated based on the predictions of m instead of their original weak labels.",
"The newly fine-tuned model should perform better than m , and is used for evaluation.",
"Denote the set of human annotated samples as H , the set of automatically labeled samples as A .",
"The training objective at this step is JST = 1 | H | (cid:88) x HJ ( x ) + 1 | A | (cid:88) x ALST ( x ) , (5) where is a hyperparameter that controls the strength of the supervision from the automatically labeled data.",
"While computing loss for the samples in A , we only use the types that are very likely to be positive or negative.",
"For a sample x , let p t be the probability of it belonging to type t predicted by the model m .",
"We consider a type t very likely to be positive if p t is larger than a threshold P , or if t is a weak label of x and p t is larger than a smaller threshold P w .",
"Denote the set of such types as Y + ( x ) .",
"We consider a type t very likely to be negative if p t is smaller than 1 P .",
"Denote the set of such types as Y ( x ) .",
"Then we have: LST ( x ) = (cid:88) t Y + ( x ) log( p t ) (cid:88) t Y ( x ) log(1 p t ) .",
"Our approach to generating weak entity type labels with BERT MLM can also be applied to the",
"traditional fine-grained entity typing task.",
"Different from ultra-fine entity typing, traditional fine-grained entity typing uses a manually designed entity type ontology to annotate mentions.",
"The types in the ontology are organized in an hierarchical structure.",
"For example, the ontology used by the Ontonotes dataset contains 89 types including /organization, /organization/company, /person, /person/politician, etc.",
"On this dataset, our automatic annotation approach can mainly be helpful to generate better labels for nominal mentions.",
"We still use the same method described in Section 3.1 to create input for BERT MLM based on the given mention.",
"But with traditional fine-grained entity typing, most mentions are assigned only one type path (e.g., a company mention will only be assigned labels { /organization, /organiza-tion/company } , which includes all the types along the path of /organization/company).",
"Thus, while generating labels, we only use the most probable word predicted by the BERT MLM, which is mapped to the types used by the dataset if possible.",
"For example, the word company and its plural form are both mapped to /organization/company.",
"Such a mapping from free-form entity type words to the types used by the dataset can be created manually, which does not require much effort.",
"We mainly construct the mapping with two ways: 1) Check each type used by the dataset, and think of a few words that should belong to it, if possible.",
"For example, for the type /person/artist/author, corresponding words can be author, writer, etc. 2) Run the BERT MLM on a large number of inputs constructed with unannotated mentions, then try to map the words that are most frequently predicted as the most probable word to the entity type ontology.",
"Since only the most probable word predicted by the BERT MLM is used to produce labels, we also only use one hypernym relation pattern: M and any other H .",
"For traditional fine-grained entity typing, we use our approach to generate labels for mentions that are not previously annotated with other automatic annotation approaches.",
"While training, all the automatically labeled mentions are used together.",
"The typing model is the same as the model described in 3.3.",
"The binary cross entropy loss is directly employed as the training objective.",
"We conduct experiments on our primary task: ultrafine entity typing.",
"In addition, we evaluate the performance of our approach when applied to traditional fine-grained entity typing.",
"For ultra-fine entity typing, we use the dataset created by Choi et al. (2018).",
"It uses a type set that contains 10,331 types.",
"These types are partitioned into three categories: 9 general types, 121 fine-grained types, and 10,201 ultra-fine types.",
"There are 5,994 human annotated samples.",
"They are split into train/dev/test with ratio 1:1:1.",
"It also provides 5.2M samples weakly labeled through entity linking and 20M samples weakly labeled through head word supervision.",
"UFET (Choi et al., 2018).",
"This approach obtains the feature vector for classification by using a bi-LSTM, a character level CNN, and pretrained word embeddings.",
"LabelGCN (Xiong et al., 2019).",
"LabelGCN uses a graph propagation layer to capture label correlations.",
"LDET (Onoe and Durrett, 2019).",
"LDET learns a model that performs relabeling and sample filtering to the automatically labeled samples.",
"Their typing model, which employs ELMo embeddings and a bi-LSTM, is train with the denoised labels.",
"Box (Onoe et al., 2021).",
"Box represents entity types with box embeddings to capture latent type hierarchies.",
"We use the BERT-Base-Cased version of BERT for both weak label generation and the typing model in Section 3.3.",
"The hyperparameters are tuned through grid search using F1 on the dev set as criterion.",
"The value of ( t ) in Equation (3) is set to 5.0 for positive types obtained through entity linking or head word supervision.",
"in Equation (5) is set to 0.01.",
"P and P w in Section 3.4 are set to 0.9 and 0.7, respectively.",
"Our approach to generate labels through BERT MLM is applied to each weak sample provided in the original dataset.",
"In addition, we also use our approach to annotate about 3.7M pronoun mentions, which are extracted through string matching from the English Gigaword corpus Method P R F1 UFET 47.1 24.2 32.0 LabelGCN 50.3 29.2 36.9 LDET 51.5 33.0 40.2 Box 52.8 38.8 44.8 Ours 53.6 45.3 49.1 Table 3: Macro-averaged Precision, Recall, and F1 of different approaches on the test set .",
"(Parker et al., 2011).",
"We generate 10 types for each sample 2 .",
"With the procedure described in Sectiton 3.1, three hypernym extraction patterns are used while generating labels with BERT MLM: M and any other H , H such as M , M and some other H .",
"Specifically, adding H such as M and M and some other H improves the F1 score from 0.253 to 0.274, and from 0.274 to 0.279, respectively.",
"Adding any more patterns cannot improve the F1 score for more than 0.007.",
"Following existing work (Onoe et al., 2021; Onoe and Durrett, 2019), we evaluate the macro-averaged precision, recall, and F1 of different approaches on the manually annotated test set.",
"The results are in Table 3.",
"Our approach achieves the best F1 score.",
"It obtains more than 4% F1 score improvement over the existing best reported performance by Box in (Onoe et al., 2021).",
"This demonstrates the effectiveness of our approach.",
"For ablation study, we verify the effectiveness of the different techniques used in our full entity typing approach by evaluating the performance of the following variants: Ours (Single Pattern) only",
"2 The performance of the trained model is relatively insensitive with respect to the number of labels generated with MLM.",
"The difference between the F1 scores of the models trained using 10 and 15 generated types is less than 0.005.",
"uses one pattern: M and any other H ; Ours (Un-weighted Loss) removes the ( t ) term in Equation (3); Ours (No Self-train) does not perform the self-training step.",
"We also evaluate two baseline approaches: BERT-Ultra-Direct uses the same BERT based model described in Section 3.3, but is trained with only the human annotated training samples; BERT-Ultra-Pre also uses the same BERT based model, but is first pretrained with the existing automatically generated training samples in the dataset provided by Choi et al. (2018), then fine-tuned on the human annotated training data.",
"First, the benefit of using the labels generated through BERT MLM can be verified by comparing Ours (No Self-train) and BERT-Ultra-Pre.",
"Because the techniques employed in Ours (No Self-train) , including the use of multiple hypernym extraction patterns and the weighted loss, are both for better utilization of our automatic entity type label generation method.",
"The effectiveness of the use of multiple hypernym extraction patterns, the weighted loss, and the self-training step can be verified by comparing Ours with Ours (Single Pattern) , Ours (Un-weighted Loss) and Ours (No Self-train) , respectively.",
"Among them, self-training is most benefi-cial.",
"It is also interesting to see how our approach performs on different kinds of mentions.",
"Table 5 lists the performance of our full approach and two baseline systems on the three kinds of mentions in the dataset: named entity mention, pronoun mentions, and nominal mentions.",
"Our approach performs much better than BERT-Ultra-Pre on all three kinds of mentions.",
"The improvements in F1 on pronoun and nominal mentions are relatively more substantial.",
"Table 6 presents several ultra-fine entity typing examples, along with the human annotated labels, and the labels predicted by BERT-Ultra-Pre, BERT MLM, and our full approach.",
"In the first example, the label prisoner is a type that depends on the context, and is usually not assigned to humans in knowledge bases.",
"We think that since we can assign such labels to the training samples with our BERT MLM based approach, our Named Entity Pronoun Nominal Method P R F1 P R F1 P R F1 BERT-Ultra 58.1 45.1 50.8 52.9 42.9 47.4 47.4 26.9 34.3 BERT-Ultra-Pre 54.7 50.5 52.5 51.3 46.1 48.6 45.2 33.7 38.6 Ours 58.3 54.4 56.3 57.2 50.0 53.4 49.5 38.9 43.5 Table 5: Performance on named entity mentions, pronoun mentions, and nominal mentions, respectively.",
"The second and third examples demonstrate that our model may not only improve the recall by predicting more correct types, but also reduce incorrect predictions that do not fit the mention or the context well.",
"The Ontonotes dataset uses an ontology that contains 89 types to label entity mentions.",
"We use the version provided by Choi et al. (2018).",
"It includes 11,165 manually annotated mentions, which are split into a test set that contains 8,963 mentions, and a dev set that contain 2,202 mentions.",
"It also provides about 3.4M automatically labeled mentions.",
"Since existing annotations for named entity mentions may be more accurate than the annotations obtained through our approach, we only apply our method to label nominal mentions.",
"Applying the approach in Section 4, we create 1M new automatically labeled mentions with the head word supervision samples (such samples contain mostly nominal mentions) in the ultra-fine dataset.",
"They are used together with the originally provided 3.4M mentions to train the typing model.",
"On this dataset, we compare with the following approaches: UFET (Choi et al., 2018), LDET (Onoe and Durrett, 2019), DSAM (Hu et al., 2020), LTRFET (Lin and Ji, 2019), BERT-Direct .",
"Where BERT-Direct uses the same BERT based model as our approach, but trains with only the weak samples provided in the dataset.",
"LTRFET adopts a hybrid classification method to exploit type inter-dependency.",
"DSAM is a diversified semantic attention model with both mention-level attention and context-level attention.",
"For our approach and BERT-Direct, we still use the pretrained BERT-Base-Cased model for initialization.",
"Although a very large number of weakly labeled mentions are provided, not all of them are needed for training the models.",
"In our experiments, for both our approach and BERT-Direct, the performance does not increase after training on about 0.3M mentions.",
"We report strict accuracy, macro-averaged F1, and micro-averaged F1 (Ling and Weld, 2012).",
"The results are in Table 7.",
"As we can see, our approach also achieves the best performance on this dataset.",
"Comparing it with BERT-Direct demonstrates the benefit of the samples automatically labeled with BERT MLM.",
"However, less improvement is achieved on OntoNotes than on the ultra-fine entity typing Method Acc Macro F1 Micro F1 UFET 59.5 76.8 71.8 LTRFET 63.8 82.9 77.3 LDET 64.9 84.5 79.2 DSAM 66.06 83.07 78.19 BERT-Direct 63.25 80.84 75.90 Ours 67.44 85.44 80.35 Table 7: Performance of different approaches on Ontonotes.",
"dataset.",
"We think there are two main reasons.",
"First, OntoNotes uses a much smaller entity type set (89 types) than the ultra-fine entity typing dataset (10,331 types).",
"As a result, some finer grained types that can be produced by our approach become less beneficial.",
"Second, generating type labels that are highly dependent on the context (e.g., types like criminal , speaker ) is an advantage of our approach, and the ultra-fine entity typing dataset contains more such type labels.",
"In this work, we propose a new approach to automatically generate ultra-fine entity typing labels.",
"Given a sentence that contains a mention, we insert a hypernym extraction pattern with a [MASK] token in it, so that a pretrained BERT MLM may predict hypernyms of the mention for [MASK].",
"Multiple patterns are used to produce better labels for each mention.",
"We also propose to use a weighted loss and perform a self-training step to learn better entity typing models.",
"Experimental results show that our approach greatly outperforms state-of-the-art systems.",
"Additionally, we also apply our approach to traditional fine-grained entity typing, and verify its effectiveness with experiments.",
"This paper was supported by the NSFC Grant (No. U20B2053) from China, the Early Career Scheme (ECS, No. 26206717), the General Research Fund (GRF, No. 16211520), and the Research Impact Fund (RIF, No. R6020-19 and No. R6021-20) from the Research Grants Council (RGC) of Hong Kong, with special thanks to the WeChat-HKUST WHAT Lab on Artificial Intelligence Technology."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"method",
"result",
"method",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"objective",
"result",
"result",
"other"
] |
[
"In this paper, we study the problem of identifying the principals and accessories from the fact description with multiple defendants in a criminal case.",
"We treat the fact descriptions as narrative texts and the defendants as roles over the narrative story.",
"We propose to model the defendants with behavioral semantic information and statistical characteristics , then learning the importances of defendants within a learning-to-rank framework.",
"Experimental results on a real-world dataset demonstrate the behavior analysis can effectively model the defendants' impacts in a complex case.",
"In recent years, much previous work has focused on the building of legal assistant systems with different functions, e.g. searching relevant cases for a given query (Chen et al., 2013), predicting charge labels based on the fact description in a criminal case (Luo et al., 2017; Hu et al., 2018; Zhong et al., 2018), generating the interpretable court views from the fact descriptions (Ye et al., 2018).",
"Though having achieved promising results in this field, most of the work only studies the simple cases with only one defendant.",
"However, there exist lots of complex criminal cases in practice, which will involve multiple criminals.",
"In this work, we propose to study the identification of principals and accessories from the fact description in a criminal case .",
"The principal refers to a criminal who organizes and leads criminal groups to carry out criminal activities or plays a major role in joint crimes.",
"Correspondingly, we refer the accessory as the one who plays a secondary or auxiliary role.",
"As the illustration of our task in Fig. 1, given the fact description as well as a list indicates equal contribution.",
"of defendants, we expect to identify the principals and accessories from the defendants.",
"Since the fact descriptions in criminal cases are usually narrative texts which mostly record the criminal events, we treat the defendants in the fact descriptions as the narrative roles and the protagonists who have the greater impact will be identified as the principles.",
"Narrative comprehension has been studied in NLP for a long time.",
"The traditional method to measure the importances of roles is based on the roles' dispersion over the story (Karsdorp et al., 2012).",
"It supposes that comparing to less important roles, the roles with bigger impact are expected to appear at more places and are more evenly distributed over the story.",
"However, this assumption ignores actions of roles (denoted as behavioral semantic information ), which may be a key factor that estimates their impacts in legal-context scenarios.",
"In this paper, we propose to model a defendant from two perspectives of behavioral semantic information and statistical characteristics .",
"After that, we further learned to estimate their importances with a learning-to-rank framework (Joachims et al., 2007).",
"Our contributions in this paper can be summarized as: We are the first to identify principals and accessories from complex cases with multiple defendants based on the comprehension of a narrative fact description.",
"We treat the fact descriptions as narrative texts and the defendants as roles in a narrative story.",
"We propose to model a defendant with semantic information and statistical characteristics and estimate his importance within a learning-to-rank framework.",
"Our work is a task related to narrative comprehension.",
"There has recently been a upsurge in re-Fact Description: [ 10 ]#[Xie Junbo started to rob. With the consent of Qiu, they followed Zhang to the bridgehead of Zhatou Village in Daxi Tawn. Xie Junbo pushed Zhang to the ground and then stole a bag, which contained a Nokia mobile phone, more than 10 yuan in cash, and a ID card, a bank card and so on.] Defendants: [ , ]#[Xie Junbo, Qiu] Principal: [ ] # [Xie Junbo] Accessory: [ ] # [Qiu] Figure 1: An example of a case involving two defendants.",
"search in information extraction of narrative and story understanding.",
"Ouyang and McKeown (2015) presents a change-based model to capture the rise and fall of story characteristics within narrative.",
"Goh et al. (2012) proposes to identity the protagonist in fairy tales automatically with the aid of verbs.",
"Karsdorp et al. (2012) presents a method for extraction the cast from fictional texts and ranks the different cast members on a scale of importance to the story on the basis of their dispersion in the text.",
"However, it only considers the position information and ignores the behavioral semantic information.",
"Meanwhile, the task is related to the researches on legal assistant system.",
"Studies on the application of machine learning in the judicial field have been concentrated in the following directions: learning to predict charges for criminal cases given the fact descriptions (Luo et al., 2017; Jiang et al., 2018; Chao et al., 2019), identifying applicable articles for a given case (Liu and Liao, 2005), providing a tool for automated text summary of legal documents based on word frequency augmented with additional domain-specific knowledge (Polsley et al., 2016).",
"In addition, Ye et al. (2018) put forward a new task of COURT-VIEW-GEN that generates court view from the fact description.",
"But those studies do not involve complex cases involving multiple criminals.",
"Given the fact description f of a case and its defendants set d ( d 1 , d 2 , ..., d n ) , we expect to classify each d i as either a principal or an accessory.",
"A function F for scoring each defendant is learned by a ranking method and we regard its result as the probability of d i being a principal.",
"Note that this could be treated as a classification problem without loss of generality.",
"We consider two feature families when modeling defendant: behavioral semantic features (denoted as f semantic , including Activity Fragments ) and statistical characteristics (denoted as f statistical including Sentence Syntactic Complexity , Cooperation Mode and Order and Frequency ).",
"Activity Fragments : Sentences containing one's actions can reflect his impact in the case to a great extent.",
"Then, we select sentences by name for each defendant and filter out those without verbs.",
"We feed them and the total fact description into two bidirectional LSTMs (Schuster and Paliwal, 1997) for automatic semantic information extraction.",
"Next, we introduce match-lstm (Wang and Jiang, 2016) to fuse those two outputs to measure a defendant's impact on the case.",
"The output from total fact description corresponds to the hypothesis of match-lstm and that from activity fragments corresponds to the premise.",
"Finally, the output of the match-lstm is treated as the behavioral semantic features ( f semantic ) of a defendant.",
"Sentence Syntactic Complexity : The principal of a case is defined as a person who plays a major role in criminal activities.",
"Then, he may appear in more sentences and there may be more verbs related to him.",
"Accordingly, we utilize the syntactic complexity of sentences (Ouyang and McKeown, 2015) as an important feature.",
"Several statistical characteristics are considered to model the syntactic complexity, including the length of the sentence ( sentlength ), the length of its verb phrases ( vplength ), the depth of the sentence's parse tree ( sentdepth ) (Klein and Manning, 2003), the depth of the verb phrase's parse tree ( vpdepth ), average number of words ( avgwords ), average number of verds ( avgverbs ).",
"Cooperation Mode : Moreover, the protagonist is often the plotter of the story and it can be expressed as who is the planner of the case.",
"This information is often reflected by some verbs or con After contacting in advance, Shen Chunxi accompanied Yang Huijun to the Yu's lodge in Fangjia village, Chengbei street 2013 9 11 During the period from September to November 2017, the defendants Tao, Wu and Gao were premeditated Figure 2: Two example that reflects the defendants' cooperation mode in a case.",
"The word in red represents a master-slave relation between two defendants and the word in green represents equality relation.",
"Therefore, we could consider only defendant pairs ( A , B ) such that A plays a more important role than http:/wenshu.court.gov.cn B and label it 1 .",
"Accordingly, samples containing only principals or accessories are labelled 0 .",
"We get a total of 15312 criminal cases with more than two defendants and the percentage of cases involving only one principal is 67%.",
"Finally, 41342 paired samples are generated.",
"Summary statistics of the data are listed in Tab.",
"To verify the reliability and stability of the model, we perform 10-fold cross-validation in our dataset.",
"junctions and can be obtained by mining the cooperation mode (could be master-slave or equality relation) between defendants as shown in Fig. 2.",
"total cases cases@ 2 cases@ 3 cases@ 4 + 15312 7364 5016 2932 Table 1: Summary statistics of the data.",
"corpus of criminal cases.",
"Then we utilize the Stanford CoreNLP (Manning et al., 2014) to find out the conjunction or verb between two defendants.",
"Finally, it is mapped to a vector based on which set of cooperation mode it belongs to.",
"Order and Frequency : Finally, we propose two other potentially useful features.",
"One is the order of appearances of the defendants and the other is the number of occurrences .",
"We suppose the principal of a case is the plotter and naturally should appears in the fact description earlier.",
"Besides, defendant with more frequent occurrences probably has a greater impact on the case.",
"3.2 Ranking Model We utilize RankNet (Burges et al., 2005) to train our ranking model.",
"4.2 Settings The dimension of word embedding is 200 and dimension of hidden states in BiLstm is set to 256.",
"In addition, mini-batch size is set to 32 and the default learning rate of Adam (Kingma and Ba, 2014) is 1 e 3 .",
"4.3 Baselines Previous studies on the importance distinction of roles in narrative texts are mainly based on statistical features and we are merely exploring solutions to this new problem proposed by this paper.",
"We calculate scores for both f semantic and f statistical respectively and regard their weighted sum as the final probability of being a principal.",
"The scoring units are all composed of linear functions.",
"4 Experiments 4.1 Data Preparation Extensive experiments are conducted on a real world dataset obtained from Chinese government website to evaluate our method.",
"Our baselines are as follows: Frequency : A basic method in which retrieved items are ranked according to their number of occurrences.",
"Dispersion : The basic idea is that more important roles are expected to appear at more places in the story and are more eventually distributed over the story than less important roles (Karsdorp et al., 2012).",
"Frequency & Dispersion : We combine the two methods above as our third baseline.",
"al. (2018), we regard the paragraph start with our court identified that and end with the above facts as the fact description.",
"Burges et al. (2005) shows that training on ties makes little difference.",
"Model P macro R macro F macro Frequency 66.54 63.73 65.10 Dispersion 71.28 69.44 70.35 Frequency & Dispersion 74.15 72.34 73.23 Ours 80.36 79.18 79.77 Table 2: The performances of different role modeling methods.",
"1.",
"Tab.",
"2 presents the performances of different role modeling methods.",
"It can be seen that our model achieves a considerable improvement in P macro , R macro and F macro .",
"As shown in Fig. 3, the defendant in red is the mastermind of the case and should be judged to be the principal, despite his low appearances.",
"Position or frequency information does not effectively reflect the status of a role in such samples.",
"However, our method captures this information by the cooperation mode feature between Yin and Zhao , with the help of verb instructed .",
"2 1000 1000",
"Yin instructed Zhao to be a drug seller and sold two small packages of methamphetamine to Wang Peng at the price of 1000 yuan.",
"Zhao collected 1000 yuan of poison money.",
"We compare the performances of our two feature families to explore which one contributes more to the task and the result is shown in Tab.",
"4.",
"We find that the feature family f semantic achieve better performance in all the evaluation metrics.",
"And its results are even better than our baselines.",
"It reveals that defendant's behavioral semantic information is more valuable than those statistical characteristics.",
"We expect to find a feature conjunction that makes the most sense for modeling role's impact in a story.",
"Like Duan et al. (2010), we use an advanced greedy method to find the best feature conjunction.",
"Given all n (it is 10 in this paper) features we extracted, we construct 2 n feature sets and randomly pick 100 of them.",
"Then, we run the greedy selection algorithm based on the feature set (de-noted as Best ) with the best MAP among those 100 feature sets.",
"Features excluded those in Best are denoted as Ex best and all the extracted features are denoted as Full .",
"We evaluate the Best and each feature in Ex best and if the result is better than the previous one, this feature will be added into the Best .",
"We repeat the process until the Best is no longer updated.",
"Finally, we get the best feature conjunction composed by f semantic , vpdepth , order of appearances , number of occurrences , cooperation mode .",
"To reflect the gap between the Best and the Full , we evaluate their performances on datasets with different numbers of defendants.",
"Tab.",
"4 illustrates the Best feature set also outperforms the Full feature set when dealing with cases with different numbers of defendants.",
"We are interested in which features in particular are highly valued for role modeling.",
"The importance of each feature is evaluated by the decrease of performance when removing this feature measured from the Best .",
"Fig. 4 reveals the importance of each feature for role modeling.",
"We observe that f semantic plays a very important role.",
"The F macro declines seriously (more than 6 percentage points) when we remove it from the feature set.",
"We suppose that semantic features represent the behavioral information of roles and a defendant's behavior is of great concern in determining his criminal responsibility.",
"The match result of a defendant's actions and global description of the case can effectively model his influence in the whole case.",
"In this paper, we study the task of identifying principals and accessories from the fact description in a complex case.",
"We find a set of effective features for role modeling.",
"and evaluate that the behavioral semantic information is most worthy of attention.",
"We hope to address this problem with a completely semantic-based approach in the future.",
"This work was supported by National Key Research and Development Program of China (Grant No. 2017YFB1402400), and National Natural Science Foundation of China (No. 61976221)."
] | [
"method",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"other"
] |
[
"Dialogue systems pretrained with large language models generate locally coherent responses, but lack the fine-grained control over responses necessary to achieve specific goals.",
"A promising method for controlling generated responses is exemplar-based generation , in which models edit exemplar responses that are retrieved from training data, or hand-written to strategically address discourse-level goals, to fit new dialogue contexts.",
"We present an E xemplar-based D ialogue GE neration model, EDGE , that uses the semantic frames present in exemplar responses to guide response generation.",
"We show that controlling dialogue generation based on the semantic frames of exemplars improves the coherence of generated responses, while preserving semantic meaning and conversation goals present in exemplar re1 sponses.",
"Large pre-trained language models (Radford et al., 2019; Devlin et al., 2019) currently used to power dialogue generation systems produce increasingly fluent and appropriate responses for novel dialogue contexts (Wolf et al., 2019; Zhang et al., 2020; Budzianowski and Vulic, 2019).",
"However, the generated responses are often uninformative or inconsistent with high-level constraints of a dialogue system and the tasks it supports.",
"Prior work added high-level control for specific intents such as politeness (Niu and Bansal, 2018), emotions (Zhong et al., 2019) and persona (Song et al., 2019) through a fixed set of coarse labels, but these methods require manually labelling data for each new intent.",
"One approach for adding control over response intents is to use response exemplars that are handwritten or strategically curated to promote high-level goals without explicit labels.",
"By condition-Context My friends and I have started eating vegan food since yesterday.",
"ing on response exemplars, we can generate coherent responses that follow the intents of the exemplars without manually labeling vast amounts of data.",
"Current exemplar-based methods (Cai et al., 2019b,a; Wu et al., 2019) have two key drawbacks: (1) the models often overfit to the training data, then produce incoherent responses by copying irrelevant tokens from exemplar responses into the generated responses, and (2) the models often learn to ignore the exemplars, then produce responses that are not controlled by the strategic exemplars.",
"To generate locally coherent responses that also adhere to high-level dialogue constraints, we present EDGE, a model that uses the semantic structure of an exemplar response, instead of the tokens of the exemplar response, to guide generation (Ta-ble 1).",
"For a novel dialogue context, we retrieve a human-written response exemplar and represent it using its semantic frames (Fillmore, 1982).",
"We then incorporate the dialogue context and the semantic frames of the response exemplars in a powerful pre-trained conditional language model (Rad-ford et al., 2019), thereby combining the benefits of fluency of language models and the semantic guidance of the exemplar responses that are structured 1 Code available at https://github.com/ prakharguptaz/EDGE-exemplars with rich linguistic knowledge.",
"By using semantic frames from exemplars, EDGE outperforms a set of generative and retrieval-based baselines in a quantitative evaluation of response quality (coherence, consistency, fluency and diversity of responses), and outperforms token-based approaches in capturing the semantic structure of exemplar responses.",
"Experiments demonstrate that semantic frames capture the meaning of the exemplars rather than their surface forms, such that EDGE does not copy inappropriate tokens from the exemplars.",
"In a zero-shot anti-scam application, we show that EDGE generates exemplar-conditioned responses that are coherent, context-specific, and adherent to underlying exemplar intents and their high-level goals.",
"To our knowledge, this work is the first to use frame semantics as a means of control in exemplar-based dialogue generation.",
"To achieve fluent and contextually-appropriate generated responses that adhere to the semantic structure of exemplars and capture their high-level goals, we use the frame semantics of the exemplars to guide the generation of responses.",
"The central idea of frame semantics is frames , which are semantic abstractions describing universal categories of events, concepts, and relationships, based on the linguistic resource FrameNet (Baker et al., 1998).",
"Frame semantics provide a higher-level representation of individual tokens in the response exemplars based on the purpose of those tokens in the response.",
"For instance, the tokens hear', say', see', smell', feel', all share a similar purpose of their semantic frame label Perception', such that each frame can have many possible lexical surface forms.",
"FrameNet defines more than 1200 frames such as Perception'.",
"Representing response exemplars in terms of their semantic frames allows our model to reuse their semantic structure to adapt the low-level response tokens to fit novel dialogue contexts, and produce diverse response variations that fit within the semantic constraints.",
"For example, in Table 1, EDGE generates multiple diverse and coherent variations for both exemplar responses by conditioning on their frame semantic structures.",
"The use of frame semantics to represent exemplars in terms of their semantic meaning rather than their surface forms provides two additional benefits: (1) preserving the semantic structure of exemplars helps to preserve implicit constraints of dialogue systems present in exemplar responses including desired strategies, intents, and emotional tones, and (2) using frames rather than tokens helps the model to avoid overfitting.",
"A model that uses exemplar tokens rather than frames during training can become over-relient on copying tokens, such that during generation the model copies inappropriate tokens from the exemplar response.",
"For example, given the exemplar response Eggs are very beneficial for your body (Table 1), a token-based model can access the token Eggs and incorrectly use Eggs in its response about vegan food.",
"EDGE reduces such overfitting by conditioning on the semantic frames of the exemplars during training and generation.",
"For example, EDGE uses the frame FOOD as input instead of Eggs (Table 1), and substitutes an appropriate token (Vegan food) in its generated response.",
"In our experiments, we find that using frame semantics in exemplar-conditioned dialogue generation improves the coherency of responses, while preserving the semantic structure and underlying intents of the exemplar responses.",
"Our model EDGE extends a dialogue generation model TransferTransfo (Wolf et al., 2019) to control generation by including semantic frames from an exemplar response in addition to the dialogue history.",
"TransferTransfo is based on the transformer architecture and fine-tunes a generative pretrained model (GPT) (Radford, 2018) with two objective functions: (1) a language modelling objective, and (2) a next-utterance classification objective.",
"The language modelling objective function maximizes the likelihood for a given sequence of tokens, and the next-utterance classification objective distinguishes a correct response for an input dialogue context from a set of randomly selected distractor responses.",
"We adapt the TransferTransfo model to our setting by first replacing GPT with GPT-2 (Radford et al., 2019) as our base architecture.",
"GPT-2 can be substituted with other language models such as Transformer-XL (Dai et al., 2019) or dialogue specific models such as DialoGPT (Zhang et al., 2020).",
"To incorporate semantic frames from exemplar responses in the TransferTransfo architecture, we uniquely add tokens representing the semantic frames to the input context.",
"Specifically, we concatenate the input context, 3020 Figure 1: The input representation of our proposed approach.",
"a < bof > token, semantic frame tokens, a < bor > token, and the response (Figure 1).",
"Prior work also uses concatenation to add different signals to the input for training dialog systems (Budzianowski and Vulic, 2019).",
"Following TransferTransfo model, we also add token, position, and speaker role em-beddings.",
"For frame extraction from exemplars, we use the open-sesame model Swayamdipta et al. (2017) and their open-sourced implementation 2 .",
"We use the frame predicates and ignore the arguments.",
"Because there are no frames corresponding to wh-question words such as why' and how', yes' and no', question mark or pronouns, we add each of these tokens in the frame vocabulary.",
"Training During training, the model learns to generate the ground truth responses conditioned on the dialogue context tokens followed by the in-order predicted semantic frames for the ground truth response (Figure 1).",
"Following TransferTransfo, we mask the tokens of the context for the language modelling objective.",
"To ensure that the model does not ignore the exemplar response, we use the frames of the ground truth response in input during training, instead of frames from a retrieved response.",
"In pilot experiments, our model generated incoherent replies to the dialogue context when the semantic frames were incorrectly detected or irrelevant to the dialogue context.",
"To make the model more robust to missing frames, frames changing order between the exemplar and the response, and irrelevant or inaccurate frames, we: (1) randomly drop 15% of semantic frames from the sequence, (2) randomly shuffle semantic frames sequences (over a length of 2 tokens) with a probability of 0.1, and (3) add random semantic frames in random positions to increase the sequence length by 30%.",
"EDGE's ability to generate coherent responses despite inaccurate frame detection is important as the semantic frame prediction model that EDGE uses reports F1 scores of 73.25% for frame target detection and 86.55% for frame identification.",
"However, informal dialogue text can lead to lower performance.",
"Evaluating on 110 conversational sentences in the FrameNet 1.7 test set, the semantic frame prediction model achieves F1 scores of 71.78% for frame target detection and 74.58% for frame identification.",
"We train EDGE by dropping, reordering and adding random frames so that EDGE learns to generate coherent responses in the presence of noisy frames from the exemplars.",
"Inference During inference, we either rely on pre-defined response exemplars, or perform retrieval by first using the state-of-the-art Poly-encoder model (Humeau et al., 2020) to retrieve response candidates and then select the highest ranked response as the exemplar response.",
"We add the semantic frame sequence from the exemplar response as the input along with the context of the conversation.",
"The model then creates a response which is controlled by the semantic frames from the exemplar, and coherent with the context of the conversation.",
"We compared our model to existing generative and retrieval-based approaches in two settings: (1) open-domain dialogue generation using the Dailydialog dataset (Li et al., 2017), and (2) goal-oriented anti-scam dialogue generation using a set of fraudulent emails (Radev, 2008) as prompts and a small set of intent-specific anti-scam response exemplars to inform responses.",
"For the anti-scam domain, we investigated exemplar conditioned responses in a case without domain-specific training (i.e. zero-shot generation).",
"Open-Domain We use the Dailydialog dataset (Li et al., 2017), which consists of 13,118 daily conversations covering topics such as culture, education, tourism and health.",
"The validation and test sets have 1000 conversations each.",
"We consider maximum of up to previous 5 utterances from the con2 https://github.com/swabhs/open-sesame 3021 versation history as the context for both retrieval and generation.",
"The 1000 conversations in the test set consists of 6740 such context-response pairs.",
"Anti-Scam We use fraudulent e-mails 3 as test data (Radev, 2008) consisting of 2500 emails.",
"The intent of the fraudulent email sender (a scammer) is to convince the recipient to give the sender a large amount of money or some other information.",
"We remove all links and email addresses from the email text and limit the text content to the first and last 3 sentences of the email, as these sentences typically reflect the setup and intent of the email, and the shorter email length reduces inference time.",
"We compared EDGE with a set of baseline models: Retrieval (Humeau et al., 2020) The Poly-encoder retrieval model allows for fast real-time inference by precomputing each candidate response representation once, and then ranking candidate responses for retrieval by attending to the context.",
"Specifically, the model encodes two separate transformers, one for the context and one for the response, and creates multiple vector representations from the context.",
"We use ParlAI's implementation 4 of this pre-trained transformer-based model.",
"GPT2-Gen (Wolf et al., 2019) The dialogue generation model TransferTransfo (except that we replaced GPT with GPT-2).",
"This model is the base architecture in our model.",
"It uses the dialogue context to inform response generation, and does not condition on exemplar responses.",
"LSTM-Tokens (Cai et al., 2019b) The state-of-the-art exemplar-conditioned open-domain response generation model.",
"It uses the dialogue context along with tokens extracted from an exemplar response (using a transformer-based matching framework) to inform generation.",
"LSTM with attention is used as the decoder.",
"LSTM-Frames An ablation model that varies LSTM-Tokens to use the semantic frames from exemplar responses instead of extracted tokens.",
"LSTM with attention is used as the decoder.",
"GPT2-Tokens An ablation model that modifies EDGE to use tokens extracted from the exemplar response, as in (Cai et al., 2019b), instead of semantic frames.",
"GPT-2 is used as the decoder.",
"GPT2-Frames (EDGE) Our model that uses the 3 https://kaggle.com/rtatman/fraudulent-email-corpus 4 https://parl.ai/projects/polyencoder dialogue context along with the semantic frames of the exemplar response to inform response generation.",
"GPT-2 is used as the decoder.",
"Human We collected human written responses for the test contexts.",
"We fine-tuned or trained each model on the Dailydialog dataset (Li et al., 2017).",
"We use the architecture described in (Wolf et al., 2019) and use their open-source implementation with fine-tunable GPT-2 architecture 5 .",
"We chose the 124M version of GPT-2 due to its performance and smaller size which accomodates resource constraints.",
"We used the Adam optimizer with learning rate of 6.25e-5, L2 weight decay of 0.01, and batch size of 2.",
"We set the number of candidates to 2 for the next-utterance classification objective.",
"Each model was trained until maximum of 10 epochs with early stopping criteria.",
"We set the maximum decoding length to 50 tokens and minimum to 4 for all models and use nucleus sampling (Holtzman et al., 2020) with threshold of 0.9.",
"For LSTM-Tokens model, we used the open-sourced implementation released by the authors 6 .",
"In this section we report results for both open-domain and goal-oriented anti-scam domains.",
"We compared EDGE with the baseline models on open-domain conversations in Dailydialog dataset, and report results in terms of human-rated and automatic metrics that capture aspects of response quality individually ( e.g. , is the response grammatically correct?) and with respect to the context ( e.g. , is the response a valid continuation of the preceding conversation?).",
"We additionally consider how well the responses adhere to the semantic structure of the retrieved response exemplars.",
"Word overlap metrics have been shown to correlate poorly with human judgements of quality of responses (Liu et al., 2016) as they don't account for all the plausible responses for any given conversational context (Gupta et al., 2019).",
"We therefore conducted human evaluations to capture aspects of 5 http://github.com/huggingface/transfer-learning-conv-ai 6 https://github.com/jcyk/seqgen/tree/master/ranker 3022 Model Dist-2 Dist-3 MaUdE Coherent Fluent Consistent Interesting Semantics Retrieval 0.294 0.526 0.921 2.41 2.61 2.48 2.32 GPT2-Gen 0.249 0.494 0.905 2.42 2.55 2 .",
"5.1.2 Results The human evaluations in Table 2 demonstrate that (1) Unsurprisingly, the GPT-2 based models Metric 1 Exemplar 5 Exemplars 10 Exemplars GPT2-Gen Dist-2 0.240 0.129 0.096 Dist-3 0.481 0.327 0.270 LSTM-Tokens SemCov 0.347 0.354 0.360 Avg BLEU-2 0.216 0.214 0.214 Dist-2 0.184 0.104 0.080 Dist-3 0.387 0.267 0.223 EDGE SemCov 0.650 0.620 0.625 Avg BLEU-2 0.192 0.170 0.161 Dist-2 0.274 0.155 0.118 Dist-3 0.569 0.409 0.344 Table 3: EDGE shows higher semantic coverage (Sem-Cov) with the exemplar responses while showing lower lexical overlap (lower Avg BLEU-2).",
"the model quality such as coherence and fluency.",
"Annotators on Amazon Mechanical Turk platform rated the responses of the models for 100 randomly selected test contexts on a scale of 1 to 3 (with 1 as the lowest and 3 the highest) on the following criteria: Coherent Does the response serve as a valid continuation of the preceding conversation?",
"Interesting Is the response dull or interesting?",
"Fluent Is the response naturally written, grammatical correct and non-repetitive?",
"Consistent Does the response make logical sense given the context and by itself?",
"Uses semantics Does the response share similar concepts with the retrieved response?",
"The annotators were shown a conversational context and responses to rate, and were provided more detailed instructions and examples for each criteria, following Mehri and Eskenazi (2020).",
"We collected ratings from 3 workers per context for all 7 models, with a total of 2100 ratings.",
"The Cohen's Kappa (Cohen, 1968) value for inter-annotator agreement is 0.45 for the annotations, indicating moderate agreement.",
"We also evaluate the models using an unreferenced automated evaluation metric MaUdE (Sinha et al., 2020) which uses large pre-trained language models to extract latent representations of utterances and is trained using Noise Contrastive Estimation.",
"It has shown high correlation with human judgements on criteria such as interestingness and fluency.",
"For measuring diversity of responses we calculate Dist-n (Li et al., 2016).",
"It is the ratio of distinct n-grams to total number n-grams for all the responses from a model.",
"(EDGE, GPT2-Tokens, and GPT2-Gen) achieve higher ratings for quality metrics of coherence, fluency, consistency, and interestingness compared to the LSTM based models (LSTM-Tokens and LSTM-Frames), and (2) The models that use semantic frames from retrieved responses (EDGE and LSTM-Frames) achieve higher ratings than the models that directly used tokens from the retrieved response (GPT2-Tokens and LSTM-Tokens).",
"EDGE, our GPT-2 based approach that uses semantic frames from response exemplars, outperforms all other models on overall quality metrics, and outperforms token-based approaches in preserving semantics from reference responses.",
"Both LSTM-Frames and EDGE achieve high Uses Semantics rating, indicating that the models which condition on frames preserve exemplar semantics better.",
"EDGE and GPT2-Tokens also achieve the highest MaUdE scores as well as the highest Dist-n scores, indicating high quality and diversity of the 3023 Context Human1 : they sell everything.",
"5.1.4 Qualitative Analysis We present sample dialogue contexts and model responses to demonstrate how EDGE performs on a range of retrieved response scenarios (Table 4).",
"Overall, EDGE controls the length and semantic structure of its responses based on retrieved human-written exemplars, and thus produces longer and more specific responses compared to the purely generative model, GPT2-Gen.",
"EDGE benefits from this exemplar-based control, even when retrieval or frame extraction fails.",
"When the retrieved responses are not appropriate for the dialogue context (left two examples), EDGE leverages the semantic frames in the retrieved response to generate a coherent and specific response (e.g., by adding details such as eat something chinese?), while other 3024 Context i want you to assist in investing money... want to acquire stock in multi national companies and to engage in safe investments....",
"Our results demonstrate that EDGE generates higher-quality responses while preserving retrieved response semantics as rated by humans (Table 2).",
"We further evaluate EDGE and baseline models (LSTM-Tokens, GPT2-Gen) to assess generated responses' consistency with retrieved responses, and the diversity of the generated responses (Ta-ble 3).",
"We do not limit this experiment to the top retrieved response and instead select subsets of retrieved responses (of sizes 1, 5 and 10) for each test dialogue context by consecutively selecting each next highest ranked response if the maximum Jaccard similarity of its semantic frames with the semantic frames of any response in the subset is less than 0.5, and generate responses based on each response in the subset.",
"We calculate Dist-n to measure diversity, or the ratio of distinct to total n-grams for all the responses.",
"EDGE achieves higher diversity than LSTM-Tokens and GPT2-Gen for all response set sizes.",
"Compared to LSTM-Tokens, EDGE generated responses with semantic frames that covered a higher percentage of the semantic frames present in the retrieved responses (SemCov is 36% for LSTM-Tokens, and 63% for EDGE).",
"This shows that compared to baselines, our model does not ignore the exemplar responses.",
"It also copied exact tokens less often as EDGE generated responses contained a lower level of token similarity to retrieved responses (BLEU-2 of 0.21 for LSTM-Tokens and BLEU-2 of 0.16 for EDGE).",
"This shows that while EDGE better controls the semantic content of the generated responses, it still produces more token-level diversity than other models (Dist-2, Dist-3).",
"an exhaustive set of exemplar responses.",
"Further, when such systems directly apply a pre-written response to a novel dialogue context, the response can be incoherent.",
"We demonstrate an application of EDGE in the anti-scam domain where we generate a variety of coherent responses to novel dialogue contexts that capture the high-level intents of exemplar responses without training the models on domain-specific data (a zero-shot test scenario).",
"We crafted our anti-scam response exemplars to follow high-level objectives of the domain (Dalton et al., 2020) such that each of our 20 response exemplars demonstrates one of 5 specific anti-scam intents: ask for details, ask for contact or location, show interest, show skepticism, and show disinterest .",
"Half of the response exemplars contain generic replies that may be appropriate for many scam emails, and half of response exemplar replies contain responses to specific emails.",
"models generate short or incoherent responses (e.g., what a pity?'). When some words in the retrieved response are missing semantic frames (top right example), EDGE leverages the frames that are still present and the context to generate a coherent response with contextually-appropriate details. On the other hand, when LSTM-Tokens inappropriately copies tokens (top left and bottom right ex-amples), the responses often become incoherent (e.g., copying singer and violinist results in i've got a singer, but i was the violinist.).",
"Although EDGE generates context specific responses which generally adhere to the semantics of the exemplars, EDGE still occasionally diverges from the exemplar response.",
"For instance, the model can hallucinate details irrelevant to the context (the word bank in the bottom left example), a problem common in neural generative models (Tian et al., 2019; Duek et al., 2020; Li et al., 2020).",
"Traditional dialogue systems use response exemplars to control system responses based on high-level goals and intents present in the exemplar responses.",
"However, it can be infeasible to write Model Coherence Intent Engagement GPT2-Gen 2.10 33.0 70.1 EDGE 2.39 79.7 87.3 Table 6: Human evaluation of Coherence (reported from 1-low to 3-high), Intent (Follows Intent reported as a percentage), and Engagement (reported as a percentage) in the Anti-Scam setting.",
"We include sample scam emails, strategic response exemplars, and generated responses in Table 5.",
"Human Evaluation We performed human evaluation to test whether generated responses: (1) capture the high-level intents of the exemplar responses, and (2) generate coherent and engaging responses to the scam emails.",
"We compared our system with the GPT2-Gen model, a GPT-2 based baseline that generates responses without conditioning on response exemplars.",
"For each of the 20 response exemplars, we selected 5 scam emails as test dialogue contexts (100 emails total).",
"We asked annotators to rate the responses of both models on the following criteria: (1) Coherence , or is the response on topic and strongly acknowledges the conversation history, (2) Follows intent , or does the response capture the intent of the exemplar, and (3) Engagement , or will the response engage the scammer in a conversation.",
"We collected 3 ratings per email and averaged the ratings (Table 6) and the inter-annotator agreement (Cohen's Kappa) is 0.67 indicating high agreement.",
"EDGE outperforms GPT2-Gen across all metrics, generating coherent replies that capture intents of the exemplars, and engage the scammer (high-level goals).",
"Qualitative Analysis GPT2-Gen responses often simply acknowledge the scammer's email ( e.g. , i am glad to tell you that i am in charge of your company. and thank you, i'm sure you've got it for the contexts in Table 5), while EDGE leverages the exemplars to generate longer replies that preserve the engagement aim and specific intent aims ( e.g. , i can use my company as an intermediary to invest in this business. to show interest).",
"GPT2-Gen achieves 33% intent accuracy, even without conditioning on response exemplars, because its responses often showed interest or asked for details (two of the possible intents).",
"While EDGE responses were more coherent, incoherent responses were typically due to long response exemplars, such that the resulting responses displayed faulty logic, a common problem across generative models generating long text (Holtzman et al., 2020).",
"Overall, EDGE can leverage the semantic frames of response exemplars to preserve their underlying intent and add context specific details where appropriate ( e.g. , influence the decision of the ministry in the last example).",
"Thus, EDGE's key advantages over prior approaches are its controllability and zero-shot performance.",
"EDGE controls dialogue generation based on semantic frames of exemplars, building on prior retrieval-based, controllable and semantics-based language generation methods.",
"Retrieval-Based Generation has been applied in summarization, machine translation, and paraphrasing (Peng et al., 2019; Gu et al., 2018; Grangier and Auli, 2018) tasks to improve the quality of text generation or to incorporate knowledge from retrieved text (Hua et al., 2019; Prabhumoye et al., 2019).",
"In dialogue generation, retrieval conditioned approaches have been proposed to address the lack of diversity in generated responses and the generation of short and dull responses, common in generative approaches.",
"Early approaches used LSTM-based models (Weston et al., 2018; Pandey et al., 2018; Wu et al., 2019) and their ensembles (Song et al., 2018; Zhang et al., 2019) to encode tokens of the retrieved responses to condition response generation.",
"Conditioning response generation directly on tokens of retrieved responses results in: (1) generating incoherent responses due to copying contextually irrelevant tokens, and (2) models learning to ignore retrieved responses due to a mismatch between retrieved responses and ground truth responses.",
"Prior work aimed to solve these problems by extracting only contextually relevant tokens from the retrieved response Cai et al. (2019a), and by replacing the retrieved response with a noisy version during training Cai et al. (2019b).",
"By using semantic frames that represent an exemplar token's meaning rather than the low-level tokens themselves to guide generation, EDGE exerts better semantic control over the generated response.",
"We additionally achieve higher coherence, fluency, and token-level diversity by reusing semantic frames rather than specific tokens.",
"Controllable Text Generation has been studied in tasks such as dialogue generation (Gao et al., 2019), summarization (Fan et al., 2018), paraphrasing (Goyal and Durrett, 2020), and other tasks (Dong et al., 2017; Peng et al., 2019), with the aim of controlling fixed attributes such as topic (Wang et al., 2017; Tang et al., 2019), emotion (Zhou et al., 2018), politeness (Niu and Bansal, 2018) and style (Keskar et al., 2019) through coarse-level labels or control phrases (Wu et al., 2020).",
"Some traditional approaches used templates to control the generation of text (Reiter et al., 2005; McRoy et al., 2003).",
"Some recent approaches learn templates from the data and exemplars (Wiseman et al., 2018; Ye et al., 2020; Yang et al., 2020).",
"We explore the common case of response exemplars instead of inflexible templates or coarse labels to guide the dialogue response generation.",
"Although state-of-the-art models pretrained on large dialogue corpus such as DialoGPT (Zhang et al., 2020), Meena (Adiwar-dana et al., 2020) and Blenderbot (Roller et al., 2020) are capable of generating interesting and human-like responses, our focus is on controlling the response generation process by conditioning on exemplars.",
"By using semantic frames from exemplar responses, our method flexibly captures intents implicitly present in the exemplar frames, and exercises fine-grained semantic control over generation of new responses based on these exemplars.",
"Semantics-Based Generation has reemerged for use in various tasks such as paraphrasing (Wang et al., 2019), machine translation (Marcheggiani et al., 2018) and story generation (Tu et al., 2019; Fan et al., 2019).",
"Semantic representations such as semantic frames and semantic role labels provide abstractions that capture the underlying meanings of different surface realizations ( e.g. , paraphrases, other languages).",
"We are the first to explicitly model frame semantic representations (Fillmore, 1982) in dialogue generation.",
"We present EDGE, an exemplar-based generative dialogue model.",
"By generating responses that preserve semantic structures from exemplars, EDGE maintains desired qualities of dialogue systems including intents and strategies implicitly present in the curated exemplar sets, while achieving fluent and coherent responses.",
"In future work, we plan to explore new mechanisms for incorporating semantic frames, experiment with other abstract representations of response exemplars, and apply our approach to other language generation tasks.",
"We thank Harsh Jhamtani and the anonymous conference reviewers for providing valuable feedback.",
"This work was funded by the Defense Advanced Research Planning Agency (DARPA) under DARPA Grant N6600198-18908, and the National Science Foundation under Awards No.",
"IIS1816012 and IIS2007960."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"objective",
"other",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"method",
"objective",
"other",
"other",
"objective",
"method",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Stereotypical language expresses widely-held beliefs about different social categories.",
"Many stereotypes are overtly negative, while others may appear positive on the surface, but still lead to negative consequences.",
"In this work, we present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM), a comprehensive causal theory from social psychology.",
"The SCM proposes that stereotypes can be understood along two primary dimensions: warmth and competence.",
"We present a method for defining warmth and competence axes in semantic embedding space, and show that the four quadrants defined by this subspace accurately represent the warmth and competence concepts, according to annotated lexicons.",
"We then apply our computational SCM model to textual stereotype data and show that it compares favourably with survey-based studies in the psychological literature.",
"Furthermore, we explore various strategies to counter stereotypical beliefs with anti-stereotypes.",
"It is known that countering stereotypes with anti-stereotypical examples is one of the most effective ways to reduce biased thinking, yet the problem of generating anti-stereotypes has not been previously studied.",
"Thus, a better understanding of how to generate realistic and effective anti-stereotypes can contribute to addressing pressing societal concerns of stereotyping, prejudice, and discrimination.",
"Stereotypes are widely-held beliefs about traits or characteristics of groups of people.",
"While we tend to think of stereotypes as expressing negative views of groups, some stereotypes actually express positive views (e.g. all women are nurturing ).",
"However, even so-called positive' stereotypes can be harmful, as they dictate particular roles that individuals are expected to fulfill, regardless of whether they have the ability or desire to do so (Kay et al., 2013).",
"The existence of stereotypes in our society including in entertainment, the workplace, public discourse, and even legal policy can lead to a number of harms.",
"Timmer (2011) organizes these harms into three main categories: (1) Misrecognition effects: harms caused by denying members of particular groups an equal place in society, diminishing their human dignity, or other forms of marginaliza-tion.",
"(2) Distribution effects: harms resulting from unfair allocation of resources, either by increasing the burden placed on a group, or decreasing a group's access to a benefit.",
"(3) Psychological effects: the distress and unhappiness caused by an awareness and internalization of the stereotyped biases against one's identity group.",
"Additionally, the internalization of these negative stereotypes can lead to anxiety and underachievement.",
"To reduce these harms and promote a more egalitarian society, we must identify and counter stereotypical language when it occurs.",
"Evidence from the psychological literature suggests that one of the most effective methods for reducing stereotypical thinking is through exposure to counter-stereotypes, or anti-stereotypes.",
"Finnegan et al. (2015) showed participants stereotypical and anti-stereotypical images of highly socially-gendered professions (e.g., a surgeon is stereotypically male, and a nurse is stereotypically female; the genders were reversed in the anti-stereotypical images), and then measured their gender bias in a judgement task.",
"Exposure to anti-stereotypical images significantly reduced gender bias on the task.",
"Blair et al. (2001) used a mental imagery task and reported that participants in the anti-stereotypical condition subsequently showed significantly weaker effects on the Implicit Association Test (IAT).",
"Dasgupta and Greenwald (2001) showed a similar effect by exposing participants to anti-stereotypical exemplars (e.g. admired Black celebrities, and disliked white individuals).",
"When Lai et al. (2014) compared 17 interventions aimed at reducing stereotypical thinking, methods involving anti-stereotypes were most successful overall.",
"Thus, creating technology that enables users to identify stereotypical language when it occurs, and then counter it with anti-stereotypes, could help to reduce biased thinking.",
"However, the idea of what constitutes an anti-stereotype remains ill-defined.",
"Is an anti-stereotype simply the semantic opposite of a stereotype?",
"Or can anything that is not a stereotype serve as an anti-stereotype?",
"If two groups are stereotyped similarly, do they have an identical anti-stereotype?",
"Can an anti-stereotype actually reflect an equally harmful view of a target group (e.g. the cold-hearted career woman as an anti-stereotype to the nurturing housewife) ?",
"Here, we begin to untangle some of these questions using the StereoSet dataset (Nadeem et al., 2020).",
"We begin by analyzing the stereotypes expressed in this dataset.",
"One widely-accepted model of stereotypes, prejudice, and inter-group relationships from social psychology is the Stereotype Content Model or SCM (Fiske et al., 2002).",
"The SCM proposes two fundamental and universal dimensions of social stereotypes: warmth and competence .",
"By defining the warmcold, competent incompetent axes in the semantic embedding space, we are able to cluster and interpret stereotypes with respect to those axes.",
"We can then examine the associated anti-stereotypes and their relation to both the stereotyped description and the target group.",
"Thus, our contributions are as follows: To develop a computational method for automatically mapping textual information to the warmth competence plane as proposed in the Stereotype Content Model.",
"To validate the computational method and optimize the choice of word embedding model using a lexicon of words known to be associated with positive and negative warmth and competence.",
"To compare the stereotypes in StereoSet with those reported in the survey-based social psychology literature.",
"To analyze human-generated anti-stereotypes as a first step towards automatically generating anti-stereotypes, as a method of countering stereotypes in text with constructive, alternative perspectives.",
"We provide more details on the Stereotype Content Model and its practical implications, and then briefly review the NLP research on computational",
"analysis of stereotypical and abusive content.",
"Stereotype Content Model : Stereotypes, and the related concepts of prejudice and discrimination, have been extensively studied by psychologists for over a century (Dovidio et al., 2010).",
"Conceptual frameworks have emerged which emphasize two principle dimensions of social cognition.",
"The Stereotype Content Model (SCM) refers to these two dimensions as warmth (encompassing sociability and morality) and competence (encompassing ability and agency) (Fiske et al., 2002).",
"When forming a cognitive representation of a social group to anticipate probable behaviors and traits, people are predominantly concerned with the others' intent are they friends or foes?",
"This intent is captured in the primary dimension of warmth.",
"The competence dimension determines if the others are capable to enact that intent.",
"A key finding of the SCM has been that, in contrast to previous views of prejudice as a uniformly negative attitude towards a group, many stereotypes are actually ambivalent ; that is, they are high on one dimension and low on the other.",
"Further, the SCM proposes a comprehensive causal theory, linking stereotypes with social structure, emotions, and discrimination (Fiske, 2015).",
"According to this theory, stereotypes are affected by a perceived social structure of interdependence (co-operation versus competition), corresponding to the warmth dimension, and status (prestige and power), determining competence.",
"Stereotypes then predict emotional response or prejudices.",
"For example, groups perceived as unfriendly and incompetent (e.g., homeless people, drug addicts) evoke disgust and contempt, groups allegedly high in warmth but low in competence (e.g., older people, people with disabilities) evoke pity, and groups perceived as cold and capable (e.g., rich people, businesspeople) elicit envy.",
"Finally, the emotions regulate the actions (active or passive help or harm).",
"Thus, low warmthlow competence groups often elicit active harm and passive neglect, whereas low warmthhigh competence groups may include envied out-groups who are subjects of passive help in peace times but can become targets of attack during social unrest (Cuddy et al., 2007).",
"The SCM has been supported by extensive quantitative and qualitative analyses across cultures and time (Fiske, 2015; Fiske and Durante, 2016).",
"To our knowledge, the current work presents the first computational model of the SCM.",
"Stereotypes in Language Models : An active line of NLP research is dedicated to quantifying and mitigating stereotypical biases in language models.",
"Early works focused on gender and racial bias and revealed stereotypical associations and common prejudices present in word embeddings through association tests (Bolukbasi et al., 2016; Caliskan et al., 2017; Manzini et al., 2019).",
"To discover stereotypical associations in contextualized word embeddings, May et al. (2019) and Kurita et al. (2019) used pre-defined sentence templates.",
"Similarly, Bartl et al. (2020) built a template-based corpus to quantify bias in neural language models, whereas Nadeem et al. (2020) and Nangia et al. (2020) used crowd-sourced stereotypical and anti-stereotypical sentences for the same purpose.",
"In contrast to these studies, while we do use word embeddings to represent our data, we aim to identify and categorize stereotypical views expressed in text, not in word embeddings or language models.",
"Abusive Content Detection: Stereotyping, explicitly or implicitly expressed in communication, can have a detrimental effect on its target, and can be considered a form of abusive behavior.",
"Online abuse, including hate speech, cyber-bullying, online harassment, and other types of offensive and toxic behaviors, has been a focus of substantial research effort in the NLP community in the past decade (e.g. see surveys by Schmidt and Wiegand (2017); Fortuna and Nunes (2018); Vidgen et al. (2019)).",
"Most of the successes in identifying abusive content have been reported on text containing explicitly obscene expressions; only recently has work started on identifying more subtly expressed abuse, such as stereotyping and micro-aggressions (Breitfeller et al., 2019).",
"For example, Fersini et al. (2018) and Chiril et al. (2020) examined gender-related stereotypes as a sub-category of sexist language, and Price et al. (2020) annotated unfair generalizations' as one attribute of unhealthy online conversation.",
"Sap et al. (2020) employed large-scale language models in an attempt to automatically reconstruct stereotypes implicitly expressed in abusive social media posts.",
"Their work showed that while the current models can accurately predict whether the online post is offensive or not, they struggle to effectively reproduce human-written statements for implied meaning.",
"Counter-narrative: Counter-narrative (or coun-terspeech) has been shown to be effective in confronting online abuse (Benesch et al., 2016).",
"Counter-narrative is a non-aggressive response to abusive content that aims to deconstruct and dele-gitimize the harmful beliefs and misinformation with thoughtful reasoning and fact-bound arguments.",
"Several datasets of counter narratives, spontaneously written by regular users or carefully crafted by experts, have been collected and analyzed to discover common intervention strategies (Mathew et al., 2018; Chung et al., 2019).",
"Preliminary experiments in automatic generation of counter-narrative demonstrated the inadequacy of current large-scale language models for generating effective responses and the need for a human-in-the-loop approach (Qian et al., 2019; Tekiroglu et al., 2020).",
"Countering stereotypes through exposure to anti-stereotypical exemplars is based on a similar idea of deconstructing harmful beliefs with counter-facts.",
"We develop our computational SCM using labelled data from Nicolas et al. (2020) and the POLAR framework for interpretable word embeddings (Mathew et al., 2020), and then apply it to stereotype and anti-stereotype data from StereoSet (Nadeem et al., 2020).",
"Details are provided in the following sections.",
"To construct and validate our model, we make use of the supplementary data from Nicolas et al. (2020) ( https://osf.io/yx45f/ ).",
"They provide a list of English seed words, captured from the psychological literature, associated with the warmth and competence dimensions; specifically, associated with sociability and morality (warmth), and ability and agency (competence).",
"They then use WordNet to generate an extended lexicon of English words either positively or negatively associated with aspects of warmth and competence.",
"Some examples from the seed data and extended lexicon are given in Table",
"1. 3.2 StereoSet For human-generated stereotype and anti-stereotype data, we use the publicly-available Dimension Component Sign Seed word examples n seed Extended lexicon examples n extended Warmth Sociability pos friendly, warm, pleasant 34 amusing, brother, fun 482 neg cold, repellent, disliked 32 detached, grim, surly 423 Morality pos trustworthy, sincere, honest 40 donor, justice, modest 460 neg dishonest, selfish, unfair 49 cheat, dreadful, henchman 1750 Competence Agency pos confident, assertive, secure 35 bravery, decisive, stubborn 444 neg fearful, lazy, inactive 31 follow, minion, quitter 265 Ability pos smart, intelligent, able 33 analytic, fluency, thorough 579 neg stupid, ignorant, incapable 29 forgetful, silly, unfit 301 Table 1: Examples of words from the training data (seed words) and validation data (extended lexicon), for each of the components comprising the warmth and competence dimensions.",
"portion of the StereoSet dataset (Nadeem et al., 2020).",
"This English-language dataset was constructed to test language model bias, and part of the data is kept hidden as the test set for a leaderboard on language model fairness ( https://stereoset.mit.edu/ ).",
"Instead, we use the development set, which contains stereotype data for 79 target groups across four broad demographic domains: gender, race or nationality, profession, and religion.",
"In StereoSet, there are two experimental conditions: intra-sentence and inter-sentence.",
"Here, we focus on the intra-sentence data only.",
"The data was collected from crowd-workers as follows (see Nadeem et al. (2020) for more detail): Given a target group label, the annotator is asked to generate a stereotypical word associated with that group, as well as an anti-stereotypical word and an unrelated word.",
"They then construct a context sentence containing the target group label, and a blank which can be filled with the stereotypical or anti-stereotypical word.",
"For example, if the target group was women, the annotator might come up with emotional and rational as the stereotype and anti-stereotype words respectively, and then construct a sentence like Women are known for being overly (cid:104) BLANK (cid:105) .",
"For our current analysis, we consider only the stereotype and anti-stereotype words, and discard the context sentence.",
"We also exclude any targets that do not directly refer to groups of people (e.g., we discard Norway but keep Norwegian ).",
"This results in 58 target groups with an average of 25 stereotype and anti-stereotype word pairs each.",
"We consider several possible representations for the words in our dataset, including GloVe (Pen-nington et al., 2014), word2vec (Mikolov et al.,",
"2013), and FastText (Mikolov et al., 2018).",
"1 In all cases, the key question is how to project the higher-dimensional word embedding onto the warmth competence plane.",
"Rather than using an unsupervised approach such as PCA, we choose the POLAR framework introduced by Mathew et al. (2020).",
"This framework seeks to improve the interpretability of word embeddings by leveraging the concept of semantic differentials,' a psychological rating scale which contrasts bipolar adjectives, e.g. hotcold , or good bad .",
"Given word embeddings that define these polar opposites for a set of concepts, all other word embeddings in the space are projected onto the polar embedding space,' where each dimension is clearly associated with a concept.",
"For our purposes, the polar opposites are warmthcoldness and competenceincompetence, as defined by the sets of seed words from Nicolas et al. (2020).",
"To reduce the dimensionality of the space to 2D, we average the word vectors for all seed words associated with each dimension and polarity.",
"That is, to define the warmth direction, we take the mean of all words in the seed dictionary which are positively associated with warmth.",
"Given vector definitions for warmth, coldness, competence, and incompetence, we can then use a simple matrix transformation to project any word embedding to the 2D subspace defined by these basis vectors (mathematical details are given in Appendix A).",
"We first evaluate the model's ability to accurately place individual words from the lexicons along the",
"1 We consider here only noncontextual word embeddings, in line with Mathew et al. (2020).",
"Because the POLAR framework is based on linear algebraic computations, it is not immediately obvious whether it will extend directly to contextualized embeddings, which are notably anisotropic (Ethayarajh, 2019).",
"warmth and competence dimensions.",
"We then explore whether we can reproduce findings describing where certain target groups are typically located in the warmthcompetence plane, based on the previous survey-based social psychology literature.",
"As described above, we use the extended lexicon from Nicolas et al. (2020) to validate our model.",
"We remove any words in the lexicon which appear in the seed dictionary and any words which do not have representations in all the pretrained embedding models, leaving a total of 3,159 words for validation.",
"In the extended lexicon, the words are annotated with either +1 or -1 to indicate a positive or negative association with the given dimension.",
"We pass the same words through our system, and observe whether the model labels the word as being positively or negatively associated with the relevant dimension.",
"Our evaluation metric is accuracy; i.e. the proportion of times our system agrees with the lexicon.",
"Note that all words are associated with either warmth or competence, and therefore we can only evaluate one dimension at a time.",
"We evaluate a number of pre-trained word embeddings in the gensim library ( Rehurek and Sojka, 2010), with the results given in Table",
"2. The FastText embeddings generally outperform the other embeddings on this task, with the 2M word model trained on 600B tokens in the Common Crawl leading to the highest accuracy.",
"Therefore, we use this embedding model in the analysis that follows.",
"We now address the question of whether our model, in conjunction with the StereoSet data, is able to reproduce findings from psychological surveys.",
"We project stereotypes from the StereoSet data onto the warmthcompetence space for the 24 target groups that meet both of the following criteria: (1) they are included in the publicly available portion of the StereoSet data, and (2) they have been previously studied for stereotyping in the psychological literature.",
"Based on the findings from psychological surveys, we expect these target groups will be mapped to the following quadrants: 2 Warm-Competent: nurse, psychologist (healthcare professions') (Brambilla et al., 2010), researcher ('professor') (Eckes, 2002).",
"Warm-Incompetent: grandfather (elderly'), mommy, mother (traditional women') (Cuddy et al., 2008), schoolboy, schoolgirl (children') (Fiske, 2018).",
"Cold-Competent: male, gentleman (man') (Glick et al., 2004), Japanese (Lee and Fiske, 2006), commander (Cuddy et al., 2011), manager, entrepreneur (Fiske, 2010), mathematician, physicist, chemist, engineer (scientist') (Losh et al., 2008), software developer (technical ex-pert') (Fiske, 2018).",
"Cold-Incompetent: African, Ethiopian, Ghanian, Eritrean, Hispanic (Lee and Fiske, 2006), Arab (Fiske et al., 2006).",
"To locate each target group on the plane, we generate word embeddings for each of the stereotype words associated with the target group, find the mean, and project the mean to the polar embedding space.",
"As we aim to identify commonly-held stereotypes, we use a simple cosine distance fil-ter to remove outliers, heuristically defined here as any words which are greater than a distance of 0.6 from the mean of the set of words.",
"We also remove words which directly reference a demographic group (e.g., black, white) as these words are vulnerable to racial bias in the embedding model and complicate the interpretation.",
"A complete list of the words in each stereotype cluster can be found in the Appendix B. Figure 1 confirms many of the findings predicted by the literature.",
"Most (67%) of the stereotypes lie in the predicted quadrant, including grandfather and schoolgirl in the paternalistic warm incompetent quadrant; nurse and psychologist in the admired warmcompetent quadrant, manager and male in the envied coldcompetent quadrant, and African and Hispanic in the coldcold quadrant.",
"2 Note that these research findings simply report stereotypical beliefs which are prevalent in North American society; we in no way aim to perpetuate, confirm, or promote these views.",
"reasonable on examination of the underlying data.",
"For example, while men are typically stereotyped as being competent yet cold in the psychological literature, the specific keyword gentlemen evokes a certain subset of men (described with words such as polite , respectful , and considerate ), which ranks higher on the warmth dimension than the target word male (example words: dominant , aggressive ).",
"We also observe that while children have generally been labelled as warmincompetent in previous work (Fiske, 2018), this dataset distinguishes between male and female schoolchildren, and, as expected based on studies of gender, schoolboys are ranked as lower warmth than schoolgirls.",
"The words used to describe schoolboys include references to the naughty' schoolboy stereotype, while the words describing schoolgirls focus on their innocence and naivety.",
"It is also notable that Arab , predicted to lie in the coldincompetent quadrant, is here mapped to the coldcompetent quadrant instead.",
"We hypothesize that this is due to the use of stereotype words like dangerous and violent , which suggest a certain degree of agency and the ability to carry out goals.",
"In contrast, the target group African as well as those associated with African countries are stereotyped as poor and uneducated , and thus low on the competence dimension.",
"In general, we conclude that in most cases the computational approach is successful in mapping stereotyped groups onto the predicted areas of the warmthcompetence plane, and that the cases which diverge from findings in the previous literature do appear to be reasonable, based on an examination of the text data.",
"Having validated the model, we can now apply it to the rest of the stereotype data in StereoSet, as well as the anti-stereotypes.",
"The SCM presents a concise theory to explain stereotypes and resulting prejudiced behaviour; however, it does not generate any predictions about anti-stereotypes.",
"Here, we explore the anti-stereotypes in StereoSet within the context of the SCM, first at the level of individual annotators, and then at the level of target groups (combining words from multiple annotators).",
"We then discuss how we might use information about warmth and competence to generate anti-stereotypes with the specific goal of reducing biased thinking.",
"In this section, we investigate the question: What do human annotators come up with when asked to produce an anti-stereotype?",
"One possibility is that they simply produce the antonym of their stereotype word.",
"To test this hypothesis, for all 58 groups and each pair of stereotype and anti-stereotype words, we obtain a list of antonyms for the stereotype word using the Python library PyDictionary.",
"We additionally search all the synonyms for the stereotype word, and add all of their antonyms to the list of antonyms as well.",
"Then, if the lemma of the anti-stereotype matches the lemma of any of the retrieved antonyms, we consider it a match.",
"However, as seen in Table 3, the strategy of simply producing a direct antonym is only used 23% of the time.",
"We consider four other broad possibilities: (1) that the annotator generates an anti-stereotype word that lies in the opposite quadrant from the stereotype word, e.g., if the stereotype word is low-competence, low-warmth (LC-LW), then the anti-stereotype word should be high-competence, high-warmth (HC-HW); (2) that the annotator chooses a word with the opposite warmth polarity (i.e. flips warmth), while keeping the competence polarity the same; (3) that the annotator chooses a word with the opposite competence polarity (i.e. flips competence), while keeping the warmth polarity the same; (4) that the annotator chooses a word that lies in the same quadrant as the stereotype word.",
"We report the proportion of times that each strategy is observed; first overall, then for each quadrant individually.",
"The choice of whether to modify warmth or competence might also depend on which of those dimensions is most salient for a given word, and so we consider separately words for which the absolute value of competence is greater than the absolute value of warmth, and Strategy Overall HC-HW LC-HW LC-LW HC-LW | C | > | W | | W | > | C | n = 895 n = 192 n = 183 n = 176 n = 344 n = 428 n = 467 Direct antonym 23.4 26.0 32.6 27.8 15.0 27.2 19.2 Opposite quadrant 29.6 30.2 15.5 26.1 38.3 28.1 31.2 Flip warmth 20.6 14.6 26.5 29.5 16.4 12.3 29.8 Flip competence 16.7 24.0 12.7 13.1 16.7 22.8 10.1 Same quadrant 9.6 5.2 12.7 3.4 13.5 9.6 9.6 Table 3: The percentage of times each of the hypothesized strategies of anti-stereotype generation is used for stereotypes, overall and in each quadrant.",
"vice versa.",
"The results are given in Table",
"3. While no single strategy dominates, we can make a few observations.",
"In general, it is more likely that people select an anti-stereotype which is not a direct antonym, but which lies in the opposite quadrant in the warmth-competence plane.",
"Flipping only one axis is less frequent, although we see in the last two columns that it is more likely that the competence will be flipped when competence is the salient dimension for a word, and similarly for warmth.",
"Finally, choosing another word in the same quadrant is rare, but more common in the ambivalent quadrants.",
"While it is not possible to know what thought process the annotators followed to produce anti-stereotypes, we consider the following possible explanation.",
"Just as we have here conceptualized a stereotype as being defined not by a single word, but by a set of words, perhaps each annotator also mentally represents each stereotype as a set of words or ideas.",
"Then, the anti-stereotype word they produce sometimes reflects a different component of their mental image than the initial stereotype word.",
"To give a concrete example from the data, one annotator stereotypes Hispanic people as aggressive , but then comes up with hygienic as an anti-stereotype, suggesting that unhygienic is also part of their multi-dimensional stereotype concept.",
"The choice of whether to select a direct antonym, or whether to negate some other component of the stereotype, may depend on the availability of a familiar lexical antonym, the context sentence, or any number of other factors.",
"In short, it appears that the process by which human annotators generate pairs of stereotype and anti-stereotype words is complex and not easily predicted by the SCM.",
"We then examine how these pairs of stereotype and anti-stereotype words combine to produce an overall anti-stereotype for the target group in question.",
"Taking the same approach as in the previous Target Stereotype Antonym Anti-stereotype African poor rich rich Hispanic poor rich hardworking mother caring uncaring hateful nurse caring uncaring rude commander strong weak stupid mover strong weak weak football player dumb smart weak Table 4: Examples comparing stereotypes with their direct antonym and the anti-stereotype from StereoSet.",
"section, we average the anti-stereotype word vectors to determine the location of the anti-stereotype in the warmthcompetence plane.",
"For each target group, we then select the word closest to the mean for both the stereotype and anti-stereotype clusters.",
"Similarly to when we look at individual word pairs, in 22% of cases, the mean of the anti-stereotype is the direct antonym of the stereotype mean.",
"In the other cases, 45% of the anti-stereotype means lie in the opposite quadrant to the stereotypes, in 16% of cases the warmth polarity is flipped, in 10% of cases the competence polarity is flipped, and in only 7% cases (4 target groups), the anti-stereotype lies in the same quadrant as the stereotype.",
"In Table 4, we offer a few examples of cases where the anti-stereotype means agree and disagree with the direct antonyms of the stereotypes.",
"As in the pairwise analysis, in many cases the anti-stereotypes appear to be emphasizing a supposed characteristic of the target group which is not captured by the stereotype mean; for example, the anti-stereotype for dumb football player' is not smart , but weak demonstrating that strength is also part of the football player stereotype.",
"This is also seen clearly in the fact that two target groups with the same stereotype mean are not always assigned the same anti-stereotype: for example, both Africans and Hispanics are stereotyped as poor , but Africans are assigned the straightforward anti-stereotype rich , while Hispanics are assigned hardworking (perhaps implying that their poverty is due to laziness rather than circumstance).",
"The general conclusion from these experiments is that stereotypes are indeed multi-dimensional, and the anti-stereotypes must be, also.",
"Hence it is not enough to generate an anti-stereotype simply by taking the antonym of the most representative word, nor is it sufficient to identify the most salient dimension of the stereotype and only adjust that.",
"When generating anti-stereotypes, annotators (individually, in the pairwise comparison, and on average) tend to invert both the warmth and competence dimensions, taking into account multiple stereotypical characteristics of the target group.",
"The anti-stereotypes in StereoSet were generated with the goal of evaluating language model bias.",
"Ultimately, our goal is quite different: to reduce biased thinking in humans.",
"In particular, we want to generate anti-stereotypes that emphasize the positive aspects of the target groups.",
"As underscored by Cuddy et al. (2008), many stereotypes are ambivalent: they take the form 'X but Y'.",
"Women are nurturing but weak , scientists are intelligent but anti-social .",
"When we simply take the antonym of the mean, we focus on the single most-representative word; i.e., the X. However, in the examples we can observe that it's actually what comes after the but ... that is the problem.",
"Therefore, in generating anti-stereotypes for these ambivalent stereotypes, we hypothesize that a better approach is not to take the antonym of the primary stereotype (i.e., women are uncaring , scientists are stupid ), but rather to challenge the secondary stereotype (women can be nurturing and strong , scientists can be intelligent and social ).",
"As a first step towards generating anti-stereotypes for such ambivalent stereotypes, we propose the following approach: first identify the most positive aspect of the stereotype (e.g., if the stereotype mean lies in the incompetentwarm quadrant, the word expressing the highest warmth), then identify the most negative aspect of the stereotype in the other dimension (in this example, the word expressing the lowest competence).",
"Then the stereotype can be phrased in the X but Y construction, where X is the positive aspect and Y is the negative aspect.",
"3 To generate a positive anti-stereotype 3 A similar method can be used for warmcompetent and coldincompetent stereotypes, although if all words are positive, an anti-stereotype may not be needed, and if all words which challenges stereotypical thinking while not promoting a negative view of the target group, take the antonym only of the negative aspect.",
"Some examples are given in Table 5.",
"A formal evaluation of these anti-stereotypes would involve carrying out a controlled psychological study in which the anti-stereotypes were embedded in an implicit bias task to see which formulations are most effective at reducing bias; for now, we simply present them as a possible way forward.",
"As shown in the table, taking into account the ambivalent aspects of stereotypes can result in more realistic anti-stereotypes than either taking the mean of the crowd-sourced anti-stereotypes, or simply generating the semantic opposite of the stereotype.",
"For example, the group grandfather is mostly stereotyped as old , and then counter-intuitively anti-stereotyped as young .",
"It is more useful in terms of countering ageism to combat the underlying stereotype that grandfathers are feeble rather than denying that they are often old.",
"Similarly, it does not seem helpful to oppose biased thinking by insisting that entrepreneurs can be lazy , engineers and developers can be dumb , and mothers can be uncaring .",
"Rather, by countering only the negative dimension of ambivalent stereotypes, we can create realistic and positive anti-stereotypes.",
"Despite their prevalence, stereotypes can be hard to recognize and understand.",
"We tend to think about other people on a group level rather than on an individual level because social categorization, although harmful, simplifies the world for us and leads to cognitive ease.",
"However, psychologists have shown that we can overcome such ways of thinking with exposure to information that contradicts those biases.",
"In this exploratory study, we present a computational implementation of the Stereotype Content Model to better understand and counter stereotypes in text.",
"A computational SCM-based framework can be a promising tool for large-scale analysis of stereotypes, by mapping a disparate set of stereotypes to the 2D semantic space of warmth and competence.",
"We described here our first steps towards developing and validating this framework, on a highly constrained dataset: in StereoSet, the annotators were explicitly instructed to produce stereotypical ideas, the target groups and stereotypical words are negative, then an antonym may be more appropriate.",
"are clearly specified, and every stereotype has an associated anti-stereotype generated by the same annotator.",
"In future work, this method should be further assessed by using different datasets and scenarios.",
"For example, it may be possible to collect stereotypical descriptions of target groups in the wild' by searching large corpora from social media or other sources.",
"We plan to extend this framework to analyze stereotypes on the sentence-level and consider the larger context of the conversations.",
"Working with real social media texts will introduce a number of challenges, but will offer the possibility of exploring a wider range of marginalized groups and cultural viewpoints.",
"Related to this, we reiterate that only a portion of the StereoSet dataset is publicly available.",
"Therefore, the data does not include the full set of common stereotypical beliefs for social groups frequently targeted by stereotyping.",
"In fact, some of the most affected communities (e.g., North American Indigenous people, LGBTQ+ community, people with disabilities, etc.) are completely missing from the dataset.",
"In this work, we use this dataset only for illustration purposes and preliminary evaluation of the proposed methodology.",
"Future work should examine data from a wide variety of subpopulations differing in language, ethnicity, cultural background, geographical location, and other characteristics.",
"From a technical perspective, with larger datasets it will be possible to implement a cluster analysis within each target group to reveal the different ways in which a given group can be stereotyped.",
"A classification model may additionally improve the accuracy of the warmthcompetence categorization, although we have chosen the POLAR framework here for its interpretability and ease of visualization.",
"We also examined how we might leverage the developed computational model to challenge stereotypical thinking.",
"Our analysis did not reveal a simple, intuitive explanation for the anti-stereotypes produced by the annotators, suggesting they exploited additional information beyond what was stated in the stereotype word.",
"This extra information may not be captured in a single pair of stereotypeanti-stereotype words, but by considering sets of words, we can better characterize stereotypes as multi-dimensional and often ambivalent concepts, consistent with the established view in psychology.",
"This also allows us to suggest anti-stereotypes which maintain positive beliefs about a group, while challenging negative beliefs.",
"We propose that this methodology may potentially contribute to technology that assists human professionals, such as psychologists, educators, human rights activists, etc., in identifying, tracking, analyzing, and countering stereotypes at large scale in various communication channels.",
"There are a number of ways in which counter-stereotypes can be introduced to users (e.g., through mentions of counter-stereotypical members of the group or facts countering the common beliefs) with the goal of priming users to look at others as individuals and not as stereotypical group representatives.",
"An SCM-based approach can provide the psychological basis and the interpretation of automatic suggestions to users.",
"Since our methodology is intended to be part of a technology-in-the-loop approach, where the final decision on which anti-stereotypes to use and in what way will be made by human professionals, we anticipate few instances where incorrect (i.e., not related, unrealistic, or ineffective) automatically generated anti-stereotypes would be disseminated.",
"In most such cases, since anti-stereotypes are designed to be positive, no harm is expected to be incurred on the affected group.",
"However, it is possible that a positive, seemingly harmless anti-stereotypical description can have a detrimental effect on the target group, or possibly even introduce previously absent biases into the discourse.",
"Further work should investigate the efficiency and potential harms of such approaches in real-life social settings.",
"Data: We present a method for mapping a set of words that represent a stereotypical view of a social category held by a given subpopulation onto the two-dimensional space of warmth and competence.",
"The Stereotype Content Model, on which the methodology is based, has been shown to be applicable across cultures, sub-populations, and time (Fiske, 2015; Fiske and Durante, 2016).",
"Therefore, the methodology is not specific to any subpopulation or any target social group.",
"In the current work, we employ the publicly available portion of the StereoSet dataset (Nadeem et al., 2020).",
"This English-only dataset has been created through crowd-sourcing US workers on Amazon Mechanical Turk.",
"Since Mechanical Turk US workers tend to be younger and have on average lower household income than the general US population (Difallah et al., 2018), the collected data may not represent the stereotypical views of the wider population.",
"Populations from other parts of the world, and even sub-populations in the US, may have different stereotypical views of the same social groups.",
"Furthermore, as discussed in Section 6, the StereoSet dataset does not include stereotype data for a large number of historically marginalized groups.",
"Future work should examine data both referring to, and produced by, a wider range of social and cultural groups.",
"Potential Applications: As discussed previously, the automatically proposed anti-stereotypes can be utilized by human professionals in a variety of ways, e.g., searching for or creating anti-stereotypical images, writing counter-narratives, creating educational resources, etc.",
"One potential concern which has not received attention in the related literature is the possibility that the process of generating counter-stereotypes may itself introduce new biases into the discourse, particularly if these counter-stereotypes are generated automatically, perhaps even in response to adversarial data.",
"We emphasize the importance of using counter-stereotypes not to define new, prescriptive boxes into which groups of people must fit (e.g., from Table 3, that all software developers should be intelligent and healthy, or that all entrepreneurs must be inventive and compassionate).",
"Rather, counter-stereotypes should weaken common stereotypical associations by emphasizing that any social group is not actually homogenous, but a group of individuals with distinct traits and characteristics.",
"In most cases, the algorithm-in-the-loop approach (with automatic suggestions assisting human users) should be adopted to reduce the risk of algorithmic biases being introduced into the public discourse.",
"Often, harmful stereotyping is applied to minority groups.",
"Work on identifying and analyzing stereotypes might propagate the harmful beliefs further, and it is possible that collections of stereotypical descriptions could be misused as information sources for targeted campaigns against vulnerable populations.",
"However, this same information is needed to understand and counter stereotypical views of society.",
"We also note that although we take advantage of word embedding models in our approach, we do not use the representations of target group names.",
"Previous work has shown that biased thinking is encoded in these models, and using them to represent groups can be harmful to specific demographics.",
"Identifying Demographic Characteristics: The proposed methodology deals with societal-level stereotypical and anti-stereotypical representations of groups of people and does not attempt to identify individual user/writer demographic characteristics.",
"However, work on stereotyping and anti-stereotyping entails, by definition, naming and defining social categories of people.",
"Labeling groups not only defines the category boundaries, but also positions them in a hierarchical social-category taxonomy (Beukeboom and Burgers, 2019).",
"We emphasize that our goal is not to maintain and reproduce existing social hierarchies, as cautioned by Blodgett et al. (2020), but rather to help dismantle this kind of categorical thinking through the use of anti-stereotypes.",
"Energy Resources: The proposed SCM-based method is computationally low-cost, and all experiments were performed on a single CPU.",
"Once the pretrained vectors are loaded, the projection and analysis is completed in less than a minute."
] | [
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Retrieve-and-edit seq2seq methods typically retrieve an output from the training set and learn a model to edit it to produce the final output.",
"We propose to extend this framework with a simple and effective post-generation ranking approach.",
"Our framework",
"(i) retrieves several potentially relevant outputs for each input,",
"(ii) edits each candidate independently, and",
"(iii) re-ranks the edited candidates to select the final output.",
"We use a standard editing model with simple task-specific re-ranking approaches, and we show empirically that this approach outperforms existing, significantly more complex methodologies.",
"Experiments on two machine translation (MT) datasets show new state-of-art results.",
"We also achieve near state-of-art performance on the Gigaword summarization dataset, where our analyses show that there is significant room for performance improvement with better candidate output selection in future work.",
"Retrieve-and-edit text generation methods have received significant recent interest; editing human-authored text can potentially avoid many of the challenges that are seen while generating text from scratch, including the tendency to be overly repetitive or to degrade on longer texts (Holtzman et al., 2018, 2019).",
"Retrieve-and-edit methods have been developed for summarization (Cao et al., 2018), machine translation (Wu et al., 2019), language modeling (Guu et al., 2018), and conversation generation (Weston et al., 2018).",
"These methods first retrieve a single output from the training set, and then use a learned model to edit it into the final output.",
"In this paper, we show that generation performance can be improved with a retrieve-edit-rerank approach that instead retrieves a set of outputs from Figure 1: Our retrieve-edit-rerank framework, generating candidate outputs with three retrieved outputs, and re-ranking y 2 as the best candidate post-generation.",
"the training set, edits each independently, and then re-ranks the results to produce the final output.",
"Figure 1 shows an overview of the approach.",
"We use standard keyword-based retrieval and a simple editor, where the retrieved output is concatenated to the original input to train a Transformer-based seq2seq editing model.",
"Our final re-ranking step is task specific, but again very simple in every case.",
"Our goal here is not to find the best possible way to do the re-ranking.",
"Instead, we show that gains are possible and that it helps to see what edits are made for multiple candidates before making the final decision, instead of following previous work by trying to select a single candidate before editing.",
"We evaluate performance on the Gigaword summarization dataset (Rush et al., 2015) and on the English to Dutch (EN-NL) and the English to Hungarian (EN-HU) machine translation (MT) tasks, following Bulte and Tezcan (2019).",
"For MT, we experimented with different re-ranking schemes but found that the original model score (log-likelihood) worked best, amounting to extended beam search within the complete retreive-edit-rerank pipeline.",
"We improve performance by 6.5 BLEU points on EN-NL and 7.5 on EN-HU over the state-of-art Neural Fuzzy Repair system (Bulte and Tezcan, 2019).",
"On Gigaword, we simply re-rank by returning the most common output, and we achieve up to 1.2 ROUGE improvement over the comparable Re 3 Sum model (Cao et al., 2018).",
"Finally, through qualitative analysis, we find evidence that better post-generation ranking is feasible and can lead to substantial performance improvement, which emphasizes the need for future work in developing new post-generation ranking techniques.",
"Recent work has developed retrieve-and-edit approaches for many tasks, including dialogue generation (Weston et al., 2018), language modeling (Guu et al., 2018), code generation (Hashimoto et al., 2018), neural machine translation (NMT) (Gu et al., 2018; Zhang et al., 2018; Cao and Xiong, 2018) and post-editing for NMT (Hokamp, 2017; Dabre et al., 2017).",
"Candidate ranking has served as a core part in some retrieval-based models (Ji et al., 2014; Yan et al., 2016), but these models do not edit the retrieved candidates.",
"For machine translation, Bulte and Tezcan (2019) developed a retrieve-and-edit based LSTM model called Neural Fuzzy Repair (NFR), which they applied on two MT datasets obtained from (Steinberger et al., 2012).",
"Using a keyword based followed by a token edit distance based retrieval method called sss+ed , they showed that concatenating the source and retrieved outputs as the input significantly boosts translation quality.",
"NFR is trained by augmenting the source with up to 3 retrieved outputs, which are fed together into the editing model in several ways.",
"Our approach, instead, simply edits multiple candidates separately and then re-ranks the final results.",
"For summarization, Re 3 Sum (Cao et al., 2018) is an LSTM-based model developed under the retrieve-and-edit framework, and tested on the Gigaword summarization (also headline generation) task (Rush et al., 2015).",
"Re 3 Sum retrieves 30 headlines from the training set using the popular information retrieval method Lucene 1 .",
"Next, it learns a model to pick the single best retrieved headline, which is then edited.",
"BiSET (Wang et al., 2019) is a retrieve-and-edit framework with more complex retrieval ranking and editing stages, which again edits only a single output.",
"We compare our framework's performance against those of NFR, Re 3 Sum, and BiSET, showing the effectiveness of post-generation ranking.",
"Figure 1 shows our proposed retrieve-edit-rerank framework.",
"It has three components:",
"(i) a retrieval mechanism to extract output from the training set;",
"(ii) a seq2seq model to generate output from the source concatenated with the retrieved output; and",
"(iii) a post-generation ranking module to select a high quality output from a set of generated candidates.",
"For the rest of this paper, we will use ( x , y ) to represent a source and target pair, ( x (cid:48) , y (cid:48) ) to denote a retrieved source and output pair from the training set, and y to represent the edited/generated output.",
"Given input x , the goal of the retrieve module is to find a similar training example ( x (cid:48) , y (cid:48) ).",
"We experiment with both Lucene and sss+ed.",
"These can be replaced with any other retrieval methods in the literature.",
"Similar to Re 3 Sum, we design a model that can jointly learn to produce the edited output y and re-rank the retrieved outputs y (cid:48) , which we refer to as pre-ranking , a common practice to determine which retrieved outputs are worth editing.",
"For editing, we use a Transformer as our seq2seq model.",
"We provide the model a concatenated input x [SEP] y (cid:48) , where [SEP] is a separator token, and we train it to produce the original target y with a standard cross entropy loss.",
"For pre-ranking, we add a [RANK] token to the Transformer's encoder analogous to the [CLS] token in BERT (Devlin et al., 2019).",
"We train the model to predict the similarity between y (cid:48) and y as the output of the [RANK] token, akin to predicting a token from a different vocabulary (Ghazvininejad et al., 2019).",
"We use a cross entropy loss based on a text similarity metric 2 , adding it to the Transformer's loss function.",
"For source x , given a set of N input ( x concatenated with N retrieved outputs y (cid:48) ) and generated candidate output pairs:",
"2 we use BLEU for MT and ROUGE-L for Gigaword.",
"This can be any other text similarity metric.",
"this module's objective is to select a high quality candidate output.",
"Ideally, we want to find: y = arg max y i similarity ( y i , y ) , 1 i N For post-ranking , we use simple ranking functions that work effectively.",
"For MT, we calculate the log-likelihood score of the generated candidate outputs using our trained model (Transformer based) and we choose the candidate that gets the highest model score.",
"For Gigaword, our ranking function simply chooses the most frequently generated output from the list of candidates.",
"In preliminary experiments, we tried other ranking methods, but we did not see a gain compared to our simple post-ranking methods.",
"Our goal here is not to find the best possible way to do the post-ranking, but only to show that gains are possible.",
"In particular, running the pre-ranker over a larger candidate list is not enough; we find that it is better to see what edits are made for multiple candidates before making the final decision.",
"This strongly suggests that the direction is worthy of future work, to determine how to best combine the evidence from x , x (cid:48) , y (cid:48) and y .",
"We test our proposed framework on the machine translation datasets English to Dutch (EN-NL) and English to Hungarian (EN-HU) following previous work (Bulte and Tezcan, 2019).",
"The training, validation, and test set sizes, respectively, are 2.4M, 3000 and 3207, and both datasets have the same source English sentences.",
"Additionally, we apply our framework on the Gigaword summarization task (Rush et al., 2015).",
"Here, the training, validation, and test set sizes are 3.8M, 189k, and 1951 respectively.",
"We evaluate MT performance using BLEU 3 scores.",
"For evaluation on Gigaword, we use the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L with commonly used evaluation parameters 4 .",
"follow most of the Transformer base hyperparam-eter configurations Vaswani et al. (2017).",
"We use a 6-layer Transformer with 8 attention heads per layer, 512 model dimensions, 2048 hidden dimensions and shared embeddings.",
"Our Transformer uses segment embeddings, with one segment for x and another for y (cid:48) .",
"For training, we use a learning rate of 5 e 4 , a batch size of 128k tokens, the Adam optimizer (Kingma and Ba, 2014), a dropout of 0.3, and a joined dictionary.",
"We train our models for 200k update steps, and we calculate validation loss following each epoch to choose our final model.",
"For test, we use a beam size of 5.",
"For MT, we use the 3 best retrieved outputs source x to create 4 training examples:",
"This is similar to NFR, which then uses for test, the input x [SEP] y (cid:48) 1 if it exists, and only x otherwise.",
"We use both sss+ed and Lucene to compare how retrieval impacts translation quality.",
"For Gigaword, we train with 10 retrieved outputs as opposed to 30 retrieved by (Cao et al., 2018), and for testing we use 30 retrieved outputs.",
"As a baseline, we also train a Transformer without retrieval.",
"The MT results in Table 1 show that for both EN-NL and EN-HU, the Transformer without retrieval slightly outperforms the LSTM based NFR which includes retrieval.",
"Replacing LSTM with Transformer in NFR (Tr + sss+ed) gives roughly a 4 point increase in BLEU.",
"Replacing sss+ed with Lucene further increases BLEU by 2 points.",
"Generating from x concatenated with the best pre-ranked output further improves performance, System EN-NL EN-HULSTM 51.45 40.47 NFR 58.91 48.24 Transformer (Tr) 59.88 49.61 Tr + sss+ed (NFR equivalent) 62.86 52.74 Tr + Lucene + x [SEP] y (cid:48) 1 64.92 55.16 Tr + Lucene + pre-rank 65.20 55.36 Tr + Lucene + post-rank (ours) 65.43 55.73 Table 1: BLEU scores on the MT datasets.",
"and the best results are obtained by post-ranking, for which we use the highest scored output according to the model.",
"Overall, our retrieve-edit-rerank system with Transformer, Lucene, and a simple but effective post-ranking function obtains a BLEU score increase of 6.52 on EN-NL and 7.49 on EN-HU over the current state of art NFR model.",
"Results on Gigaword are shown in Table",
"2. The Transformer baseline obtains more than a 2 point increase in ROUGE over the LSTM baseline, and it achieves comparable performance to Re 3 Sum which is LSTM based and uses retrieval.",
"While pre-ranking before editing hurts performance, with post-ranking, our model is able to outperform the Transformer baseline and Re 3 Sum, obtaining between 0.55-1.24 improvement in ROUGE scores.",
"Our model comes slightly short of the retrieve-and-edit based state-of-art BiSET (Wang et al., 2019).",
"However, BiSET uses more complex pre-ranking and editing stages which could also incorporated into our model.",
"We leave this exploration to future work as it is largely orthogonal to post-ranking, which is the focus of our efforts.",
"Overall, with retrieve-edit-rerank, our model outperforms comparable systems which use retrieve-and-edit but no post-generation ranking, demonstrating that a simple post-ranking can boost the performance across two challenging tasks.",
"We report a more detailed analysis on Gigaword, which strongly suggests performance can be further improved by using better post-ranking methods.",
"For this purpose, we use an Oracle that has access to the gold target outputs.",
"Using this Oracle, we find the N -best generated candidate outputs (out of 30 total generated) in terms of ROUGE-1 similarity to the target.",
"We vary N from 1 to 30, and for each N , we randomly select one of the N Figure 2: Comparison with Oracle-based post-ranking methods in Gigaword.",
"best Oracle-chosen outputs.",
"The ROUGE-1 scores obtained for each N are shown in Figure",
"2. We also provide lower bounds which show the performance obtained with the candidate from the best N that is least similar to the target.",
"Figure 2 shows that our post-generation ranker, which selects the most-frequent candidate output, performs better than choosing a random candidate output ( N =30).",
"We also observe that randomly choosing from one of the 1st 26th best (out of 30) generated outputs surpasses the summarization performance achieved with our post-ranking function.",
"Moreover, choosing any of the 12-best candidates is a feasible strategy that outperforms our ranking function.",
"These observations suggest that many of the 30 retrieved outputs are useful for effective summary generation, and hence, there is a large room for improving by designing new post-generation ranking algorithms.",
"Similar analysis on MT shows that a ranker that always selects the optimal of the three candidate outputs gets about 3-5 BLEU points improvement over our post-ranking based models, leaving room for further performance gains.",
"To analyze the impact of post-ranking, we compare various outputs from our models for the Gigaword test set, as shown in Table",
"3. For the sample 3A, when augmenting the source with y (cid:48) 1 or the pre-ranked y (cid:48) , the model simply copies the retrieved text and ignores important details from the source.",
"However, the Transformer output indicates that most of the salient information can be obtained from the source itself.",
"By generating multiple outputs with multiple augmented inputs and then choosing the most-frequent output, our post-ranking function helps to lessen the sensitivity of the model to certain retrieved outputs.",
"For sample 3B, post-ranking chooses the output generated using y (cid:48) 1 which is also the actual target.",
"However, due to a poor retrieval, pre-ranking forces the model to generate an output that largely differs from the target.",
"We also found some examples where both the retrieve-only y (cid:48) 1 and the pre-ranked y (cid:48) were the same, and they were copied verbatim to generate the candidate output.",
"However, several of these copied retrieved outputs were too general summaries, and since the source was ignored during generation, the generated candidate output was missing some article specific information present in the target summary.",
"In many of these cases, simply using the source without any retrieval in the input resulted in an output more representative of the target summary, and also post-ranking helped select this better output.",
"These examples highlight the cases where simply relying on the best retrieval or on the pre-ranking can hurt results since the output generated using only the source without any retrieval is the same as the higher quality post-ranked output.",
"Overall, these examples demonstrate the flexibil-ity offered by our post-ranking module.",
"It allows the framework to choose between combinations of generations ignoring retrieval, generations using the closest retrieved output and generations using the pre-ranked output.",
"The post-ranking function also acts like a voting scheme, helping to convey the salient information from the inputs to the output while ignoring noise in the inputs.",
"In this paper, we presented a retrieve-edit-rerank framework for seq2seq text generation.",
"We used Lucene for retrieval, a Transformer model for editing, and simple task-specific post-generation ranking techniques.",
"We applied the framework on two MT datasets and the Gigaword summarization dataset.",
"Our results show that our simple ranking functions are effective in helping our model outperform the comparable retrieve-and-edit based methods for these datasets.",
"By performing analysis on Gigaword, we find that there exists room to improve summarization performance with better post-ranking algorithms, a promising direction for future research.",
"This is in line with our overall goal, which is not to find the best possible way to do the post-ranking, but only to show that gains are possible by editing multiple candidates and then comparing the results.",
"Moving forward, we would like to apply this framework to other retrieve-and-edit based generation scenarios such as dialogue, conversation, and code generation."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"method",
"objective",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"result",
"objective",
"method"
] |
[
"Though word embeddings and topics are complementary representations, several past works have only used pretrained word embeddings in (neural) topic modeling to address data sparsity in short-text or small collection of documents.",
"This work presents a novel neural topic modeling framework using multi-view embedding spaces: (1) pretrained topic-embeddings, and (2) pretrained word-embeddings (context-insensitive from Glove and context-sensitive from BERT models) jointly from one or many sources to improve topic quality and better deal with polysemy.",
"In doing so, we first build respective pools of pretrained topic (i.e., TopicPool ) and word embeddings (i.e., WordPool ).",
"We then identify one or more relevant source domain(s) and transfer knowledge to guide meaningful learning in the sparse target domain.",
"Within neural topic modeling, we quantify the quality of topics and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from news and medical domains.",
"Introducing the multi-source multi-view embedding spaces, we have shown state-of-the-art neural topic modeling using 6 source (high-resource) and 5 target (low-resource) corpora.",
"Probabilistic topic models, such as LDA (Blei et al., 2003), Replicated Softmax (RSM) (Salakhutdi-nov and Hinton, 2009) and Document Neural Autoregressive Distribution Estimator (DocNADE) (Larochelle and Lauly, 2012) are often used to extract topics from text collections and learn latent document representations to perform natural language processing tasks, such as information retrieval (IR).",
"Though they have been shown to be powerful in modeling large text corpora, the topic * : equal contribution Topic Topic Words Topic Label Z 1 ( S 1 ) profit, growth, stocks, apple , fall, Trading consumer, buy, billion, shares Z 2 ( S 2 ) smartphone, ipad, apple , app, Product Line iphone, devices, phone, tablet Z 3 ( S 3 ) microsoft, mac, linux, ibm, ios, Operating System apple , xp, windows, software Z 4 ( T ) apple , talk, computers, shares, ?",
"modeling (TM) still remains challenging especially in the sparse-data setting, especially for the cases where word co-occurrence data is insufficient, e.g., on short-text or a corpus of few documents.",
"It leads to a poor quality of topics and representations.",
"To address data sparsity issues, several works (Das et al., 2015; Nguyen et al., 2015; Gupta et al., 2019a, 2020) have introduced external knowledge in traditional topic models, e.g., incorporating word embeddings obtained from Glove (Pen-nington et al., 2014) or word2vec (Mikolov et al., 2013a).",
"However, no prior work in topic modeling has employed multi-view embedding spaces: (1) pretrained topics , i.e., topical embeddings obtained from large document collections, and (2) pretrained contextualized word embeddings from large-scale language models like BERT (Devlin et al., 2019).",
"Though topics and word embeddings are complementary in how they represent the meaning, they are distinctive in how they learn from word occurrences observed in text corpora.",
"A topic model (Blei et al., 2003) is a statistical tool to infers topic distributions across a collection of documents and assigns a topic to each word occurrence, where the assignment is equally dependent on all other words appearing in the same document.",
"Therefore, a topic has a global view representing semantic structures hidden in document collection.",
"On other hand, word embeddings have primarily local view in the sense that they are learned based on local collocation pattern in a text corpus, where the representation of each word often depends on a local context window (Mikolov et al., 2013b) or is a function of its sentence(s) (Peters et al., 2018).",
"Consequently, they are not aware of the thematic structures underlying the document collection.",
"Additionally, recent studies (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019) have shown a reasonable success in several NLP applications by employing pretrained contextualized word embeddings, where the representation of a word is different in different contexts (i.e., context-sensitive).",
"In context of this work, the representations due to global and local (context-sensitive or context-insensitive) views together are referred as multi-view embeddings.",
"For example in Table 1, consider four topics ( Z 1 Z 4 ) of different domains where the topics ( Z 1 Z 3 ) are respectively obtained from three different high-resource source ( S 1 S 3 ) domains whereas Z 4 from a low-resource target domain T (especially in the data-sparsity settings).",
"Observe that the topics about Trading ( Z 1 ), Product Line ( Z 2 ) and Operating System ( Z 3 ) are coherent and and represent meaningful semantics at document-level via lists of topic words.",
"However in sparse-data settings, the topic Z 4 discovered is incoherent (noisy) and it is difficult to infer meaningful document semantics.",
"Unlike the topics, word embeddings (context-insensitive) encode syntactic and semantic relatedness in fine-granularity and therefore, do not capture thematic structures.",
"For instance, the top-5 nearest neighbors (NN) of apple (below) in word embedding (Mikolov et al., 2013b) space suggest that it refers to a fruit and do not express any topical information (e.g., Trading , Product Line or Health ) in the corpora.",
"Similarly given the NN of the word fall , it is difficult to infer its association with document-level semantics, e.g., Trading as expressed by Z 1 in topic-embedding space.",
"fall NN == falling, falls, drop, tumble, rise, plummet",
"Therefore, topic and word embedding spaces encode complementary semantics.",
"Different to context-insensitive word embeddings, the word apple is referring to an organization and contextualized by different topical semantics respectively in the three sources S 1 -S 3 .",
"Thus, it arises the need for context-sensitive embeddings in topic modeling.",
"beddings : To alleviate the data sparsity issues, it is the first work in unsupervised neural topic modeling (NTM) within transfer learning paradigm that employs multi-view embedding spaces via:",
"(a) Global-view Transfer ( GVT ): Pretrained topic embeddings instead of using word embeddings exclusively, and",
"(b) Multi-view Transfer ( MVT ): Pretrained topic and word embeddings (context-insensitive from Glove (Pennington et al., 2014) and context-sensitive from large-scale language models such as BERT (Devlin et al., 2019) jointly to address data sparsity and polysemy issues.",
"Contribution (2) Multi-source Multi-view Neural Topic Modeling : A single source of prior knowledge is often insufficient due to incomplete and non-overlapping domain information required by a target domain.",
"Therefore, there is a need to leverage multiple sources of prior knowledge, dealing with domain-shifts (Cao et al., 2010) among the target and sources.",
"In doing so, we first learn word and topic representations on multiple source domains to build WordPool and TopicPool , respectively and then perform multi-view and multi-source transfer learning in neural topic modeling by jointly using the complementary representations.",
"We evaluate the effectiveness of multi-source neural topic modeling in multi-view embedding spaces using 7 (5 low-resource and 2 high-resource) target and 5 (high-resource) source corpora from news and medical domains, consisting of short-text, long-text, small and large document collections.",
"We have shown state-of-the-art results with significant gains quantified by generalization (perplexity), interpretability (topic coherence) and text retrieval.",
"The code is available at https://github.com/YatinChaudhary/ Multi-view-Multi-source-Topic-Modeling .",
"Consider a sparse target domain T and a set of source domains S , we first prepare two knowledge bases (KBs) of representations (or embeddings) from document collections of each of the |S| sources: (1) WordPool : a KB of pretrained word embeddings matrices { E 1 , ..., E |S| } , where E k RE K , and (2) TopicPool : a KB of pretrained latent topic embeddings { Z 1 , ..., Z |S| } , where Z k RH K encodes a distribution over a vocabulary of K words.",
"Here, k [1 , ..., |S| ] in superscript indicates knowledge of k th source, and E and H are word embedding and latent topic dimensions, respectively.",
"While topic modeling on T , we introduce the two types of knowledge transfers from one or many sources: Local ( LVT ) and Global ( GVT ) View Transfer using the two KBs of pretrained word (i.e., WordPool ) and topic (i.e., TopicPool ) embeddings, respectively.",
"Specially, we employ a neural autoregressive topic model, i.e., DocNADE as backbone in building the pools and realizing the multi-source multi-view framework.",
"Table 2 describes the notations used.",
"Notice that the superscript used in notations indicates a source.",
"DocNADE (Larochelle and Lauly, 2012) is an unsupervised neural-network based generative topic model that is inspired by the benefits of NADE (Larochelle and Murray, 2011) and RSM (Salakhut-dinov and Hinton, 2009) architectures.",
"Specifically, DocNADE factorizes the joint probability distribution of words in a document as a product of conditional distributions and efficiently models each conditional via a feed-forward neural network (ff-net), following reconstruction mechanism.",
"DocNADE Formulation : For a document v = ( v 1 , ..., v D ) of size D , each word index v i takes value in { 1 , ..., K } of vocabulary size K .",
"DocNADE learns topics in a language modeling fashion (Bengio et al., 2003) and decomposes the joint distribution p ( v ) = (cid:81) Di =1 p ( v i | v <i ) such that each autoregressive conditional p ( v i | v <i ) is modeled by a ff-net using preceding words v <i in the sequence: h i ( v <i ) = g ( c + (cid:88) q<i W : ,v q ) and g = { sigmoid , tanh } p ( v i = w | v <i ) = exp( b w + U w, : h i ( v <i )) (cid:80) w (cid:48) exp( b w (cid:48) + U w (cid:48) , : h i ( v <i )) for each word i { 1 , ..., D } where v <i is the subvector consisting of all v q such that q < i i.e., v <i { v 1 , ..., v i 1 } , g ( ) is a non-linear activation function, W RH K and U RK H are weight matrices, c RH and b RK are bias parameter vectors.",
"H is the number of hidden units (the number of topics to be discovered).",
"Figure 1 (left) (except WordPool ) describes the DocNADE architecture for the i th autoregressive step, where the parameter W is shared in the feed-forward networks and h i encodes latent document-topic proportion.",
"The value of each unit j in the hidden vector signifies contribution of the j th topic in the proportion.",
"Importantly, the topic-word matrix W has a property that the column vector W : ,v i corresponds to embedding of the word v i , whereas the row vector W j, : encodes latent features for the j th topic (i.e., topic-word distribution).",
"We leverage this property to introduce external knowledge via word and topic embeddings.",
"Algorithm 1 (for DocNADE, set both LVT and GVT to False ) demonstrates the computation of log p ( v ) and loss (i.e., negative log-likelihood) L ( v ) that is minimized using stochastic gradient descent.",
"Moreover, computing each h i is efficient (linear complexity) due to NADE architecture that leverages the pre-activation a i 1 of ( i 1) th step in computing a i for the i th step (line #6).",
"See Larochelle and Lauly (2012) for further details.",
"Why DocNADE backbone : It has shown outperforming traditional models such as LDA and RSM.",
"Additionally, Gupta et al. (2019a,b) have extended DocNADE on short texts by introducing context-insensitive word embeddings; however, based on a single-source transfer.",
"Thus, we adopt DocNADE.",
"We describe our transfer learning framework in topic modeling that jointly exploits the complementary prior knowledge accumulated in ( WordPool , TopicPool ), obtained from large document collections (DCs) from several sources.",
"In doing so, we first apply the DocNADE to generate a topic-word matrix for each of the DCs, where its column-vector and row-vector generate E k and Z k , respectively for the k th source.",
"See appendix for the mechanics of extracting word and topic embeddings from the topic-word matrix of a source.",
"LVT+MST Formulation for Multi-source Word Embedding Transfer : As illustrated in Figure 1 (left) and Algorithm 1 (with LVT being True , line #7), we perform transfer learning on a target T using the WordPool of pretrained word embeddings { E 1 , ..., E |S| } from several sources S (i.e., multi-source) under the two schemes: scheme",
"(i) : Using a domain-relevance factor for every source in the WordPool such that the hidden vector h i encodes document-topic distribution, augmented with prior knowledge in form of pretrained word embeddings from several sources: h i ( v <i ) = g ( c + (cid:88) q<i W : ,v q + (cid:88) q<i |S| (cid:88) k =1 k E k : ,v q ) Here, k refers to the k th source and k is a weight for E k that controls the amount of knowledge transferred in T , based on cross-domain overlap.",
"scheme",
"(ii) : Using a projection matrix P RH P with P = E |S| in order to align word-embedding spaces of the target and all source domains for all D words in the document v such that: For q { i, ...D } : e q = concat ( E 1: ,v q , ..., E k : ,v q ) h i ( v <i ) = g ( c + (cid:88) q<i W : ,v q + (cid:88) q<i P e q ) Unlike scheme",
"(i), the second schema allows us to automatically determine shifts in the target and source domains, identify and transfer relevant prior knowledge from many sources without configuring for every source.",
"To better guide TM, we also introduce pre-trained contextualized word embedding from BERT, concatenating with e q .",
"GVT+MST Formulation for Multi-source Topic Embedding Transfer : Next, we perform knowledge transfer exclusively using the TopicPool of pretrained topic embeddings (e.g., Z k ) from one or several sources, S .",
"In doing so, we add a regularization term to the loss function L ( v ) and require DocNADE to minimize the overall loss in a way that the (latent) topic features in W simultaneously inherit relevant topical features from each of the source domains S , and thus, it generates meaningful representations for the target T in order to address data-sparsity.",
"The overall loss L ( v ) due to GVT+MST configuration in DocNADE is: L ( v ) = log p ( v ) + |S| (cid:88) k =1 k H (cid:88) j =1 || A kj, : W Z kj, : || 22 Target Domain Corpora Source Domain Corpora ID Data Train Val Test K L C ID Data Train Val Test K L C T 1 20NSshort 1.3k 0.1k 0.5k 1.4k 13.5 20 S 1 20NS 7.9k 1.6k 5.2k 2k 107.5 20 T 2 20NSsmall 0.4k 0.2k 0.2k 2k 187.5 20 S 2 R21578 7.3k 0.5k 3.0k 2k 128 90 T 3 TMNtitle 22.8k 2.0k 7.8k 2k 4.9 7 S 3 TMN 22.8k 2.0k 7.8k 2k 19 7 T 4 R21578title 7.3k 0.5k 3.0k 2k 7.3 90 S 4 AGNews 118k 2.0k 7.6k 5k 38 4 T 5 Ohsumedtitle 8.3k 2.1k 12.7k 2k 11.9 23 S 5 PubMed 15.0k 2.5k 2.5k 3k 254.8 T 6 Ohsumed 8.3k 2.1k 12.7k 3k 159.1 23 Table 3: Data statistics: Short/long texts and/or small/large corpora in target and source domains.",
"T 1 T 2 T 3 T 4 T 5 T 6 S 1 I I R D D D S 2 D D D I D D S 3 R R I D D D S 4 R R R D D D S 5 D D D D -Table 4: Domain overlap in source-target corpora.",
"Here, A k RH H aligns latent topics in the target T and k th source, and k governs the degree of imitation of topic features Z k by W in T .",
"Consequently, the generative process of learning meaningful topics in W of the target domain T is guided by relevant topic features { Z } |S| 1 TopicPool .",
"Algorithm 1 (line #11) describes the computation of the loss, when GVT = True and LVT = False .",
"Moreover, Figure 1 (right) illustrates the need for topic alignments between target and source(s).",
"Here, j indicates the topic (i.e., row) index in a topic matrix, e.g., Z k .",
"Observe that the first topic (gray curve), i.e., Z 1 j =1 Z 1 of the first source aligns with the first row-vector (i.e., topic) of W (of target).",
"However, the other two topics Z 1 j =2 , Z 1 j =3 Z 1 need alignment with the target.",
"MVT+MST Formulation for Multi-source Word and Topic Embeddings Transfer : When LVT and GVT are True (Algorithm 1) for many sources, the two complementary representations are jointly used in transfer learning using WordPool and TopicPool , and therefore, the name multi-view and multi-source transfers.",
"Computational complexity of NTM : For DocNADE, the complexity of computing all hidden layers h i ( v <i ) is in O ( DH ) and all p ( v | v <i ) in O ( KDH ) .",
"Thus, the overall complexity of DocNADE is in O ( DH + KDH ) .",
"Within the proposed transfer learning framework, the complexity of computing all hidden layers ( LVT+MST in scheme",
"(i)) and topic-embedding transfer term ( GVT+MST ) is in O ( DH + |S| DH ) and O ( |S| KH ) , respectively.",
"Since |S| <<H , thus the overall complexity of DocNADE with MVT+MST is in O ( DH + KDH + KH ) .",
"Datasets : Table 3 describes the datasets used in high-resource source and low-and high-resource target domains for our experi-Baselines",
"ments.",
"The target domain T consists of four short-text corpora ( 20NSshort , TMNtitle , R21578title and Ohsumedtitle ), one small corpus ( 20NSsmall ) and two large corpora ( TMN and Ohsumed ).",
"However in source S , we use five large corpora ( 20NS , R21578 , TMN , AGnews and PubMed ) in different label spaces (i.e, domains).",
"Here, the corpora ( T 5 , T 6 and S 5 ) belong to medical and others to news .",
"Additionally, Table 4 suggests domain overlap (label match) in the target and source corpora, where we define 3 types of overlap: I (identical) if all labels match, R (related) if some labels match, and D (distant) if a very few or no labels match.",
"Note, our approaches are completely unsupervised and do not use the data labels ( appendix ).",
"Reproducibility : We follow the experimental setup similar to DocNADE (Larochelle and Lauly, 2012) and DocNADEe (Gupta et al., 2019a), where the number of topics ( H ) is set to 200 .",
"While DocNADEe requires the dimension (i.e., E ) of word embeddings be the same as the latent topic (i.e., H ), we follow scheme",
"(ii) (Algorithm 1) to introduce KBs from Model Scores on Target Corpus ( in sparse-data and sufficient-data settings ) Source or Transfer 20NSshort TMNtitle R21578title 20NSsmall TMN Corpus Type PPL COH IR PPL COH IR PPL COH IR PPL COH IR PPL COHB a s e li n e s Baseline TMNVDM 1047 .736 .076 973 .740 .190 372 .735 .271 957 .515 .090 833 .673 without WordProdLDA 923 .689 .062 1527 .744 .170 480 .742 .200 1181 .394 .062 1519 .577 Embeddings DocNADE 646 .667 .290 706 .709 .521 192 .713 .657 594 .462 .270 584 .636 P r o p o s e d 20NS LVT 630 .673 .298 705 .709 .523 194 .708 .656 594 .455 .288 582 .649 GVT 646 .690 .303 718 .720 .527 184 .698 .660 594 .500 .310 590 .652 MVT 638 .690 .314 714 .718 .528 188 .715 .655 600 .499 .311 588 .650 TMNLVT 649 .668 .296 655 .731 .548 187 .703 .659 593 .460 .273 -GVT 661 .692 .294 689 .728 .555 191 .709 .660 596 .521 .276 -MVT 658 .687 .297 663 .747 .553 195 .720 .660 599 .507 .292 -R21578 LVT 656 .667 .292 704 .715 .522 186 .715 .676 593 .458 .267 581 .636 GVT 654 .672 .293 716 .719 .526 194 .706 .672 595 .485 .279 591 .646 MVT 650 .670 .296 716 .720 .528 194 .724 .676 599 .490 .280 589 .650 AGnews LVT 650 .677 .297 682 .723 .533 185 .710 .659 592 .458 .260 564 .668 GVT 667 .695 .300 728 .735 .534 190 .717 .663 598 .563 .282 601 .684 MVT 659 .696 .290 718 .740 .533 189 .727 .659 599 .566 .279 592 .686 MSTLVT 640 .678 .308 663 .732 .547 182 .739 .673 594 .542 .277 568 .674 GVT 658 .705 .305 704 .746 .550 192 .727 .673 599 .585 .326 602 .680 MVT 656 .740 .314 680 .752 .569 188 .745 .685 600 .637 .285 600 .690 Gain%(vs DocNADE) 2.48 10.9 8.28 7.22 6.06 9.21 5.20 4.49 4.26 0.34 37.9 20.7 3.42 8.50 Table 6: State-of-the-art comparisons with TMs: Perplexity (PPL), topic coherence (COH) and precision@recall (IR) at retrieval fraction 0 .",
"pre-trained word embeddings from Glove, FastText ( E =300) (Bojanowski et al., 2017) and BERT-base ( E =768) models.",
"See appendix for the experimental setup, hyperparameters and optimal values of k [0 . 1 , 0 . 5 , 1 . 0] and k [0 . 1 , 0 . 01 , 0 . 001] .",
"Baselines (Related Works) : (1) Topic Models without Transfer Learning that learn topics in isolation using the given target corpus only.",
"We employ LDA-based variant, i.e., ProdLDA (Srivastava and Sutton, 2017) and neural network-based variants, i.e., DocNADE (autoregressive) and NVDM (non-autoregressive) (Miao et al., 2016).",
"(2) Topic Models with Transfer Learning that leverages pre-trained context-insensitive word embeddings (Pennington et al., 2014).",
"We consider topic models based on both LDA, i.e., Gauss-LDA (Das et al., 2015) and glove-GMM (Nguyen et al., 2015), and neural networks, i.e., DocNADEe (Gupta et al., 2019a).",
"They do not leverage pretrained topic-embeddings (i.e., GVT ), contextualized word-embedding and MST-MVT techniques.",
"(3) Unsupervised Document Representation to quantify the quality of document representations.",
"We use 3 strategies: doc2vec (Le and Mikolov, 2014), EmbSum-Glove and EmbSum-BERT (rep-resent a document by summing the pre-trained embeddings of it's words from Glove and BERT).",
"transfer learning capabilities of the proposed framework, where we build (train) a TM using all source corpora and evaluate on the target corpus T , and (5) Data-augmentation that first augments the",
"target corpus with all the source corpora and then",
"builds a TM to evaluate transfer learning on T .",
"Table 5 summarizes the comparison of this work with the aforementioned baselines.",
"Tables 6 and 7 employ baseline TMs without and with transfer learning, respectively.",
"To evaluate generative performance of DocNADE-based NTM, we compute average held-out perplexity per word: PPL = exp (cid:0) 1 N (cid:80) Nt =1 1 | v t | log p ( v t ) (cid:1) , where N and | v t | are the number of documents and words in a document v t , respectively.",
"Tables 6 and 7 quantitatively show PPL scores on the five target corpora using one or four sources.",
"In Table 6 using TMN (as a single source) for LVT, GVT and MVT transfer types on the target TMNtitle , we see improved (reduced) PPL scores: ( 655 vs 706 ), ( 689 vs 706 ) and ( 663 vs 706 ) respectively in comparison to DocNADE.",
"We also observe gains due to MST+LVT, MST+GVT and MST+MVT configurations on TMNtitle .",
"Similarly in MST+LVT for R21578title , we observe a gain of 5.2% (182 vs 192), suggesting that multi-source transfer learning using pretrained KBs from Model Scores on Target Corpus ( in sparse-data and sufficient-data settings ) Source or Transfer 20NSshort TMNtitle R21578title 20NSsmall TMN Corpus Type PPL COH IR PPL COH IR PPL COH IR PPL COH IR PPL COHB a s e li n e s doc2vec -.090 -.190 -.518 -.200 -EmbSum-Glove -.236 -.513 -.587 -.214 -EmbSum-BERT -.261 -.499 -.594 -.262 -Baseline TM Gauss-LDA -.080 -.408 -.367 -.090 -with Word-glove-DMM -.512 .183 -.633 .445 -.364 .273 -.578 .090 -.705",
"word and topic embeddings (jointly) helps improving TM, and it also verifies domain relatedness (e.g., in TMNTMNtitle and AGnews -TMN ).",
"Similarly, Table 7 reports gains in PPL (e.g., on TMNtitle , R21578title , etc.) compared to the baseline DocNADEe.",
"PPL scores due to BERT can be not computed since its embeddings are aware of both preceding and following contexts.",
"In Table 8, we show PPL scores on 2 medical target corpora: Ohsumtitle and Ohsumed using 2 sources: AGnews ( news ) and PubMed ( medical ) to perform cross-domain and in-domain transfers.",
"We see that using PubMed for LVT on both the targets improves generalization.",
"Overall, we report a gain of 17.3% ( 1268 vs 1534 ) on Ohsumedtitle and 8.55% ( 1497 vs 1637 ) on Ohsumed datasets, compared to DocNADEe.",
"While PPL is used for model selection, Chang et al. (2009) showed in some cases humans preferred TMs (based on the semantic quality of topics) with higher (worse) perplexities.",
"Therefore, we also estimate the quality of topics.",
"We follow Rder et al. (2015) and Gupta et al. (2019a) to compute COH of the top 10 words in each topic.",
"Essentially, the higher scores imply the coherent topics.",
"Tables 6 and 7 (under COH column) demonstrate that our approaches (GVT, MVT and MST) show noticeable gains and thus improve topic quality.",
"For instance in Table 6, when AGnews is used as a single source for 20NSsmall datatset, we observe a gain in COH due to GVT (.563 vs .462) and MVT (.566 vs .462).",
"Additionally, noticeable gains are reported due to MST+LVT (.542 vs .462), MST+GVT (.585 vs .462) and MST+MVT (.637 vs .462), compared to DocNADE.",
"Importantly, we find a trend MVT > GVT > LVT in COH scores for both the single-source and multi-source transfers.",
"Similarly, Table 7 show noticeable gains (e.g., 40.7%, 10.4%, 7.08%, etc.) in COH due to MST+MVT+Glove +FastText+BERT setting.",
"Moreover, Table 8 shows gains in COH due to GVT on Ohsumedtitle and Ohsumed , using pretrained knowledge from PubMed .",
"Overall, the GVT, MVT and MST boost COH for all the five target corpora compared to the baseline TMs (i.e., DocNADE and DocNADEe).",
"The improvements suggest that the approaches scale across domains.",
"We further evaluate the quality of document representations and perform an IR task using the label information only to compute precision.",
"We follow the experimental setup similar to Gupta et al. (2019a).",
"See the details in appendix .",
"Tables 6 and 7 report precision scores at retrieval fraction 0 .",
"02 where the configuration MST+MVT outperforms both the DocNADE and DocNADEe for all 4 targets.",
"We observe large gains in precision:",
"(a) Table 6: 20.7% (.326 vs .270) on 20NSsmall , 9.21% (.569 vs .521) on 0.001 0.002 0.005 0.01 0.02 0.05 0.1 0 .",
"TMNtitle , etc.,",
"(b) Table 7: 11.9% (.604 vs .540) on TMNtitle and 9.5% (.322 vs .294) on 20NSshort , etc.,",
"(c) Table 8: 14.4% (.183 vs .160) on Ohsumedtitle .",
"Additionally, Figures 2a, 2b, 2c and 2d illustrate precision-recall curves on 20NSshort , 20NSsmall , TMNtitle and R21578title respectively, where MST+MVT and MST+GVT consistently outperform the baselines at all fractions.",
"Figures 2a, 2b, 2c and 2d show precision in the zero-shot (source-only training) and data-augmentation (source+target training) configurations.",
"Observe that the latter helps in learning meaningful representations and performs better than zero-shot; however, it is outperformed by MST+MVT, suggesting that a naive (data space) augmentation does not add sufficient prior or relevant information to the sparse target.",
"Thus, we find that it is beneficial to augment training data in feature space (e.g., LVT, GVT and MVT) especially for unsupervised topic models using WordPool and TopicPool .",
"Moreover in the few-shot setting, we first split the training data of TMNtitle into several sets: 20%, 40%, 60%, 80% of the training set and then retrain DocNADE, DocNADEe and Doc-NADE+MST+MVT on each as a sparse target.",
"We demonstrate transfer learning in such sparse-data settings using the KBs: WordPool and TopicPool jointly.",
"Figure 2e plots precision at retrieval fraction 0 .",
"02 and validates that the proposed modeling consistently outperforms both the baselines: DocNADE and DocNADEe.",
"Beyond IR, we further investigate computing topic coherence (COH) for the zero-shot and data-augmentation baselines, where the COH scores in Figure 2f suggest that MST+MVT outperforms DocNADEe, zero-shot and data-augmentation.",
"For topic level inspection, we first extract topics using the rows of W of source and target corpora.",
"Table 9 shows the topics (top-5 words) from source and target domains.",
"Observe that the target topics become more coherent after transfer learning (i.e., +GVT) from one or more sources.",
"The blue color signifies that a target topic has imitated certain topic words from the source.",
"We also show a topic (the last) improved due to multi-source transfer.",
"Observe that the NNs in the target become more meaningful by gaining knowledge mainly from 20NS source.",
"We have presented a state-of-the-art neural topic modeling framework using multi-view embedding spaces: pretrained topic-embeddings and word-embeddings (context-sensitive and context-insensitive) from one or many sources to improve quality of topics and document representations.",
"This research was supported by the Federal Ministry for Economic Affairs and Energy (Bundeswirtschaftsministerium: bmwi.de), grant 01MD19003E (PLASS: Platform for Analytical Supply Chain Mangement Services, plass.io) at Siemens AG (TechnologyMachine Intelligence), Munich Germany."
] | [
"abstain",
"objective",
"objective",
"method",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other"
] |
[
"Round-trip Machine Translation (MT) is a popular choice for paraphrase generation, which leverages readily available parallel corpora for supervision.",
"In this paper, we formalize the implicit similarity function induced by this approach, and show that it is susceptible to non-paraphrase pairs sharing a single ambiguous translation.",
"Based on these insights, we design an alternative similarity metric that mitigates this issue by requiring the entire translation distribution to match, and implement a relaxation of it through the Information Bottleneck method.",
"Our approach incorporates an adversarial term into MT training in order to learn representations that encode as much information about the reference translation as possible, while keeping as little information about the input as possible.",
"Paraphrases can be generated by decoding back to the source from this representation, without having to generate pivot translations.",
"In addition to being more principled and efficient than round-trip MT, our approach offers an adjustable parameter to control the fidelity-diversity trade-off, and obtains better results in our experiments.",
"Paraphrase generation aims to generate alternative surface forms expressing the same semantic content as the original text (Madnani and Dorr, 2010), with applications in language understanding and data augmentation (Zhou and Bhat, 2021).",
"One popular approach is to use an MT system to translate the input text into a pivot language and back (Wiet-ing and Gimpel, 2018; Mallinson et al., 2017; Roy and Grangier, 2019).",
"While it intuitively makes sense that translating to another language and back should keep the meaning of a sentence intact while changing its surface form, it is not clear what exactly would be considered a paraphrase by such a system.",
"round-trip MT system can be naturally decomposed as P ( x p | x s ) = P ( x p ) S ( x p , x s ) , where S is a symmetric similarity metric over the paraphrase space and P ( x p ) the probability of x p .",
"We argue that this similarity function is not appropriate in the general case, as it can assign a high score to sentence pairs that share an ambiguous translation despite not being paraphrases of each other.",
"This phenomenon is illustrated in Figure 2, where x s and x p share a confounding translation without gender marker.",
"So as to address this issue, we design an alternative similarity function that requires the entire translation distribution to match, and develop a relaxation of it through the Information Bottleneck (IB) method.",
"We implement this approach using an adversarial learning system depicted in Figure",
"1. 1621 He will go to school l ir a la escuela x s y j y i He will go to school x s x p She will go to school Ir a la escuela Figure 2: Confounding translation problem in round-trip MT. Ir a la escuela does not mark the gender of the subject due to ellipsis, and it is thus a valid translation of both He will go to school and She will go to school .",
"Our model combines an encoder that, for a given sentence, removes the information that is not relevant to predict its translation, and a decoder that reconstructs a paraphrase from this encoding.",
"In addition to being more principled, our approach is more efficient than round-trip MT at inference, can be tuned to favor fidelity or diversity, and achieves a better trade-off between the two.",
"Our code is freely available 1 .",
"We next review the paraphrase generation literature (2.1), and describe the information bottleneck method (2.2), which is the basis of our proposal.",
"Early work on paraphrasing focused on retrieval methods, either extracting plausible sentences from large corpora for generation (Barzilay and McKe-own, 2001; Bannard and Callison-Burch, 2005), or identifying paraphrase pairs from weakly aligned corpora to create paraphrase datasets (Coster and Kauchak, 2011; Dolan et al., 2004).",
"More recently, neural approaches for paraphrase generation have dominated the field.",
"We classify these methods according to the type of supervision they use.",
"Monolingual corpora.",
"These systems are trained in an unsupervised fashion using unlabeled monolingual corpora.",
"They usually employ an information bottleneck, with the goal of encoding semantic information in the latent space.",
"Approaches include Variational Autoencoders (VAE) (Bowman et al., 2016), VAEs with Vector Quantization (Roy and Grangier, 2019), and latent bag-of-words models (Fu et al., 2019).",
"Huang and Chang (2021) disentangle semantic and syntactic content in the latent space through a bag of words 1 https://github.com/aitorormazabal/ paraphrasing-from-parallel representation, which allows for syntactically controllable generation.",
"Parallel corpora.",
"These systems are trained on pairs of parallel sentences in two languages.",
"Most of these methods are based on round-trip MT, where a sentence is translated to a pivot language and back in order to obtain a paraphrase.",
"Hu et al. (2019) add lexical constraints to the MT decoding procedure to obtain better paraphrases.",
"Mallinson et al. (2017) generate not one but multiple pivot sentences and use a fusion-in-decoder strategy.",
"Paraphrase corpora.",
"These systems are trained in a supervised manner over pairs or clusters of paraphrases.",
"When such data is available, training a regular sequence-to-sequence model is a strong baseline (Egonmwan and Chali, 2019).",
"Kumar et al. (2019) add submodular optimization to improve paraphrase diversity.",
"Some VAE-based methods also leverage paraphrase clusters to learn a latent representation that disentangles meaning and form (Iyyer et al., 2018; Kumar et al., 2020; Hosking and Lapata, 2021; Chen et al., 2019).",
"Most of these methods require a syntactic exemplar for generation, and assume that all surface forms are valid for all sentences.",
"Hosking and Lapata (2021) do away with this assumption in the context of question paraphrasing, predicting a valid syntactic embedding from a discrete set at test time.",
"While it is paraphrase corpora that offers the strongest supervision, such data is hard to obtain and usually restricted to narrow domains like Quora Question Pairs, WikiAnswers and Twitter (Hosking and Lapata, 2021; Kumar et al., 2019; Egonmwan and Chali, 2019).",
"In contrast, parallel corpora is widely available, while offering a stronger training signal than monolingual corpora.",
"For that reason, round-trip MT is a common choice when paraphrases are needed for downstream tasks (Xie et al., 1622 2020; Artetxe et al., 2020), as well as a common baseline in the paraphrasing literature (Hosking and Lapata, 2021; Roy and Grangier, 2019).",
"2 Our work focuses on this class of systems, identifying the limitations of round-trip MT and proposing a more principled alternative.",
"Given two random variables X, Y , the Information Bottleneck (IB) method (Tishby et al., 1999) seeks to learn a representation T ( X ) that minimizes the Mutual Information (MI) between T and X , while preserving a minimum MI between T and Y .",
"That is, the objective I ( X, T ) s.t. I ( T, Y ) is minimized.",
"Since the MI is usually impossible to calculate exactly for neural representations, a common approach is to use variational methods, that turn the estimation problem into an optimization one.",
"This can be done by adding a neural decoder on top of the representation, and training the entire system end-to-end (Poole et al., 2019).",
"This is the approach we follow in this work.",
"Let X be a random variable representing a sequence in the source language, and Y be a random variable representing its translation into a pivot language.",
"3 Given an input sequence x s X , we can use round-trip MT to generate a paraphrase x p X by translating x s into the pivot language and back, according to the forward and backward translation models P ( y | x s ) and P ( x p | y ) .",
"As such, we can formulate the probability of round-trip MT generating a particular paraphrase x p by marginalizing over the set of possible pivot translations: P ( x p | x s ) = (cid:88) y YP ( y | x s ) P ( x p | y ) (1) In what follows, we will characterize the paraphrases produced by this approach, i.e. the properties that x p needs to meet in relation to x s for P ( x p | x s ) to be high.",
"4 2 Round-trip MT has also been used to generate synthetic paraphrase corpora (Wieting and Gimpel, 2018).",
"3 For convenience, we will also use X and Y to refer to the set of source and target language sequences, and abbreviate probabilities of the form P ( X = x ) as P ( x ) .",
"4 Some round-trip MT systems do not consider all possible translations into the pivot language, but only a subset of them (Mallinson et al., 2017).",
"In that case, the sum in Eq.",
"1 goes over y { y 1 , ..., y K } , and we need to introduce a partition Z = (cid:80) y { y 1 ,...,y K } P ( y | x s ) to normalize the probabilities.",
"However, the fundamental analysis in this section still applies.",
"Refer to Appendix A for more details.",
"By applying Bayes' rule, we can rewrite Eq.",
"1 as follows: P ( x p | x s ) = P ( x p ) (cid:88) y YP ( y | x s ) P ( y | x p ) P ( y ) (cid:124) (cid:123)(cid:122) (cid:125) SMT ( x p ,x s ) (2) The sum on the right hand side can be interpreted as a symmetric similarity function, SMT ( x p , x s ) = SMT ( x s , x p ) = (cid:80) y P ( y | x s ) P ( y | x p ) P ( y ) , which measures the likelihood of two sentences to be actual paraphrases.",
"The probability of x p given x s then becomes P ( x p | x s ) = P ( x p ) SMT ( x p , x s ) , which is the similarity between x s and x p , weighted by the marginal probability of x p .",
"But when are x s and x p considered similar according to the above definition of SMT ( x s , x p ) ?",
"Intuitively, SMT is a measure of the overlap between the conditional distributions that x s and x p induce over Y .",
"This will be highest when P ( y | x s ) P ( y | x p ) is as large as possible for as many y as possible.",
"At the same time, P ( y | x s ) P ( y | x p ) will be high when both P ( y | x s ) and P ( y | x p ) are high, that is, when y is a probable translation of both x s and x p .",
"This captures the intuition that two sentences are similar when they can be translated into the same text in the pivot language.",
"But what if x s and x p have one particular high-probability translation y j in common, but differ in the rest?",
"As illustrated in Figure 2, this can happen when y j is ambiguous in the target language and can mean both x s and x p , even if x s and x p are not equivalent (e.g., when x s uses the masculine form, x p the feminine form, and y j does not mark the gen-der).",
"In this case, the sum (cid:80) y P ( y | x s ) P ( y | x p ) P ( y ) will be dominated by P ( y j | x s ) P ( y j | x p ) P ( y j ) , which will be high when both P ( y j | x s ) and P ( y j | x p ) are high.",
"We can thus conclude that the implicit similarity function underlying round-trip MT is flawed, as it assigns a high score to a pair of sequences ( x s , x p ) that have an ambiguous translation in common.",
"As a consequence, round-trip MT will generate x p as a paraphrase of x s with a high probability, even if the two sequences have a different meaning.",
"As shown in the previous section, the implicit similarity function induced by round-trip MT is not adequate in the general case, as it assigns a high score to pairs of sequences that share a single translation, despite differing in the rest.",
"So as to address 1623 this, we can define an alternative similarity function that requires the entire translation distribution to match: S ( x p , x s ) = (cid:40) 1 P ( y | x p ) = P ( y | x s ) y Y 0 otherwise (3) and use it to replace SMT in Eq.",
"2 so that P ( x p | x s ) P ( x p ) S ( x p , x s ) .",
"However, this definition is too strict, as it is virtually impossible that P ( y | x p ) and P ( y | x s ) are exactly the same for all y Y .",
"5 In 4.1, we define a relaxation of it through the IB method, which introduces an adjustable parameter to control how much we deviate from it.",
"In 4.2, we characterize the paraphrases generated by this approach, showing that they are less susceptible to the problem of confounding translations described in the previous section.",
"So as to implement the similarity function in Eq.",
"3, we will use the IB method to learn an encoding T for X such that the following holds: S ( x p , x s ) = P ( x p | T ( x s )) P ( x p ) Z ( x s ) (4) where Z ( x s ) is a normalizer that does not depend on the paraphrase candidate x p .",
"As seen in 2.2, given a source variable X and a target variable Y , the IB method seeks to find an encoding T ( X ) that minimizes the MI with X (maximizing compression), while preserving a certain amount of information about Y : min TI ( X, T ) s.t I ( T, Y ) .",
"This constrained minimization is achieved by introducing a Lagrange multiplier and minimizing",
"As , all the information about Y is preserved and the IB method learns a minimal sufficient statistic T , that is, an encoding that satisfies I ( T, Y ) = I ( X, Y ) while achieving the lowest I ( T, X ) possible.",
"The following theorem states that such a minimal sufficient statistic T induces the similarity function in Eq.",
"3 (proof in Appendix C): 5 One reason is that we use empirical estimates of P ( y | x p ) and P ( y | x s ) , which will deviate from the ground truth.",
"Theorem",
"1. Suppose the random variable X represents a sentence in the source language, Y represents its translation, and T is a minimal sufficient statistic of X with respect to Y. Let x p and x s be a pair of sentences in the source language.",
"Then, P ( x p | T ( x s )) = P ( x p ) S ( x p ,x s ) Z ( x s ) , where S is given by Equation 3, and Z is a normalizing factor that does not depend on x p .",
"Thus, as the IB method approximates the similarity metric S .",
"In practice, when is set to a fixed finite number, losing some information about the target variable is allowed, and a relaxation of the metric S is learned instead.",
"We will next analyze the relaxation of S induced by the IB method.",
"We will characterize what kind of sentences are considered paraphrases by it, showing that it is less susceptible to the problem of confounding translations found in round-trip MT (3).",
"Derivations for the results in this section, as well as alternative bounds and broader discussion can be found in Appendix B. As seen in 4.1, we define paraphrase probabilities given an encoding T as P ( x p | T ( x s )) = P ( X = x p | T ( X ) = T ( x s )) , which can only be non-zero if T ( x p ) = T ( x s ) .",
"This means that the encoding T will partition the source space into a collection of paraphrase clusters according to its value.",
"Mathematically, given the equivalence relation x 1 x 2 T ( x 1 ) = T ( x 2 ) , only sentence pairs within the same equivalence class will have non-zero paraphrase probabilities.",
"We then have the following theorem: Theorem",
"2. Suppose T is a solution of the IB optimization problem min TI ( X, T ) s.t I ( T, Y ) , and = I ( X, Y ) .",
"If A is the partition on X induced by T , we have: (cid:88) A A max x 1 ,x 2 AP ( x 1 ) P ( x 2 ) 2( P ( x 1 ) + P ( x 2 )) D 1 ( PY | x 1 , PY | x 2 ) 2 , (7) where D 1 is the L 1 norm distance.",
"It is easy to see that, when = 0 , corresponding to = I ( X, Y ) and , this forces all distances to be zero.",
"In that case, only sentences with identical translation distributions are considered paraphrases, in accordance with Theorem",
"1. In the general case, Theorem 2 states that the L 1 distance between the translation distributions of 1624 sentences that are considered paraphrases cannot be high, as it will be bounded by a function of .",
"While the SMT metric in 3 can be dominated by a high-probability term and effectively ignore differences in probability for the less likely translations, the L 1 norm gives equal importance to differences in probability for every translation candidate.",
"Thanks to this, the resulting system will be less susceptible to the problem of confounding translations.",
"In this section, we describe a practical implementation of the IB-based paraphrasing approach defined theoretically in 4.",
"As illustrated in Figure 1, our system can be seen as an extension of a regular encoder-decoder MT architecture with an additional adversarial decoder, which is trained with an auto-encoding objective to reconstruct the original sentence x s from the encoder representation T ( x s ) .",
"The encoder is trained to minimize the cross-entropy loss of the MT decoder, while maximizing the loss of the adversarial decoder.",
"This way, the encoder is encouraged to remove as much information about x s as possible, while retaining the information that is necessary to predict its reference translation y .",
"Thanks to this, T ( x s ) should capture the semantic content of x s (which is relevant to predict y ), without storing additional surface information (which is not relevant to predict y ).",
"Once the model is trained, the adversarial decoder can be used to generate paraphrases of x s from this representation T ( x s ) .",
"This adversarial architecture can be interpreted as an implementation of the IB method as follows.",
"Following Poole et al. (2019), we start by adding a decoder q ( y | t ) on top of the encoder T ( x ) , and rewrite the I ( T, Y ) term as: I ( T, Y ) = EP ( y,t ) (cid:20) log q ( y | t ) P ( y ) (cid:21) + EP ( t ) [ KL ( P ( y | t ) || q ( y | t ))] EP ( y,t ) (cid:20) log q ( y | t ) (cid:21) + h ( Y ) , (8) where equality will hold if q is the true conditional distribution q ( y | t ) = P ( y | t ) , and h is the differential entropy.",
"If we parametrize T and q by a neural network encoder-decoder architecture the EP ( y,t ) (cid:20) log q ( y | t ) (cid:21) term in Eq.",
"8, can be rewritten as EP ( y,x ) (cid:20) log q ( y | T ( x )) (cid:21) , which is precisely the log likelihood of the data distribution of X, Y given by P .",
"In other words, by training the encoder-decoder to maximize Eq.",
"8, we are implicitly maximizing the mutual information I ( T, Y ) .",
"I ( X, T ) EP ( x,t ) (cid:20) log q ( x | t ) (cid:21) + h ( X ) = EP ( x ) (cid:20) log q ( x | T ( x )) (cid:21) + h ( X ) , (9)",
"where equality will hold when q is the true conditional distribution and q ( x | T ( x )) = P ( x | T ( x )) .",
"Thus, given an ideal decoder q that perfectly approximates the conditional distributions q ( x | T ( x )) and q ( y | T ( x )) , the IB minimization problem is equivalent to minimizing E p ( x ) (cid:20) log q ( x | T ( x )) (cid:21) EP ( y,t ) (cid:20) log q ( y | t ) (cid:21) = EP ( x,y ) (cid:20) log q ( x | T ( x )) log q ( y | T ( x )) (cid:21) .",
"(10)",
"In practice, we parametrize both the encoder T and the decoder q with transformer neural networks, and learn them from a parallel corpus.",
"Since log q ( y | T ( x )) is a lower bound of I ( T, Y ) h ( Y ) , maximizing this term is theoretically sound.",
"Minimizing EP ( x ) (cid:20) log q ( x | T ( x )) (cid:21) , on the other hand, amounts to minimizing a lower bound, which, while not as theoretically solid, is common practice in the variational optimization literature (Chen et al., 2018; Kim and Mnih, 2018).",
"L ( T, q ) = EP ( x,y ) [ log q ( y | T ( x )) +(1 ) log q ( x | T ( x ))] = LMT ( T, q ) (1 ) L Adv ( T ) , (11)",
"where LMT is the regular MT loss of cross-entropy with the translation target, and L Adv is the cross-entropy with the source sentence (see Figure 1).",
"6 We thus observe that the proposed adversarial training architecture approximates the IB method.",
"The 6 We make the adversarial term a function of T only in the minimization objective, as the gradient from the adversarial term is only propagated to the encoder.",
"The adversarial decoder is independently trained to predict the source from the encoded representation.",
"setting corresponds to 1 , where the optimal solution is a minimal sufficient statistic.",
"During training, the expectation in Eq.",
"11 is approximated by sampling batches from the training data.",
"Care must be taken when optimizing the loss, as we do not want to propagate gradients of the adversarial loss to the adversarial decoder.",
"If we did, a trivial way to minimize (1 ) log q ( x | T ( x )) would be to make the decoder bad at recovering x , which would not encourage T ( x ) to encode as little information as possible.",
"To prevent this, we use a percentage K of the batches to learn the adversarial decoder log q ( x | T ( x )) , where the encoder is kept frozen.",
"The rest of the batches are used to optimize the full term log q ( y | T ( x )) + (1 ) log q ( x | T ( x )) , but the gradients for the second term are only propagated to the encoder.",
"We experiment with the following systems:",
"Proposed.",
"Our system described in 5.",
"We share the weights between the MT decoder and the adversarial decoder, indicating the language that should be decoded through a special language ID token.",
"Unless otherwise indicated, we use = 0 .",
"73 and K = 0 .",
"7 , which performed best in the development set.",
"7 Round-trip MT. A baseline that uses two separate MT models to translate into a pivot language and back (see 3).",
"We use mBART (Liu et al., 2020) to initialize both our proposed system and round-trip MT, and train them using the same hyper-parameters as in the original work.",
"8 In both cases, we use the English-French WMT14 dataset (Bojar et al., 2014) as our parallel corpus for training.",
"9 We report results for two decoding strategies: beam search with a beam size of 5, and top-10 sampling with a temperature of 0.9 (optimized in the development set).",
"10 7 We performed a grid search, where { 0 .",
"7 , 0 .",
"73 , 0 .",
"8 } and K { 0 .",
"7 , 0 .",
"8 } , and chose the checkpoint with best iBLEU with = 0 .",
"7 .",
"3 e 5 maximum learning rate, and 100 K total steps.",
"9 We filter the dataset by removing sentence pairs with a source/target length ratio that exceeds 1 .",
"5 or are longer than 250 words.",
"We consider two axes when evaluating paraphrases: fidelity (the extent to which the meaning of the input text is preserved) and diversity (the extent to which the surface form is changed).",
"Following common practice, we use a corpus of gold paraphrases to automatically measure these.",
"More concretely, given the source sentence s , the reference paraphrase r and the candidate paraphrase c , we use BLEU( c, r ) as a measure of fidelity, and BLEU( c, s )known as self-BLEUas a measure of diversity.",
"An ideal paraphrase system would give us a high BLEU, with as low a self-BLEU as possible.",
"Given that there is generally a tradeoff between the two, we also report iBLEU = BLEU (1 ) self-BLEU, which combines both metrics into a single score (Mallinson et al., 2017).",
"Following Hosking and Lapata (2021), we set = 0 .",
"7 .",
"For development, we extracted 156 paraphrase pairs from the STS Benchmark dataset (Cer et al., 2017), taking sentence pairs with a similarity score above 4.5.",
"For our final evaluation, we used the Multiple Translations Chinese (MTC) corpus (Huang et al., 2002), which comprises three sources of Chinese journalistic text translated into English by multiple translation agencies.",
"We extract the translations of the first two agencies to obtain an test set of 993 paraphrase pairs, where one is the source and the other the reference paraphrase.",
"The third sentence if kept as an additional paraphrase for estimating human performance.",
"We report our main results in Table",
"1. As it can be seen, our proposed system outperforms all baselines in terms of iBLEU, indicating that it achieves 1626 l = 0.7 l = 0.7 l = 0.73 l = 0.73 l = 0.8 l = 0.8 MT Beam MT Samp 15 20 25 20 30 40 50 SelfBLEU (diversity) BLEU ( f i de li t y ) Methods MT Beam MT Samp Ours Beam Ours Samp Figure 3: Effect of varying the parameter on the development set.",
"a better trade-off between diversity and fidelity.",
"This advantage comes from a large improvement in diversity as measured by self-BLEU, at a cost of a small drop in fidelity as measured by BLEU.",
"Both for round-trip MT and our proposed system, beam search does better than sampling in terms of fidelity, at the cost of sacrificing in diversity.",
"Finally, the human reference scores show ample room for improvement in both axes.",
"While our proposed system achieves the best combined score, our results also show that different approaches behave differently in terms of diversity and fidelity.",
"In practice, it would be desirable to have a knob to control the trade-off between the two, as one may want to favor diversity or fidelity depending on the application.",
"One additional advantage of our approach over round-trip MT is that it offers an adjustable parameter to control the trade-off between these two axes.",
"So as to understand the effect of this parameter, we tried different values of it in the development set, and report the resulting curve in Figure 3 together with the MT baselines.",
"BLEU and Self-BLEU scores of the best checkpoints for each (0.7,0.73,0.8) and plot the results together with the MT baselines for our systems in Figure",
"3. As expected, higher values of yield systems that tend to copy more, being more faithful but less diverse.",
"Consistent with our test results, we find that, for a given value of , beam search does better than sampling in terms of fidelity, but worse in terms of diversity, yet both decoding strategies can 0 20 40 60 80 Diversity Fidelity Fluency MT Beam MT Samp Ours Beam Ours Samp Figure 4: Human evaluation results (larger is better).",
"be adjusted to achieve a similar trade-off.",
"More importantly, we observe that both curves are above round-trip MT, the gap being largest for the sampling variant.",
"We can thus conclude that our proposed approach does better than round-trip MT for a comparable trade-off between diversity and fidelity, while offering a knob to adjust this trade-off as desired.",
"So as to better understand the behavior of our approach in comparison with round-trip MT, we carried out a human evaluation through Amazon Mechanical Turk.",
"Following the setup of Hosking and Lapata (2021), we sample 200 sentences from the MTC corpus and generate a pair of paraphrases for each of them, randomly choosing two systems to generate them.",
"We then ask human evaluators to compare the two sentences according to three criteria: diversity, fidelity and fluency.",
"More details about the judging criteria can be found in Appendix D. Figure 4 reports the percentage of head-to-head comparisons that each system has won.",
"The results that we obtain are consistent with the trends observed in 7.1.",
"More concretely, we observe that the beam search variant of round-trip MT achieves the best results in terms of fluency and fidelity, but does worst in diversity, indicating a tendency to copy.",
"Our method with beam search does slightly better than the sampling MT variant in terms of diversity and slightly worse in terms of fidelity indicating a tendency to favor diversity over fidelity while also being more fluent.",
"Finally, the sampling variant of our method achieves the best diversity, but has the worst fidelity and fluency.",
"ally analyzed some paraphrases, 11 and report some examples in Table",
"2. Just in line with our previous results, we observe that the beam search variant of round-trip MT tends to deviate the least from the original sentence, while the sampling variant of our method generates the most diverse paraphrases (e.g., changing sales of vehicles were not included to vehicles were not included in sales ).",
"At the same time, we observe that this tendency to improve diversity can cause artifacts like paraphrasing named entities (e.g., changing National Youth League to National League of Youth ), which can partly explain the drop in fidelity.",
"In this work, we have shown that the implicit similarity function present in round-trip MT is not appropriate in the general case, as it considers sentence pairs that share a single ambiguous translation to be paraphrases.",
"We address this issue by designing an alternative similarity function that requires the entire translation distribution to match, and develop a relaxation of it through the IB method, which we prove to be less susceptible to the problem of confounding translations.",
"We 11 We randomly sampled 20 sentences from MTC and chose four illustrative examples for Table",
"implement this approach through adversarial learning, training an encoder to preserve as much information as possible about the reference translation, while encoding as little as possible about the source.",
"Not only is our approach more principled than round-trip MT, but it is also more efficient at inference, as it does not need to generate an intermediate translation.",
"In addition, it offers a knob to adjust the fidelity-diversity trade-off through the parameter, and obtains strong results in our experiments, outperforming round-trip MT. Acknowledgments Aitor Ormazabal, Gorka Labaka, Aitor Soroa and Eneko Agirre were supported by the Basque Government (excellence research group IT1343-19 and DeepText project KK-2020/00088) and the Spanish MINECO (project DOMINO PGC2018-102041-B-I00 MCIU/AEI/FEDER, UE).",
"Aitor Ormazabal was supported by a doctoral grant from the Spanish MECD.",
"Computing infrastructure funded by UPV/EHU and Gipuzkoako Foru Aldundia."
] | [
"abstain",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"method",
"result",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"objective",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain"
] |
[
"Vision-language pre-training (VLP) on large-scale image-text pairs has achieved huge success for the cross-modal downstream tasks.",
"The most existing pre-training methods mainly adopt a two-step training procedure, which firstly employs a pre-trained object detector to extract region-based visual features, then concatenates the image representation and text embedding as the input of Transformer to train.",
"However, these methods face problems of using task-specific visual representation of the specific object detector for generic cross-modal understanding, and the computation inefficiency of two-stage pipeline.",
"In this paper, we propose the first end-to-end vision-language pre-trained model for both V+L understanding and generation, namely E2E-VLP, where we build a unified Transformer framework to jointly learn visual representation, and semantic alignments between image and text.",
"We incorporate the tasks of object detection and image captioning into pretraining with a unified Transformer encoder-decoder architecture for enhancing visual learning.",
"An extensive set of experiments have been conducted on well-established vision-language downstream tasks to demonstrate the effectiveness of this novel VLP paradigm.",
"Self-supervised pre-training has achieved great success in a wide range of natural language understanding (Devlin et al., 2018; Liu et al., 2019; Wang et al., 2019; Lan et al., 2019) and generation tasks (Song et al., 2019; Lewis et al., 2019; Bi et al., 2020).",
"Recent studies (Li et al., 2019; Lu et al., 2019; Chen et al., 2019; Tan and Bansal, 2019; Li et al., 2020b; Yu et al., 2020) have also witnessed the progress of self-supervised pretraining on vision-and-language tasks, which learns corresponding author general cross-modal representations from massive image-text pairs, and fine-tunes vision-language pre-training (VLP) models on task-specific data achieving state-of-the-art results on various downstream V+L tasks.",
"Most existing mainstream VLP models adopt a two-step training method, which firstly extracts semantic visual features using a pre-trained object detection model, and then combines the derived object-centric representation of the image and text embedding as the input of Transformer (Vaswani et al., 2017) for cross-modal pre-training.",
"Despite the superior performance brought by the large-scale image-text pairs, the two-stage solution suffers from the following weaknesses:",
"1) the object detection model in the first step is trained on specific visual dataset such as Visual Genome dataset (Kr-ishna et al., 2017), and the visual representation is not optimized towards a more generic cross-modal understanding in the second step.",
"It may suffer from an error propagation problem when the object detection model fails to recognize certain important information.",
"2) extracting region features with an object detection model is so time-consuming that most state-of-the-art models are directly trained and evaluated on cached visual features.",
"This practice not only imposes unnecessary constraints on model designs, but also confronts the run-time inference inefficiency in the prediction phase.",
"Recently, several studies such as (Jiang et al., 2020) have begun to revisit the grid features for cross-modal understanding and found the grid features can also work surprisingly well, while making the model design and training process much simpler.",
"One pioneering work Pixel-BERT (Huang et al., 2020) explores to pre-train with grid features in an end-to-end fashion directly from pixels.",
"It removes all the fine-grained visual pre-training tasks, which proves to be important for V+L pre-training.",
"(Zhang et al., 2021) also demonstrates that visual features provided by the object detection model matter significantly in VLP models.",
"To address the limitations, we propose a new end-to-end paradigm for pixel-level vision-language pre-training, namely E2E-VLP, by enhancing with fine-grained visual learning.",
"During pre-training, E2E-VLP jointly learns the visual region features and the cross-modal representation in a unified Transformer encoder-decoder architecture directly from image pixels.",
"In addition to the typical pre-training tasks of Masked Language Modeling and Image-Text Matching, we enhance the vision-language pre-training with fine-grained visual semantic learning.",
"Specifically, two end-to-end pretraining tasks are further incorporated:",
"1) Object Detection : inspired from DETR (Carion et al., 2020), we view the object detection as a direct set prediction problem.",
"The cross-modal Transformer encoder and image encoder are joint learnt to fuse the cross-modal data from pixels, while the decoder is used to capture fine-grained visual information via bipartite matching between predicted and ground-truth objects;",
"2) Image-Text Generation : to better understand the semantics within the image, we also use the paired text to guide the learning of image features.",
"We use the encoder network to represent the image and a left-to-right decoder to generate the caption text.",
"The standard auto-regressive language model objective is used to maximize the data probability.",
"These two tasks can help learn high-quality visual representations (Zhang et al., 2021; Desai and Johnson, 2020).",
"Detection task can learn object-level visual semantics, while the image caption task can capture text-aligned visual semantics.",
"These two kinds of visual semantics matter significantly in VLP cross-modal fusion.",
"During fine-tuning, E2E-VLP can be flexi-bly applied to vision-language understanding tasks with the encoder module, and vision-language generation tasks with the encoder-decoder module.",
"We evaluate E2E-VLP on a variety of representative vision-language tasks, including visual question answering, natural language visual reasoning, cross-modal retrieval and image captioning.",
"With the new end-to-end pre-training paradigm, we can obtain surprising good performance across different V+L tasks and greatly decrease the online inference time with the new one-stage solution.",
"We make the following major contributions in this paper: We propose the first end-to-end vision-language pre-trained model for both V+L understanding and generation, namely E2E-VLP, which can achieve comparable or superior performance with faster online inference speedup.",
"E2E-VLP is the first model that incorporates fine-grained visual pre-training in an encoder-decoder architecture, which paves a new way for designing advanced vision and language pretraining tasks.",
"We enhance cross-modal feature fusion by visual learning of object detection and image caption, which has empirically shown to be effective for vision-language pre-training.",
"Self-supervised pre-training has substantially advanced the performance across a variety of natural language understanding (Devlin et al., 2018; Liu et al., 2019; Wang et al., 2019; Lan et al., 2019) and text generation tasks (Song et al., 2019; Lewis et al., 2019; Bi et al., 2020).",
"Inspired by language model pre-training, several researchers propose Vision-language pre-training(VLP) models on large-scale image-text pairs, which has proved effective for a wide range of vision-language (VL) tasks, such as VQA (Antol et al., 2015), NLVR (Young et al., 2014), Cross-modal Retrieval (Suhr et al., 2018).",
"The current VLP models mainly take two-step training pipeline, which consists of extracting semantic visual features by object detector and training the cross-modal pre-training model to align text and visual features.",
"In this kind of method, there are mainly two broad directions to conduct vision-language pre-training.",
"The first line uses a single-stream transformer architecture (Vaswani et al., 2017) to model both image and text representations in a unified semantic space such as VLBERT (Su et al., 2019), UNITER (Chen et al., 2019) and OSCAR (Li et al., 2020b).",
"In contrast, the other line adopts a two-stream Transformer architecture that first encodes the image and text modalities separately, and then fuses the cross-modal representations with another Transformer network, such as LXMERT (Tan and Bansal, 2019) and ERNIE-ViL (Yu et al., 2020).",
"Besides, SemVLP (Li et al., 2021) is pre-trained iteratively with two prevalent fashions.",
"These methods are directly trained and evaluated on cached visual features, which imposes unnecessary constraints on model designs and makes it hard to enable an end-to-end vision-language pre-training.",
"Furthermore, Pixel-BERT Figure 1: The overall framework of E2E-VLP.",
"(Huang et al., 2020) represents the first and only work to pre-train with grid features in an end-to-end fashion.",
"However, due to the characteristics of learnt grid features, the end-to-end pre-training is conducted without object-level visual tasks, which is important in aligning the semantics between cross-modal representations.",
"In this paper, we focus on enhancing the end-to-end vision-language pre-training with more fine-grained visual semantic learning.",
"The object detection task and image caption task are incorporated into the pre-training stage for further improving the fine-grained visual-language understanding and generation abilities.",
"The architecture of E2E-VLP is shown in Figure 1.",
"Inspired by the recent breakthrough of using Transformer on computer vision tasks such as DETR (Carion et al., 2020) and ViT Transformer (Dosovitskiy et al., 2020), we propose to use a Transformer encoder-decoder framework (Vaswani et al., 2017) for cross-modal learning, and a simple CNN backbone module is used as the image encoder for extracting visual representations from pixels so as to allow for more flexible network design.",
"We jointly train the whole framework in an end-to-end fashion, so as to learn the generic visual representations and high-level cross-modal alignment simultaneously.",
"Different V+L pre-training tasks are designed to further enhance the cross-modal understanding and generation abilities.",
"Next, we describe each component of this model in detail.",
"The input to E2E-VLP is an image and its related text (e.g. caption text).",
"We first introduce the way to represent the text sequence and raw image pixels as input to the Transformer.",
"Sentence Embeddings Each sentence is first split into a sequence of sub-words { w 1 , ..., w m } by WordPiece tokenizer.",
"Then, similar to BERT (De-vlin et al., 2018), each token w i is assigned three kinds of embeddings: token, segment and position embeddings.",
"The three embeddings are summed and layer-normalized to represent input sentence representations as a sequence of embedding vectors E emb = { e CLS , e 1 , ..., e m , e SEP } , where [ CLS ] and [ SEP ] are special tokens in BERT.",
"Image Representations For image feature representation, the most existing VLP models follow Bottom-Up and Top-Down Attention (Anderson et al., 2018) to extract region features by Faster R-CNN (Ren et al., 2015) trained on Visual Genome dataset.",
"The detector extracts region features by first detecting regions under pre-defined categories, and then uses the features before the final classifier as the output.",
"These methods are limited to the task-specific visual representation of the specific object detector, which may hinder the generic cross-modal understanding.",
"To improve the generalization of the image representation, we learn from pixels to represent an image instead of using bounding boxes.",
"The pixel features are learned by a CNN visual backbone such as ResNet (He et al., 2016).",
"Starting from the initial image v img R 3 H 0 W 0 (with 3 color channels), a conventional CNN backbone generates a lower-resolution activation map f img RC H W using the typical values as in DETR (Carion et al., 2020): C = 2048 and H = H 0 32 , W = w 0 32 .",
"Then, we take a 1 1 convolution to reduce the channel dimension of the high-level activation map f from C to a smaller dimension d , creating a new feature map z img R d H W .",
"The encoder expects a sequence as input, hence we collapse the spatial dimensions of z img into one dimension, resulting in a HW d feature map Z img .",
"Since the transformer architecture is permutation-invariant, we supplement the feature maps with fixed positional encodings (Par-mar et al., 2018) that are added to the input of each attention layer.",
"Finally, the sequential image representation Z img = { o 1 , ..., o HW } can be seen as a HW length of d -dimensional vector.",
"Given the embeddings of the tokens for the sentence { e i } mi =1 and the sequential image representations { o j } nj =1 , we adopt the Transformer encoder to learn cross-modal attention between image grid features and language tokens.",
"The encoder is a stacked model with L standard blocks, where the l -th block consists of a multi-head self-attention module and a feed forward network (FFN).",
"To allow a fine-grained feature-level semantic fusion, we directly concatenate the derived image features and text embeddings to construct the input sequence, which is formulated as: { e CLS , e 1 , ..., e m , e SEP , o 1 , ..., o HW } .",
"The CNN backbone for visual representation learning and the Transformer for cross-modal semantic fusion is combined into a single model, which is end-to-end trainable.",
"In this way, the learnt visual feature representation can be more suitable for the pre-training tasks of generic cross-modal understanding.",
"To facilitate cross-modal understanding, we follow (Tan and Bansal, 2019; Chen et al., 2019; Huang et al., 2020) and conduct two popular pre-training tasks in encoder side, including Masked Language Modeling (MLM) and Image-Text Matching (ITM).",
"Masked Language Modeling The task setup is basically the same as in BERT (Devlin et al., 2018), we randomly mask 15% tokens in the text and the model is asked to predict these masked words with the output text and visual representations.",
"Different from MLM task in BERT that only relies on the surrounding text of textual modality for prediction, the masked words will be predicted with the help of image feature map from visual modality so as to resolve ambiguity.",
"Image-Text Matching We randomly sample 50% mismatched image-text pairs and 50% matched pairs, and train an classifier to predict whether an image and a sentence match each other on the representation of token [CLS] in the last encoder layer h LCLS .",
"Due to that the CNN feature map has no object-level semantics, it is difficult to directly align the cross-modal semantics between CNN feature map and the language embeddings.",
"Therefore, we further add a Transformer decoder to help capture the fine-grained semantics of the visual features, where two specific pre-training tasks of object detection and image-caption generation are incorporated.",
"The decoder adopts the standard architecture of the transformer with multi-headed self-attention followed by cross-attention and a feed forward network (FFN).",
"Both tasks share the same attention parameters of decoder, while using different linear head for the two tasks.",
"The object detection task focuses more on understanding the fine-grained object information within image, while image captioning task helps guide the learning of visual features regarding the textual semantics.",
"Enhanced by Object Detection Following the one-stage detection model DETR (Carion et al., 2020), we define object detection task as the direct set prediction problem, and use a set-based global loss that forces unique predictions via bipartite matching with the Transformer encoder-decoder architecture.",
"Let us denote by y the ground truth set of objects and y = { y i } Ni =1 .",
"The set-based loss of bipartite matching is to search for a permutation of N elements LN with the lowest cost: = arg min NN (cid:88) i L match ( y i , y ( i ) ) (1) where L match ( y i , y ( i ) ) is a pair-wise matching cost between ground truth y i and a prediction with index ( i ) .",
"The Hungarian algorithm (Stewart et al., 2016) is used to efficiently compute the optimal assignment.",
"Different from the original DETR for single-modal learning, our cross-modal pre-training with object detection differs in two aspects.",
"In encoder side, we combine both the visual representation and language embedding as input and reuse the Transformer encoder for cross-modal fusion.",
"In decoder side, we take the learned positional embeddings as the input to multiple L Transformer decoder layers, and detects the N objects in parallel at each decoder layer.",
"In addition to the tasks of box coordinate regression and class category prediction, we also incorporate an object attribute prediction task for Visual Genome Dataset so as to enhance the learning of fine-grained semantics.",
"The model is trained with a negative log-likelihood loss for attribute, class prediction and a box regression loss defined as follows: L v ( y, y ) = N (cid:88) i =1 [ log p ( i ) ( a i ) log p ( i ) ( c i ) + + L box ( b i , b ( i ) ( i ))] where p ( i ) ( a i ) , p ( i ) ( c i ) is the attribute and class probability, L box ( b i , b ( i ) ( i )) is a normalized bounding boxes regression loss as in (Carion et al., 2020).",
"Enhanced by Image Captioning To guide the learning of visual features in regards to the textual semantics, we use semantically dense captions to learn vision representations with sequence-to-sequence (Seq2Seq) image-to-text generation task.",
"The decoder is pre-trained to auto-regressively generate the target text based on the contextual representations from the image encoder.",
"The pretraining loss for the decoder is defined as: L dec = (cid:88) ( x,y ) ( X , Y ) log n (cid:89) t =1 P ( y t | y <t , x ) (2) where X represents the sequence of vision context, Y represents the set of text to be generated and n is the length of tokens in output text y .",
"We pre-train E2E-VLP with all the encoder and decoder pre-training tasks (i.e., Masked Language Modeling, Image-Text Matching, Object Detection, Image-to-Text Generation) jointly by minimizing the four loss functions as:",
"L = L mlm + L itm + L v + L dec (3)",
"We pre-train our E2E-VLP on two in-domain image-text datasets: MS-COCO (Lin et al., 2014) and Visual Genome (Krishna et al., 2017).",
"We utilize the object detection and image caption annotations in MS-COCO, and object detection, region description annotations in Visual Genome.",
"The total amount of the dataset is 6.01M image-and-sentence pairs on 180K distinct images.",
"The maximum sequence length for the sentence is set as 40.",
"We use scale augmentation, and resize the input images so that the shortest side is at least 480 and at most 800 pixels while the longest is at most 1333 (Carion et al., 2020).",
"For the model architecture, we pre-train E2E-VLP with 6 and 12 layers of Transformer encoder respectively, while the decoder is fixed as 6 layers.",
"Each layer block has 256 hidden units and 12 self-attention heads, the intermediate layer size is 1,024.",
"The visual backbone is selected as ResNet with different sizes (He et al., 2016) from torchvision with frozen batch-norm layers.",
"We pre-train E2E-VLP model with a total batch size of 32 for 200 epoches on 8 V100 GPUs.",
"We use the AdamW optimizor (Loshchilov and Hutter, 2018) for both the Transformer and ResNet.",
"The initial learning rate is set as 10 4 for Transformer and 10 5 for ResNet.",
"The weight decay is set as 10 4 .",
"We compare E2E-VLP model against other competitive VLP models of the comparable model size",
"on the following downstream V+L tasks.",
"VQA v2.0 (Antol et al., 2015): The VQA task requires the model to answer natural language questions given an image.",
"We conduct experiments on the widely-used VQA v2.0 Models Params VQA NLVR2 COCO Caption Test-dev Test-std Dev Test-P BLEU4 CIDEr Single-stream VisualBERT 110M 70.80 71.00 --VLP 110M 70.5 70.7 -36.5 116.9 VLBERT 110M 71.16 --Unicoder-VL 110M ---UNITER 110M 72.70 72.91 77.14 77.87 -OSCAR 110M 73.16 73.61 78.07 78.36 36.5 123.7 Two-stream ViLBERT 221M 70.55 70.92 67.40 67.00 -12-in-1 221M 73.15 --LXMERT 183M 72.42 72.54 74.90 74.50 -ERNIE-ViL 210M 72.62 72.85 --End2End PixelBERT 142M 71.35 71.42 71.7 72.4 -Our Model E2E-VLP 94M 73.25 73.67 77.25 77.96 36.2 117.3 Table 1: Evaluation Results on VQA, NLVR2 and Image Caption.",
"dataset (Antol et al., 2015), which contains 204K images and 1.1M questions about these images.",
"Following (Anderson et al., 2018), we treat VQA as a multi-label classification task by picking an answer from a shared set consisting of 3,129 answers.",
"To fine-tune VQA task, we use a binary cross-entropy loss to train a multi-label classifier, we train with a batch size of 32 for 12 epochs.",
"We set an initial learning rate of 1e-4 which decays by 0.1 at the end of epoch 6 and epoch 9.",
"NLVR2 (Suhr et al., 2018): NLVR2 (Suhr et al., 2018) is a challenging task for visual reasoning.",
"The goal is to determine whether a natural language statement is true about a pair of images.",
"It consists of 86K/7K data for train-ing/development.",
"Since each data example in NLVR2 has two natural images img 0 , img 1 and one language statement s , we concatenate the given sentence and each image to build two sequences, and then train a binary classifier based on the concatenation of the two outputs.",
"We fine-tune NLVR model with a batch size of 32 for 12 epochs, and set an initial learning rate of 1e-4 which decays by 0.1 at the end of epoch 6 and epoch 9.",
"Image Caption : A visual generation task that requires the model to generate the content of an image.",
"To fine-tune Image Caption task, we use the seq2seq loss with label smoothing(Szegedy et al., 2016).",
"During inference, we use beam search (i.e., beam size=4), and set = 0 .",
"9 for the length penalty (Wu et al., 2016).",
"We set initial learning rate of 1e-4 which decays by 0.1 at the end of epoch 6 and epoch 9.",
"We report our results on the COCO image captioning dataset (Chen et al., 2015).",
"Image-Text Retrieval : The image-text retrieval task consists of two sub-tasks: image retrieval and text retrieval, depending on which modality is used as the retrieval target.",
"We conduct experiments on Flickr30K dataset (Young et al., 2014), which contains 31,000 images collected from Flickr website and each image has 5 captions.",
"We follow the same split in (Lee et al., 2018) for training and evaluation.",
"During fine-tuning, we follow the method in UNITER (Chen et al., 2019) and formulate it as a ranking problem.",
"We use the hidden state of h LCLS to compute the similarity scores for the sampled positive and negative pairs, and maximize the margin between them through circle loss (Sun et al., 2020) as ERNIE-ViL (Yu et al., 2020).",
"We fine-tune our model with a batch size of 64 and a learning rate of 5e-5 for 4 epochs.",
"We compare our E2E-VLP model with all the three prevalent VLP architectures: i.e., single-stream and two-stream architectures of two-step pipeline framework and end-to-end one-step solution.",
"Single-stream architecture uses a unified Transformer to encode the vision-language inputs, including the state-of-the-art methods such as OS-CAR(Li et al., 2020b), UNITER(Chen et al., 2019), Unicoder-VL (Li et al., 2020a), VLBERT (Su et al., 2019) and VLP (Zhou et al., 2020).",
"Image and text are separately encoded firstly and then fused together in two-stream architecture, including the state-of-the-art methods such as ERNIE-VIL(Yu et al., 2020), LXMERT (Tan and Bansal, 2019), ViLBERT (Lu et al., 2019, 2020).",
"These two architectures both adopt the region-based visual features, where a object detector is first used to obtain the object-level feature representations.",
"We also compare with the only end-to-end solution PixelBERT (Huang et al., 2020).",
"PixelBERT adopts a random pixel sampling strategy to conduct the cross-modal pre-training, while it has no visual semantic understanding tasks for pre-training which is very important in V+L tasks.",
"The results on the downstream V+L tasks are shown in Table 1.",
"It can be observed that:",
"1) with less parameters and only in-domain pre-training data (MS-COCO and Visual Genome), E2E-VLP can consistently achieve comparable performance against two-step region feature-based methods such as OSCAR and ERNIE-VIL.",
"It shows the effectiveness of our end-to-end grid feature-based method, which can offer new perspectives to address the cross-modal pre-training and conduct fusion at a more fine-grained level.",
"It has the potential of removing the complex procedure of region feature ex-Model VQA NLVR2 E2E-VLP 70.76 72.12 -Image-to-Text Generation 70.20 71.59 -Attribute Prediction 69.92 70.92 -Object Detection 68.85 70.38 Table 3: Ablation tests for different visual pre-training tasks of E2E-VLP (6 layer encoder, and ResNet50 backbone) on development set.",
"traction, and facilitate deeper interaction between visual feature and text data in an end-to-end fashion.",
"2) Our E2E-VLP method can significantly improve upon the end-to-end method PixelBERT, which demonstrates the advantages of our method for enhancing the fine-grained visual learning with object detection and image captioning, 5.4 Importance of Visual Learning To further investigate the importance of each component in our method, we conduct ablation studies to assess the impact of different visual learning tasks on the VQA and NLVR2 development set.",
"Table 3 shows the result.",
"We can see that:",
"1) all the three visual pre-training tasks contribute to the final performance gain, and removing each of them can decrease the performance on both tasks.",
"The object detection and attribute prediction tasks can help capture fine-grained object-level semantics within the image, which is consistent with the previous two-step solutions that using region features from the detection can help improve the performance for cross-modal understanding.",
"The image-to-text generation task can help guide the learning of visual features in regards to the textual semantics, which has the same conclusion as VirTex (Desai and Johnson, 2020).",
"2) Among the different visual pre-training tasks, the Object Detection and Attribute Prediction tasks are more important than the Image-to-Text Generation task, this may be due to the fact that the typical cross-modal downstream tasks such as VQA and NLVR2 focus more on the fine-grained semantics of the objects within image.",
"One of the biggest advantages of end-to-end VLP method is the inference efficiency with one single stage.",
"Therefore, we further examine the online inference efficiency of E2E-VLP, compared with the two-step region-based models (UNITER and LXMERT) and the existing end-to-end VLP model (PixelBERT).",
"We examine the average inference Model Parameters Avg Time VQA NLVR2 (ms) LXMERT 183M 496 72.42 72.54 UNITER 110M 501 72.70 77.14 Pixel-BERT 142M 201 71.35 71.7 E2E-VLP 94M 192 73.25 77.25 Table 4: Results of the inference comparison of different pre-trained model architectures on the VQA and NLVR2 dataset.",
"time (per query) of different models on the VQA dataset.",
"The result is shown in Table 4.",
"We can see that:",
"1) the end-to-end methods can be much more efficient in online inference (2-3 times speedup) than the two-step model.",
"We further analyze the inference time of different components of two-step models and find that among the total cost of 500ms per image-text pair, about 80% of the total time is used to extract region-based features using Faster R-CNN (Ren et al., 2015).",
"It takes much time for region selection and this will happen twice when extracting the final regions, and it contains many complicated post-processing procedures.",
"2) Our E2E-VLP model can achieve comparable results on both the VQA and NLVR2 datasets by saving about 3.5 times running time.",
"Besides, we can also use a smaller image size to further improving the inference speed.",
"Compared with PixelBERT, E2E-VLP can also obtain some speed-ups due to the reason that the Transformer hidden size of E2E-VLP is only 256, which makes E2E-VLP more light-weight and flexible.",
"Our end-to-end solution can significantly improve the performance upon PixelBERT, because there are no visual pretraining tasks for PixelBERT and we enhance the pre-training of E2E-VLP with both the fine-grained Object Detection and Image Captioning tasks.",
"Since our whole framework contains both the visual backbone and Transformer network as a whole, we further study the importance of different model",
"architectures by changing the number of Transformer encoder layers and the different ResNet visual backbone layers.",
"We expect to further examine whether the visual backbone or Transformer network is more important for the cross-modal understanding and fusion.",
"From Table 5, we can see that both adding more Transformer encoder layers and using more complicated visual backbones can contribute to the final performance gain, which proves the importance of both modules for cross-modal understanding.",
"Learning better visual features and conducting more deeply interacted visual-language fusion are both important for V+L tasks.",
"Besides, we can see that using a more strong visual backbone (such as ResNet 152) can give more benefit to the final performance than just increasing the number of Transformer encoder layers from 6 to 12.",
"This may be due to the fact that visual semantic understanding is rather important in V+L tasks and that is also why we design more fine-grained visual pre-training tasks for further enhancing the learning of E2E-VLP.",
"As mentioned in Section 3.1.1, the sequence length of the visual features is determined by the image size HW .",
"Therefore, the final sequence length of the input to the transformer also largely depends on the image size, which can in turn influence the inference speed of our whole framework.",
"We further analyze the impact of input image size to the efficiency and effectiveness of E2E-VLP.",
"The results of E2E-VLP with different image sizes as input are shown in Table 6.",
"From the results, we can see that E2E-VLP benefits from larger images as input, and for larger images, the sequence length of the visual representation is longer and more information is embedded in the visual representation.",
"The cross-modal Transformer is capable of learning more fine-grained vision-language fusion for better performance.",
"Moreover, down-sampling the image to a smaller size can significantly improve the inference speed of E2E-VLP model, while the model accuracy only decreases a little.",
"For example, when changing the input size from (800, 1333) to (448, 448), the inference can be about 5 times faster while the performance only decreases about 2%-3%.",
"fine-grained semantics by visual learning.",
"Therefore, we encode both the image content and caption text with E2E-VLP, and directly fine-tune it on MSCOCO object detection benchmark dataset with the decoder as in DETR(Carion et al., 2020).",
"Table 7 shows the detection result.",
"We can see that our E2E-VLP model can also support the Object Detection task based on text-image pairs and perform surprising well compared with the original DETR model.",
"This phenomenon may also demonstrate that E2E-VLP well captures the fine-grained semantics within image and can appropriately fuse the multi-modal information for conducting visual-only task.",
"In this paper, we propose a new end-to-end paradigm for pixel-level vision-language pretraining, to jointly learn visual representation, and semantic alignments between image and text.",
"Different from the previous methods using the region features in a two-stage pipeline, we propose to use the more flexible and efficient image grid features for vision-language pre-training.",
"We further incorporate the tasks of object detection and image captioning into pre-training with a unified Transformer encoder-decoder architecture for enhancing visual learning.",
"The experiments on well-established vision-language downstream tasks demonstrate the effectiveness and efficiency of our E2E-VLP model.",
"We hope that this study can potentially offer new perspectives and guide for end-to-end vision-language pre-training.",
"layer, and incorporate more advanced vision and language pre-training tasks for further improving the performance."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"abstain"
] |
[
"This paper presents a new challenging information extraction task in the domain of materials science.",
"We develop an annotation scheme for marking information on experiments related to solid oxide fuel cells in scientific publications, such as involved materials and measurement conditions.",
"With this paper, we publish our annotation guidelines, as well as our SOFC-Exp corpus consisting of 45 open-access scholarly articles annotated by domain experts.",
"A corpus and an inter-annotator agreement study demonstrate the complexity of the suggested named entity recognition and slot filling tasks as well as high annotation quality.",
"We also present strong neural-network based models for a variety of tasks that can be addressed on the basis of our new data set.",
"On all tasks, using BERT embeddings leads to large performance gains, but with increasing task complexity, adding a recurrent neural network on top seems beneficial.",
"Our models will serve as competitive baselines in future work, and analysis of their performance highlights difficult cases when modeling the data and suggests promising research directions.",
"The design of new experiments in scientific domains heavily depends on domain knowledge as well as on previous studies and their findings.",
"However, the amount of publications available is typically very large, making it hard or even impossible to keep track of all experiments conducted for a particular research question.",
"Since scientific experiments are often time-consuming and expensive, effective knowledge base population methods for finding promising settings based on the published research would be of great value (e.g., Auer et al., 2018; Manica et al., 2019; Strotgen et al., 2019; Mrdjenovich et al., 2020).",
"While such real-life information extraction tasks have received consid-The corresponding [SOFCDEVICE ] with [Pt MATERIAL ] / [SmNiO3 MATERIAL ] / [Pt MATERIAL ] geometry [ demonstrated EXPERIMENT ] dramatic power output of [225 mW cm2 VALUE ] at [500 CVALUE ].",
"erable attention in the biomedical domain (e.g., Cohen et al., 2017; Demner-Fushman et al., 2018, 2019), there has been little work in other domains (Nastase et al., 2019), including materials science (with the notable exception of the work by Mysore et al., 2017, 2019).",
"In this paper, we introduce a new information extraction use case from the materials science domain and propose a series of new challenging information extraction tasks.",
"We target publications about solid oxide fuel cells (SOFCs) in which the interdependence between chosen materials, measurement conditions and performance is complex (see Figure 1).",
"For making progress within natural language processing (NLP), the genre-domain combination presents interesting challenges and characteristics, e.g., domain-specific tokens such as material names and chemical formulas.",
"We provide a new corpus of open-access scientific publications annotated with semantic frame information on experiments mentioned in the text.",
"The annotation scheme has been developed jointly with materials science domain experts, who subsequently carried out the high-quality annotation.",
"We define an Experiment-frame and annotate sentences that evoke this frame with a set of 16 possible slots, including among others AnodeMaterial , FuelUsed and WorkingTemperature , reflecting the role the referent of a mention plays in an experiment.",
"Frame information is annotated on top of the text as graphs rooted in the experiment-evoking element (see Figure 1).",
"In addition, slot-filling phrases are assigned one of the types MATERIAL , VALUE , and DEVICE .",
"The task of finding experiment-specific information can be modeled as a retrieval task (i.e., finding relevant information in documents) and at the same time as a semantic-role-labeling task (i.e., identifying the slot fillers).",
"We identify three sub-tasks: (1) identifying sentences describing relevant experiments, (2) identifying mentions of materials, values, and devices, and (3) recognizing mentions of slots and their values related to these experiments.",
"We propose and compare several machine learning methods for the different sub-tasks, including bidirectional long-short term memory (BiLSTM) networks and BERT-based models.",
"In our results, BERT-based models show superior performance.",
"However, with increasing complexity of the task, it is beneficial to combine the two approaches.",
"With the aim of fostering research on challenging information extraction tasks in the scientific domain, we target the domain of SOFC-related experiments as a starting point.",
"Our findings based on this sample use case are transferable to similar experimental domains, which we illustrate by applying our best model configurations to a previously existing related corpus (Mysore et al., 2019), achieving state-of-the-art results.",
"We sum up our contributions as follows: We develop an annotation scheme for marking information on materials-science experiments on scientific publications (Section 3).",
"We provide a new corpus of 45 materials-science publications in the research area of SOFCs, manually annotated by domain experts for information on experimental settings and results (Section 4).",
"Our corpus is publicly available.",
"1 Our inter-annotator agreement study provides evidence for high annotation quality (Section 5).",
"We identify three sub-tasks of extracting experiment information and provide competitive baselines with state-of-the-art neural network approaches for them (Sections 4, 6, 7).",
"We show the applicability of our findings to modeling the annotations of another materials-science corpus (Mysore et al., 2019, Section 7).",
"Information extraction for scientific publications.",
"Recently, several studies addressed information extraction and knowledge base construction in the scientific domain (Augenstein et al., 2017; Luan et al., 2018; Jiang et al., 2019; Buscaldi et al., 2019).",
"We also aim at knowledge base construction but target publications about materials science experiments, a domain understudied in NLP to date.",
"Information extraction for materials science.",
"The work closest to ours is the one of Mysore et al. (2019) who annotate a corpus of 230 paragraphs describing synthesis procedures with operations and their arguments, e.g., The resulting [solid products Material ] were ... [dried Operation ] at [120 Number ][ celsius ConditionUnit ] for [8 Number ] [h ConditionUnit ].",
"Operation-evoking elements (dried) are connected to their arguments via links, and with each other to indicate temporal sequence, thus resulting in graph structures similar to ours.",
"Their annotation scheme comprises 21 entity types and 14 relation types such as Participant-material , Apparatus-of and Descriptor-of .",
"Kononova et al. (2019) also retrieve synthesis procedures and extract recipes, though with a coarser-grained label set, focusing on different synthesis operation types.",
"Weston et al. (2019) create a dataset for named entity recognition on abstracts of materials science publications.",
"In contrast to our work, their label set (e.g., Material , Application , Property ) is targeted to document indexing rather than information extraction.",
"A notable difference to our work is that we perform full-text annotation while the aforementioned approaches annotate a pre-selected set of paragraphs (see also Kim et al., 2017).",
"Mysore et al. (2017) apply the generative model of Kiddon et al. (2015) to induce action graphs for synthesis procedures of materials from text.",
"In Section 7.1, we implement a similar entity extraction system and also apply our algorithms to the dataset of Mysore et al. (2019).",
"Tshitoyan et al. (2019) train word2vec (Mikolov et al., 2013) embeddings on materials science publications and show that they can be used for recommending materials for functional applications.",
"Other works adapt the BERT model to clinical and biomedical domains (Alsentzer et al., 2019; Sun and Yang, 2019), or generally to scientific text (Beltagy et al., 2019).",
"Neural entity tagging and slot filling.",
"The neural-network based models we use for entity tagging and slot filling bear similarity to state-of-the-art models for named entity recognition (e.g., Huang et al., 2015; Lample et al., 2016; Panchendrarajan and Amaresan, 2018; Lange et al., 2019).",
"Other related work exists in the area of semantic role labeling (e.g., Roth and Lapata, 2015; Kshir-sagar et al., 2015; Hartmann et al., 2017; Adel et al., 2018; Swayamdipta et al., 2018).",
"In this section, we describe our annotation scheme and guidelines for marking information on SOFC-related experiments in scientific publications.",
"We treat the annotation task as identifying instances of a semantic frame (Fillmore, 1976) that represents SOFC-related experiments.",
"We include (1) cases that introduce novel content; (2) descriptions of specific previous work; (3) general knowledge that one could find in a textbook or survey; and also (4) suggestions for future work.",
"We assume that a frame is introduced to the discourse by words that evoke the frame.",
"While we allow any part-of-speech for such frame-evoking elements, in practice, our annotators marked almost only verbs, such as test, perform, and report with the type EXPERIMENT .",
"In the remainder of this paper, we treat all sentences containing at least one such annotation as experiment-describing.",
"In a second annotation layer, annotators mark spans with one of the following entity types.",
"The annotations are marked only on experiment-describing sentences as well as several additional sentences selected by the annotator.",
"MATERIAL .",
"We use the type MATERIAL to annotate text spans referring to materials or elements.",
"They may be specified by a particular composition formula (e.g., La 0 . 75 Sr 0 . 25 Cr 0 . 5 Mn 0 . 5 O 3 ) or just by a mention of the general class of materials, such as oxides or hydrocarbons. 2 2 If the material is referenced by a common noun or by a pronoun and a more specific mention occurs earlier in the text, we indicate this coreference with the aim of facilitating oracle information extraction experiments in future work.",
"The above two steps of recognizing relevant sentences and marking coarse-grained entity types are in general applicable to a wide range of experiment types within the materials science domain.",
"We now define a set of slot types particular to experiments on SOFCs.",
"During annotation, we mark these slot types as links between the experiment-evoking phrase and the respective slot filler (entity mention), see Figure 1.",
"As a result, experiment frames are represented by graphs rooted in the node corresponding to the frame-evoking element.",
"Our annotation scheme comprises 16 slot types relevant for SOFC experiments.",
"Here we explain a few of these types for illustration.",
"A full list of these slot types can be found in Supplementary Material Table 11; detailed explanations are given in the annotation guidelines published along with our corpus.",
"AnodeMaterial , CathodeMaterial : These slots are used to mark the fuel cell's anode and cathode, respectively.",
"Both are entity mentions of type MATERIAL .",
"In some cases, simple surface information indicates that a material fulfills such a role.",
"Other cases require specific domain knowledge and close attention to the context.",
"PowerDensity , Resistance , WorkingTemperature : These slots are generally filled by mentions of type VALUE , i.e., a numerical value plus a unit.",
"Our annotation guidelines give examples for relevant units and describe special cases.",
"This enables any materials scientist, even if he/she is not an expert on SOFCs, to easily understand and apply our annotation guidelines.",
"Difficult cases.",
"We also found sentences that include enumerations of experimental settings such as in the following example: It can be seen that the electrode polarization resistances in air are 0 . 027 cm 2 , 0 . 11 cm 2 , and 0 . 88 cm 2 at 800 C , 700 C and 600 C , respectively. 3 We decided to simply link all slot fillers (the various resistance and temperature values) to the same frame-evoking element, leaving disentangling and grouping of this set of parameters to future work.",
"We instruct our annotators to always link slot fillers to the syntactically closest EXPERIMENT mention.",
"If the description of an experiment spans more than one clause, we link the two relevant EXPERIMENT s using the relation same exp .",
"We use exp variation to link experiments done on the same cell, but with slightly different operating conditions.",
"The link type exp variation can also relate two frame-evoking elements that refer to two measurements performed on different materials/cells, but in the same experimental conditions.",
"In this case, the frame-evoking elements usually convey an idea of comparison, e.g., increase or reach from ... to. 4 Corpus Statistics and Task Definitions In this section, we describe our new corpus and propose a set of information extraction tasks that can be trained and evaluated using this dataset.",
"SOFC-Exp Corpus.",
"Our corpus consists of 45 open-access scientific publications about SOFCs and related research, annotated by domain experts.",
"For manual annotation, we use the InCeption annotation tool (Klie et al., 2018).",
"Table 1 shows the key statistics for our corpus.",
"Sentence segmentation was performed automatically.",
"4 As a preparation for experimenting with the data, we manually remove all sentences belonging to the Acknowledgment and References sections.",
"We propose the experimental setting of using the training data in a 5-fold cross validation setting for development and tuning, and finally applying the model(s) to the independent test set.",
"Task definitions.",
"Our rich graph-based annotation scheme allows for a number of information extraction tasks.",
"In the scope of this paper, we address the following steps of (1) identifying sentences that describe SOFC-related experiments, (2) 3 See [PMC4673446].",
"recognizing and typing relevant named entities, and (3) extracting slot fillers from these sentences.",
"The originally annotated graph structures would also allow for modeling as relations or dependency structures.",
"We leave this to future work.",
"The setup of our tasks is based on the assumption that in most cases, one sentence describes a single experiment.",
"The validity of this assumption is supported by the observation that in almost all sentences containing more than one EXPERIMENT , experiment-evoking verbs actually describe variations of the same experiment.",
"(For details on our analysis of links between experiments, see Supplementary Material Section B.)",
"In our automatic modeling, we treat slot types as entity-types-in-context, which is a valid approximation for information extraction purposes.",
"We leave the tasks of deciding whether two experiments are the same ( same exp ) or whether they constitute a variation ( exp variation ) to future work.",
"While our dataset provides a good starting point, tackling these tasks will likely require collecting additional data.",
"We here present the results of our inter-annotator agreement study, which we perform in order to estimate the degree of reproducibility of our corpus and to put automatic modeling performance into perspective.",
"Six documents (973 sentences) have been annotated independently both by our primary annotator, a graduate student of materials science, and a second annotator, who holds a Ph.D. in physics and is active in the field of materials science.",
"The label distribution in this subset is similar to the one of our overall corpus, with each annotator choosing EXPERIMENT about 11.8% of the time.",
"Identification of experiment-describing sentences.",
"Agreement on our first task, judging whether a sentence contains relevant experimental information, is 0.75 in terms of Cohen's (Cohen, 1968), indicating substantial agreement according to Landis and Koch (1977).",
"The observed agreement, corresponding to accuracy, is 94.9%; expected agreement amounts to 79.2%.",
"Table 2 shows precision, recall and F1 for the doubly-annotated subset, treating one annotator as the gold standard and the other one's labels as predicted.",
"Our primary annotator identifies 119 out of 973 sentences as experiment-describing, our secondary annotator 111 sentences, with an overlap of 90 sentences.",
"These statistics are helpful to gain further intuition of how well a human can reproduce another anno-tator's labels and can also be considered an upper bound for system performance.",
"Entity mention detection and type assignment.",
"As mentioned above, relevant entity mentions and their types are only annotated for sentences containing experiment information and neighboring sentences.",
"Therefore, we here compute agreement on the detection of entity mention and type assignment on the subset of 90 sentences that both annotators considered as containing experimental information.",
"We again look at precision and recall of the annotators versus each other, see Table",
"3. The high precision indicates that our secondary annotator marks essentially the same mentions as our primary annotator, but recall suggests a few missing cases.",
"The difference in marking EXPERIMENT can be explained by the fact that the primary annotator sometimes marks several verbs per sentence as experiment-evoking elements, connecting them with same exp or exp variation , while the secondary annotator links the mentions of relevant slots to the first experiment-evoking element (see also Supplementary Material Section B).",
"Overall, the high agreement between domain expert annotators indicates high data quality.",
"Identifying experiment slot fillers.",
"We compute agreement on the task of identifying the slots of an experiment frame filled by the mentions in a sentence on the subset of sentences that both annotators marked as experiment-describing.",
"Slot fillers are the dependents of the respective edges starting at the experiment-evoking element.",
"Table 4 shows F1 scores for the most frequent ones among those categories.",
"See Supplementary Material Section C for all slot types.",
"Overall, our agreement study provides support for the high quality of our annotation scheme and validates the annotated dataset.",
"In this section, we describe a set of neural-network based model architectures for tackling the various information extraction tasks described in Section",
"Experiment detection.",
"The task of experiment detection can be modeled as a binary sentence classification problem.",
"It can also be conceived as a retrieval task, selecting sentences as candidates for experiment frame extraction.",
"We implement a bidirectional long short-term memory ( BiLSTM ) model with attention for the task of experiment sentence detection.",
"Each input token is represented by a concatenation of several pretrained word embeddings, each of which is fine-tuned during training.",
"We use the Google News word2vec embeddings (Mikolov et al., 2013), domain-specific word2vec embeddings (mat2vec, Tshitoyan et al., 2019, see also Section 2), subword embeddings based on byte-pair encoding (bpe, Heinzerling and Strube, 2018), BERT (Devlin et al., 2019), and SciBERT (Beltagy et al., 2019) embeddings.",
"For BERT and SciBERT, we take the embeddings of the first word piece as token representation.",
"The embeddings are fed into a BiLSTM model followed by an attention layer that computes a vector for the whole sentence.",
"Finally, a softmax layer decides whether the sentence contains an experiment.",
"In addition, we fine-tune the original (uncased) BERT (Devlin et al., 2019) as well as SciBERT (Beltagy et al., 2019) models on our dataset.",
"SciBERT was trained on a large corpus of scientific text.",
"We use the implementation of the BERT sentence classifier by Wolf et al. (2019) that uses the CLS token of BERT as input to the classification layer.",
"5 Finally, we compare the neural network models with traditional classification models, namely a support vector machine ( SVM ) and a logistic regression classifier.",
"For both models, we use the following set of input features: bag-of-words vectors indicating which 1to 4-grams and part-of-speech tags occur in the sentence.",
"6 Entity mention extraction.",
"For entity and concept extraction, we use a sequence-tagging approach similar to (Huang et al., 2015; Lample et al., 2016), namely a BiLSTM model .",
"We use the same input representation (stacked embeddings) as above, which are fed into a BiLSTM.",
"The subsequent conditional random field (CRF, Lafferty et al., 2001) output layer extracts the most probable label sequence.",
"To cope with multi-token entities, we convert the labels into BIO format.",
"We also fine-tune the original BERT and SciBERT sequence tagging models on this task.",
"Since we use BIO labels, we extend it with a CRF output layer to enable it to correctly label multi-token mentions and to enable it to learn transition scores between labels.",
"As a non-neural baseline, we train 5 https://github.com/huggingface/ transformers 6 We use sklearn, https://scikit-learn.org .",
"a CRF model using the token, its lemma, part-of-speech tag and mat2vec embedding as features.",
"7 Slot filling.",
"As described in Section 4, we approach the slot filler extraction task as fine-grained entity-typing-in-context, assuming that each sentence represents a single experiment frame.",
"We use the same sequence tagging architectures as above for tagging the tokens of each experiment-describing sentence with the set of slot types (see Table 11).",
"Future work may contrast this sequence tagging baseline with graph-induction based frame extraction.",
"In this section, we present the experimental results for detecting experiment-describing sentences, entity mention extraction and experiment slot identification.",
"For tokenization, we employ ChemDataEx-tractor, 8 which is optimized for dealing with chemical formulas and unit mentions.",
"We tune our models in a 5-fold cross-validation setting.",
"We also report the mean and standard deviation across those folds as development results.",
"For the test set, we report the macro-average of the scores obtained when applying each of the five models to the test set.",
"To put model performance in relation to human agreement, we report the corresponding statistics obtained from our inter-annotator agreement study (Section 5).",
"Note that these numbers are based on a subset of the data and are hence not directly comparable.",
"Hyperparameters and training.",
"The BiLSTM models are trained with the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-3.",
"For fine-tuning the original BERT models, we follow the configuration published by Wolf et al. (2019) and use AdamW (Loshchilov and Hutter, 2019) as optimizer and a learning rate of 4e-7 for sentence classification and 1e-5 for sequence tagging.",
"When adding BERT tokens to the BiLSTM, we also use the AdamW optimizer for the whole model and learning rates of 4e-7 or 1e-5 for the BERT part and 1e-3 for the remainder.",
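This two-learning-rate scheme corresponds to a single optimizer with separate parameter groups, roughly as follows; the attribute names (model.bert, model.bilstm) and the placeholder modules are illustrative assumptions.

```python
import torch.nn as nn
from torch.optim import AdamW

# stand-in model with a slowly updated BERT part and a faster remainder
class Tagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = nn.Linear(768, 768)       # placeholder for the encoder
        self.bilstm = nn.LSTM(768, 500, batch_first=True, bidirectional=True)

model = Tagger()
optimizer = AdamW([
    {"params": model.bert.parameters(),   "lr": 1e-5},  # 4e-7 or 1e-5 for BERT
    {"params": model.bilstm.parameters(), "lr": 1e-3},  # remainder of the model
])
```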
"For regularization, we employ early stopping on the development set.",
"We use a stacked BiLSTM with two hidden layers and 500 hidden units for all tasks, with the exception of the experiment sentence detection task, where we found one BiLSTM layer to work best.",
"The attention layer of the sentence detection model has a hidden size of 100.",
"Experiment sentence detection.",
"Table 5 shows our results on the detection of experiment-describing sentences.",
"The neural models with byte-pair encoding embeddings or BERT clearly outperform the SVM and logistic regression models.",
"Within the neural models, BERT and SciBERT add the most value, both when using their embeddings as another input to the BiLSTM and when fine-tuning the original BERT models.",
"Note that even the general-domain BERT is strong enough to cope with non-standard domains.",
"Nevertheless, models based on SciBERT outperform BERT-based models, indicating that in-domain information is indeed beneficial.",
"For performance reasons, we use BERT-base in our experiments, but for the sake of completeness, we also run BERT-large for the task of detecting experiment sentences.",
"Because it did not outperform BERT-base in our cross-validation based development setting, we did not further experiment with BERT-large.",
"However, we found that it resulted in the best F1-score achieved on our test set.",
"In general, SciBERT-based models provide very good performance and seem most robust across dev and test sets.",
"Overall, with F1-scores around 67.0-68.6, such a retrieval model may already be useful in production.",
"However, there certainly is room for improvement.",
"Entity mention extraction.",
"Table 6 provides our results on entity mention detection and typing.",
"Models are trained and results are reported on the subset of sentences marked as experiment-describing in the gold standard, amounting to 4,590 entity mentions in total.",
"The CRF baseline achieves comparable or better results than the BiLSTM with word2vec and/or mat2vec embeddings.",
"However, adding subword-based embeddings (bpe and/or BERT) significantly increases performance of the BiLSTM, indicating that there are many rare words.",
"Again, the best results are obtained when using BERT or SciBERT embeddings or when using the original SciBERT model.",
"It is relatively easy for all model variants to recognize VALUE as these mentions usually consist of a number and unit which the model can easily memorize.",
"Recognizing the types MATERIAL and DEVICE , in contrast, is harder and may profit from using gazetteer-based extensions.",
"Experiment slot filling.",
"Table 7 shows the macro-average F1 scores for our different models on the slot identification task.",
"As for entity typing, we train and evaluate our model on the subset of sentences marked as experiment-describing, which contain 4,263 slot instances.",
"Again, the CRF baseline outperforms the BiLSTM when using only mat2vec and/or word2vec embeddings.",
"Note that the SOFC-Exp gold standard marks all entity mentions that correspond to one of the four relevant types occurring in these sentences, regardless of whether the mention fills a slot in an experiment or not.",
"We evaluate on the 16 slot types as listed in Table 11; when training our model, we use the additional types experiment-evoking word and Thickness, which are not frame slots but related annotations present in our data (see guidelines).",
"The addition of BERT or SciBERT embeddings improves performance.",
"However, on this task, the BiLSTM model with (Sci)BERT embeddings outperforms the fine-tuned original (Sci)BERT model.",
"Compared to the other two tasks, this task requires more complex reasoning and has a larger number of possible output classes.",
"We assume that in such a setting, adding more abstraction power to the model (in the form of a BiLSTM) leads to better results.",
"For a more detailed analysis, Table 8 shows the slot-wise results for the non-neural CRF baseline and the model that performs best on the development set: BiLSTM with SciBERT embeddings.",
"As in the case of entity mention detection, the models do well for the categories that consist of numeric mentions plus particular units.",
"In general, model performance is also tied to the frequency of the slot types in the dataset.",
"Recognizing the role a material plays in an experiment (e.g., AnodeMaterial vs. CathodeMaterial ) remains challenging, possibly requiring background domain knowledge.",
"This type of information is often not stated explicitly in the sentence, but introduced earlier in the discourse and would hence require document-level modeling.",
"As described in Section 2, the data set curated by Mysore et al. (2019) contains 230 synthesis procedures annotated with entity type information.",
"We apply our models to this entity extraction task (data: https://github.com/olivettigroup/annotated-materials-syntheses) in order to estimate the degree of transferability of our findings to similar data sets.",
"To the best of our knowledge, there have not yet been any publications on the automatic modeling of this data set.",
"Table 8 (Experiments: slot identification) lists per-slot F1 scores as CRF / BiLSTM-SciBERT, with instance counts: AnodeMaterial 25.0/19.0 (280), CathodeMaterial 11.8/28.9 (259), Device 59.3/67.6 (381), ElectrolyteMaterial 20.0/47.2 (219), FuelUsed 45.9/55.5 (159), InterlayerMaterial 0.0/10.7 (51), OpenCircuitVoltage 43.5/84.3 (44), PowerDensity 69.0/97.6 (175), Resistance 64.5/93.9 (136), WorkingTemperature 72.5/90.3 (414).",
"We hence compare to the previous work of Mysore et al. (2017), who perform action graph induction on a similar data set (according to correspondence with the authors).",
"Our implementation of the BiLSTM-CRF with mat2vec+word2vec embeddings roughly corresponds to their BiLSTM-CRF system.",
"Table 9 shows the performance of our models when trained and evaluated on the synthesis procedures dataset.",
"Detailed scores by entity type can be found in the Supplementary Material.",
"We chose to use the data split suggested by the authors for the NER task, using 200 documents for training and 15 documents each for the dev and test sets.",
"Among the non-BERT-based systems, the BiLSTM variant using both mat2vec and word2vec performs best, indicating that the two pre-trained embeddings contain complementary information with regard to this task.",
"The best performance is reached by the BiLSTM model including word2vec, mat2vec, bpe and SciBERT embeddings, with 92.2 micro-average F1 providing a strong baseline for future work.",
"We have presented a new dataset for information extraction in the materials science domain consisting of 45 open-access scientific articles related to solid oxide fuel cells.",
"Our detailed corpus and inter-annotator agreement studies highlight the complexity of the task and verify the high annotation quality.",
"Based on the annotated structures, we suggest three information extraction tasks: the detection of experiment-describing sentences, entity mention recognition and typing, and experiment slot filling.",
"We have presented various strong baselines for them, generally finding that BERT-based models outperform other model variants.",
"While some categories remain challenging, overall, our models show solid performance and thus prove that this type of data modeling is feasible and can lead to systems that are applicable in production settings.",
"Along with this paper, we make the annotation guidelines and the annotated data freely available.",
"Outlook.",
"In Section 7.1, we have shown that our findings generalize well by applying model architectures developed on our corpus to another dataset.",
"A natural next step is to combine the datasets in a multi-task setting to investigate to what extent models can profit from combining the information annotated in the respective datasets.",
"Further research will investigate the joint modeling of entity extraction, typing and experiment frame recognition.",
"In addition, there are also further natural language processing tasks that can be researched using our dataset.",
"They include the detection of events and sub-events when regarding the experiment-descriptions as events, and a more linguistically motivated evaluation of the frame-semantic approach to experiment descriptions in text, e.g., moving away from the one-experiment-per-sentence and one-sentence-per-experiment assumptions and modeling the graph-based structures as annotated.",
"We thank Jannik Strötgen, Felix Hildebrand, Dragan Milchevski and everyone else involved in the Bosch MatKB project for their support of this research.",
"We also thank Stefan Grünewald, Sherry Tan, and the anonymous reviewers for their insightful comments related to this paper."
] | [
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"method",
"method",
"result",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Abstract: Multimodal sentiment analysis is a challenging research area that attends to the fusion of multiple heterogeneous modalities.",
"The main challenge is the occurrence of some missing modalities during the multimodal fusion procedure.",
"However, the existing techniques require all modalities as input and are thus sensitive to missing modalities at prediction time.",
"In this work, the coupled-translation fusion network (CTFN) is proposed to model bi-directional interplay via coupled learning, ensuring robustness with respect to missing modalities.",
"Specifically, a cyclic consistency constraint is presented to improve translation performance, allowing us to discard the decoder of the Transformer and keep only its encoder.",
"This could contribute to a much lighter model.",
"Due to the coupled learning, CTFN is able to conduct bi-directional cross-modality intercorrelation in parallel.",
"Based on CTFN, a hierarchical architecture is further established to exploit multiple bi-directional translations, yielding twice as many multimodal fusion embeddings as traditional translation methods.",
"Moreover, the convolution block is utilized to further highlight explicit interactions among those translations.",
"For evaluation, CTFN was verified on two multimodal benchmarks with extensive ablation studies.",
"The experiments demonstrate that the proposed framework achieves state-of-the-art or often competitive performance.",
"Additionally, CTFN still maintains robustness when modalities are missing.",
"Sentiment analysis has witnessed many significant advances in the artificial intelligence community, in which text (Yadollahi et al., 2017), visual (Kahou et al., 2016), and acoustic (Luo et al., 2019) modalities are primarily employed, allowing human emotional characteristics and intentions to be exploited effectively (Deng et al., 2018).",
"Intuitively, due to the consistency and complementarity among different sources, joint representations can reason about multimodal messages and are capable of boosting the performance of specific tasks (Pan et al., 2016; Gebru et al., 2017; Al Hanai et al., 2018).",
"The multimodal fusion procedure incorporates multiple sources of knowledge to predict a precise and proper outcome (Baltrusaitis et al., 2018).",
"Historically, fusion has generally been performed with model-agnostic processes, namely early fusion, late fusion, and hybrid fusion techniques (Poria et al., 2017a).",
"Among those, early fusion focuses on the concatenation of unimodal representations (D'mello and Kory, 2015).",
"In contrast, late fusion performs integration at the decision level by voting among all model results (Shutova et al., 2016).",
"As to hybrid fusion, the output comes from the combination of early fusion and unimodal predictions (Lan et al., 2014).",
"Nevertheless, multimodal sentiment sequences often have unaligned properties, and traditional fusion manners fail to take this heterogeneity and misalignment carefully into account, which raises the question of investigating more sophisticated models for estimating emotional information (Tsai et al., 2020; Niu et al., 2017).",
"Recently, Transformer-based multimodal fusion frameworks have been developed to address the above issues with the help of the multi-head attention mechanism (Rahman et al., 2020; Le et al., 2019; Tsai et al., 2019).",
"By introducing the standard Transformer network (Vaswani et al., 2017) as the basis, Tsai et al. (2019) captured integrations directly from unaligned multimodal streams in an end-to-end fashion, latently adapting streams from one modality to another with a cross-modal attention module, regardless of the need for alignment.",
"[Figure: Modality 1 and Modality 2 are translated by coupled encoders under cycle-consistency constraints, each producing a fusion embedding.]",
"Furthermore, Wang et al. (2020) proposed a parallel Transformer unit, allowing the correlation between multimodal knowledge to be explored effectively.",
"However, the decoder component of the standard Transformer is employed to improve translation performance, which may introduce some redundancy.",
"Moreover, the explicit interaction among cross-modality translations was not considered.",
"Essentially, compared to our CTFN, their architectures require access to all modalities as inputs for exploring multimodal interplay with a sequential fusion strategy, and are thus rather sensitive to the case of multiple missing modalities.",
"In this paper, CTFN is proposed to model bi-directional interplay based on coupled learning, ensuring robustness with respect to missing modalities.",
"Specifically, the cyclic consistency constraint is proposed to improve translation performance, allowing us to discard the decoder and embrace only the encoder of the Transformer.",
"This could contribute to a much lighter model.",
"Thanks to the coupled learning, CTFN is able to conduct bi-directional cross-modality intercorrelation in parallel.",
"Taking CTFN as a basis, a hierarchical architecture is established to exploit modality-guided translations.",
"Then, the convolution fusion block is presented to further explore the explicit correlation among the above translations.",
"Importantly, based on the parallel fusion strategy, our CTFN model still provides flexibility and robustness when considering only one input modality.",
"For evaluation, CTFN was verified on two multimodal sentiment benchmarks, CMU-MOSI (Zadeh et al., 2016) and MELD (Poria et al., 2019).",
"The experiments demonstrate that CTFN achieves state-of-the-art or even better performance compared to the baseline models.",
"We also provide several extended ablation studies, to investigate intrinsic properties of the proposed model.",
"The off-the-shelf multimodal sentiment fusion architectures comprise two leading groups: translation-based and non-translation-based models.",
"Non-translation-based: Recently, RNN-based models such as GRU and LSTM have achieved significant advances in exploiting context-aware information across the data (Yang et al., 2016; Agarwal et al., 2019).",
"bc-LSTM (Poria et al., 2017b) and GME-LSTM (Chung et al., 2014) presented LSTM-based models to retrieve contextual information, where the unimodal features are concatenated into a single unit as the input.",
"Similarly, MELD-base (Poria et al., 2019) leveraged the concatenation of audio and textual features at the input layer, and employed a GRU to model sentimental context.",
"In contrast, CHFusion (Majumder et al., 2018) employed an RNN-based hierarchical structure to draw fine-grained local correlations among the modalities, and the empirical evidence illustrates superior performance compared to the simple concatenation of unimodal representations.",
"On the basis of RNNs, MMMU-BA (Ghosal et al., 2018) further employed a multimodal attention block to absorb the contribution of all the neighboring utterances, which demonstrates that the attention mechanism can utilize the neighborhood contribution for integrating the contextual information.",
"[Figure 2: CTFN: $X_a$ and $X_v$ refer to the features of the audio and video modalities, respectively; coupled translators built from Transformer encoder layers are trained with reconstruction errors and discriminators under the cycle-consistency constraint.]",
"However, all these methods are suited to low-level representations within a single modality in a non-translation manner, and may thus be sensitive to noisy terms and missing information in the sources.",
"Translation-based model: Inspired by the recent success of sequence-to-sequence (Seq2Seq) models (Lin et al., 2019) in machine translation, Pham et al. (2019) and Pham et al. (2018) presented multimodal fusion models based on the essential insight of translating from a source modality to a target modality, which is able to capture much more robust associations across multiple modalities.",
"The MCTN model incorporated a cyclic translation module to retrieve robust joint representations between modalities in a sequential manner, e.g., the language information is first associated with the visual modality and then latently translated into the acoustic modality.",
"Compared with MCTN, Seq2Seq2Sent introduced a hierarchical fusion model using Seq2Seq methods.",
"For the first layer, the joint representation of a modality pair is treated as an input sequence for the next Seq2Seq layer in an attempt to decode the third modality.",
"Inspired by the success of the Transformer-based model, Tsai et al. introduced a directional cross-modality attention module to extend the standard Transformer network.",
"Following the basic idea of Tsai et al., Wang et al. provided a novel multimodal fusion cell comprising two standard Transformers, implicitly embracing the association within a modality pair during forward and backward translation.",
"However, all existing models adopt a sequential multimodal fusion architecture that requires all modalities as input; therefore they can be sensitive to the case of multiple missing modalities.",
"Moreover, the explicit interactions among cross-modality translations were not considered.",
"In this section, we first present CTFN (Figure 2), which is capable of exploring bi-directional cross-modality translation via coupled learning.",
"On the basis of CTFN, a hierarchical architecture is established to exploit multiple bi-directional translations, leading to double multimodal fusion embeddings (Figure 4).",
"Then, the convolutional fusion block (Figure 3) is applied to further highlight explicit correlation among cross-modality translations.",
"The two benchmarks consist of three modalities: audio, video, and text.",
"Specifically, the above utterance-level modalities are denoted as $X_a \in \mathbb{R}^{T_a \times d_a}$, $X_v \in \mathbb{R}^{T_v \times d_v}$, and $X_t \in \mathbb{R}^{T_t \times d_t}$, respectively.",
"The number of utterances is denoted $T_i$ ($i \in \{a, v, t\}$), and $d_i$ ($i \in \{a, v, t\}$) stands for the dimension of the unimodal features.",
"For simplicity, we consider two unimodal representations $X_a$ and $X_v$ extracted from audio (A) and video (V), respectively.",
"In the primal process of CTFN, we focus on learning a directional translator $\mathrm{Tran}_{A \to V}(X_a, X_v)$ for translating the audio modality to video.",
"Then, the dual process aims to learn an inverse directional translator $\mathrm{Tran}_{V \to A}(X_v, X_a)$, allowing translation from the video modality to audio.",
"[Figure 3: Multimodal convolutional fusion block: $M_{at} \in \mathbb{R}^{T \times F_{at}}$ and $M_{av} \in \mathbb{R}^{T \times F_{av}}$ refer to the cross-modality translations, where $T$ and $F$ are the sizes of the time and feature domains, respectively; the translations are concatenated and convolved.]",
"Inspired by the success of the Transformer in natural language processing, the encoder of the Transformer is introduced to our model as the translation block, which is an efficient and adaptive manner of retrieving the long-range interplay along the temporal domain.",
"Importantly, the cyclic consistency constraint is presented to improve the translation performance.",
"And due to the coupled learning, CTFN is able to combine the primal and dual processes into a coupled structure, ensuring robustness with respect to missing modalities.",
"For the primal task, $X_a \in \mathbb{R}^{T_a \times d_a}$ is first delivered to a densely connected layer to receive a linear transformation $\bar{X}_a \in \mathbb{R}^{T_a \times L_a}$, where $L_a$ is the output dimension of the linear layer.",
"The corresponding query, key, and value matrices are denoted as $Q_a = \bar{X}_a W_{Q_a} \in \mathbb{R}^{T_a \times L_a}$, $K_a = \bar{X}_a W_{K_a} \in \mathbb{R}^{T_a \times L_a}$, and $V_a = \bar{X}_a W_{V_a} \in \mathbb{R}^{T_a \times L_a}$, where $W_{Q_a}, W_{K_a}, W_{V_a} \in \mathbb{R}^{L_a \times L_a}$ are weight matrices.",
"The translation from modality A to V is performed as $X_v' = \mathrm{Tran}_{A \to V}(X_a, X_v) \in \mathbb{R}^{T_a \times L_v}$, where $X_v'$ refers to the fake $X_v$ and $L_v$ is the scale coefficient.",
"Note that the input $X_a$ is directly delivered to the translation process, while the input $X_v$ is only used to analyze the difference between the real data $X_v$ and the fake output $X_v'$.",
"Subsequently, $X_v'$ is passed through $\mathrm{Tran}_{V \to A}$, leading to the reconstructed output $\hat{X}_a = \mathrm{Tran}_{V \to A}(X_v', X_a)$, where $X_a$ is only used to calculate the divergence between the real and reconstructed data.",
"Similarly, for the dual task with $X_v \in \mathbb{R}^{T_v \times d_v}$, we obtain the fake output $X_a' = \mathrm{Tran}_{V \to A}(X_v, X_a) \in \mathbb{R}^{T_a \times L_a}$ and the reconstructed representation $\hat{X}_v = \mathrm{Tran}_{A \to V}(X_a', X_v) \in \mathbb{R}^{T_v \times L_v}$.",
"Essentially, $\mathrm{Tran}_{A \to V}$ and $\mathrm{Tran}_{V \to A}$ are implemented by several sequential encoder layers.",
"During the translation period, we hypothesize that the intermediate encoder layer contains the cross-modality fusion information and effectively balances the contributions of the two modalities.",
"Hence, the outputs of the middle encoder layers, $\mathrm{Tran}_{A \to V}[L/2]$ and $\mathrm{Tran}_{V \to A}[L/2]$, stand for the multimodal fusion knowledge, where $L$ refers to the number of layers (when $L$ is odd, $L$ is replaced by $L+1$).",
"As for the model reward, the primal process has an immediate reward $r_p = \|X_a - \mathrm{Tran}_{V \to A}(X_v')\|_F$, and the dual-step reward is $r_d = \|X_v - \mathrm{Tran}_{A \to V}(X_a')\|_F$, indicating the similarity between the real data and the reconstructed output of the translator.",
"For simplicity, a linear combination is adopted to merge the primal and dual step rewards into a total model reward, i.e., $r_{all} = \lambda r_p + (1 - \lambda) r_d$, where $\lambda$ is employed to balance the contribution between the dual and primal blocks.",
"Additionally, the loss functions utilized in the coupled-translation multimodal fusion block are defined as follows: $l_{A \to V}(X_a, X_v) = \|\mathrm{Tran}_{A \to V}(X_a, X_v) - X_v\|_F + \|\mathrm{Tran}_{A \to V}(X_a', X_v) - X_v\|_F$; $l_{V \to A}(X_v, X_a) = \|\mathrm{Tran}_{V \to A}(X_v, X_a) - X_a\|_F + \|\mathrm{Tran}_{V \to A}(X_v', X_a) - X_a\|_F$; $l_{A \leftrightarrow V} = \lambda\, l_{A \to V}(X_a, X_v) + (1 - \lambda)\, l_{V \to A}(X_v, X_a)$ (2), where $l_{A \to V}(X_a, X_v)$ and $l_{V \to A}(X_v, X_a)$ refer to the training losses of the primal and dual translators, respectively, and $l_{A \leftrightarrow V}$ stands for the loss of the bi-directional translator unit.",
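As a hedged illustration (not the authors' code), the coupled objective of Eq. (2) can be rendered in PyTorch roughly as follows, assuming tran_av and tran_va are translator modules whose outputs match the target modality's shape, sequence lengths are equal across modalities, and lam plays the role of $\lambda$.

```python
import torch

def coupled_translation_loss(x_a, x_v, tran_av, tran_va, lam=0.5):
    x_v_fake = tran_av(x_a)          # primal translation A -> V
    x_a_fake = tran_va(x_v)          # dual translation   V -> A
    x_a_rec = tran_va(x_v_fake)      # cycle A -> V -> A
    x_v_rec = tran_av(x_a_fake)      # cycle V -> A -> V
    # Frobenius-norm reconstruction terms for each direction
    l_av = torch.norm(x_v_fake - x_v) + torch.norm(x_v_rec - x_v)
    l_va = torch.norm(x_a_fake - x_a) + torch.norm(x_a_rec - x_a)
    return lam * l_av + (1.0 - lam) * l_va   # relaxed cycle-consistency
```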
"Essentially, when the training of all coupled-translation blocks is finished, our model needs only one input modality at prediction time, without the help of the target modalities.",
"Indeed, $l_{A \leftrightarrow V}$ constitutes the cycle-consistency constraint in our coupled learning model.",
"Cycle-consistency is well known and refers to the combination of forward and backward cycle-consistency.",
"However, our goal is to solve the missing-modality problem in multimodal learning, which cannot be achieved by applying cycle-consistency straightforwardly.",
"This is because introducing this strict cycle-consistency into CTFN fails to effectively associate the primal task with the dual task of the coupled learning model.",
"To solve this problem, we relaxed the constraint of the original cycle-consistency by using a parameter $\lambda$ to balance the contributions of forward and backward cycle-consistency, leading to a much more flexible cycle-consistency.",
"Thanks to the great flexibility of the newly proposed cycle-consistency, we can adaptively and adequately associate the primal with the dual task, resulting in much more balanced consistency among modalities.",
"Based on CTFN, each modality is treated as the source modality $(M-1)$ times, which means that each modality holds $(M-1)$ directional translations, $\{\mathrm{Tran}_{\text{source} \to \text{modality}_m}\}_{m=1}^{M}$, where $M$ refers to the total number of modalities.",
"For instance, given the audio modality, we can retrieve the following two modality-guided translations: $[\mathrm{Tran}_{a \to v}[L/2],\, video'] = \mathrm{Tran}_{a \to v}(audio, video)$ and $[\mathrm{Tran}_{a \to t}[L/2],\, text'] = \mathrm{Tran}_{a \to t}(audio, text)$ (3).",
"Note that audio plays a key role in the different cross-modality translations and provides strong guidance for capturing various cross-modality interplays.",
"To blend the contribution of the source modality (audio) effectively, a convolution fusion block is incorporated to explore explicit and local correlations among modality-guided translations.",
"Initially, the two cross-modality intermediate correlations $\mathrm{Tran}_{audio \to video}[L/2]$ and $\mathrm{Tran}_{audio \to text}[L/2]$ are concatenated along the feature domain into a unified representation; since the time-sequence sizes are equal ($T_a = T_v = T_t$), the concatenation is of size $T_a \times (L_v + L_t)$: $Z_{concat} = \mathrm{Tran}_{a \to v}[L/2] \oplus \mathrm{Tran}_{a \to t}[L/2]$ (4).",
"Subsequently, a temporal convolution is employed to further retrieve explicit interactions among the cross-modality translations.",
"Specifically, we adopt a 1D temporal convolutional layer to exploit the local patterns in a light manner: $\hat{Z}_{concat} = \mathrm{Conv1D}(Z_{concat}, K_{concat}) \in \mathbb{R}^{T_a \times L_d}$ (5), where $K_{concat}$ is the size of the convolutional kernel and $L_d$ is the length of the cross-modality integration dimension.",
"The temporal kernel is used to perform the convolutional operation along the feature dimension, allowing the model to further exploit local interplay among cross-modality translations.",
"That is to say, the local interplay fully exploits the contributions from the modality-guided translations.",
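A minimal sketch of Eqs. (4)-(5): concatenate two translation outputs along the feature axis, then apply a 1D convolution over the temporal axis; all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

T_a, L_v, L_t, L_d, kernel = 20, 64, 64, 128, 3
m_av = torch.randn(1, T_a, L_v)      # stands in for Tran_{a->v}[L/2]
m_at = torch.randn(1, T_a, L_t)      # stands in for Tran_{a->t}[L/2]

z = torch.cat([m_av, m_at], dim=-1)  # (1, T_a, L_v + L_t)
conv = nn.Conv1d(in_channels=L_v + L_t, out_channels=L_d,
                 kernel_size=kernel, padding=kernel // 2)
# Conv1d expects (batch, channels, time), so transpose around the call
z_fused = conv(z.transpose(1, 2)).transpose(1, 2)  # (1, T_a, L_d)
```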
"On the basis of CTFN and the convolutional multimodal fusion network, a hierarchical architecture is proposed for exploiting multiple bi-directional translations, leading to double multimodal fusion embeddings.",
"For instance, given $M$ modalities, our model can achieve double $C_M^2$ embeddings.",
"[Figure 5: Only a single input modality (audio) is employed to perform the multimodal fusion task during the prediction period; the pre-trained translators produce fake video and text representations from audio.]",
"As illustrated in Figure 4, the proposed architecture consists of three CTFNs: $\mathrm{Tran}_{A \leftrightarrow V}$, $\mathrm{Tran}_{A \leftrightarrow T}$, and $\mathrm{Tran}_{V \leftrightarrow T}$.",
"Considering the contribution of the guidance (source) modality, the modality-guided translations are grouped as $[\mathrm{Tran}_{A \to V}[L/2], \mathrm{Tran}_{A \to T}[L/2]]$ for source A, $[\mathrm{Tran}_{V \to T}[L/2], \mathrm{Tran}_{V \to A}[L/2]]$ for source V, and $[\mathrm{Tran}_{T \to A}[L/2], \mathrm{Tran}_{T \to V}[L/2]]$ for source T, respectively.",
"Similarly, when taking the contribution of the target modalities into account, the corresponding modality-guided translations are $[\mathrm{Tran}_{V \to A}[L/2], \mathrm{Tran}_{T \to A}[L/2]]$ for target A, $[\mathrm{Tran}_{T \to V}[L/2], \mathrm{Tran}_{A \to V}[L/2]]$ for target V, and $[\mathrm{Tran}_{A \to T}[L/2], \mathrm{Tran}_{V \to T}[L/2]]$ for target T, respectively.",
"Subsequently, the convolutional fusion layer is used to further exploit explicit local interplay among modality-guided translations associated with the same source/target modality, which fully leverages the contribution of the source/target modality.",
"Essentially, as demonstrated in Figure 4, our model has 12+1 loss constraints in total: 3 CTFNs, each with 4 training losses (primal and dual translator losses), plus 1 classification loss.",
"However, we do not need to balance these objectives jointly; this is achieved by our training strategy, in which the 3 CTFNs are trained individually.",
"For each CTFN, one hyper-parameter $\lambda$ is introduced to balance the losses of the primal and dual translators, and this hyper-parameter is shared among the 3 CTFNs.",
"Hence, the 3 CTFNs need only 1 hyper-parameter to balance the training losses, which is easy to tune.",
"The classification loss is used for training the classifier on the outputs of the 3 CTFNs.",
"Datasets.",
"CMU-MOSI consists of 2199 opinion video clips from online sharing websites (e.g., YouTube).",
"Each utterance of a video clip is annotated with a specific sentiment label of positive or negative on the scale $[-3, +3]$.",
"The corresponding training, validation, and test set sizes are 1284, 229, and 686, respectively.",
"Additionally, the same speaker will not appear in both the training and testing sets, which allows speaker-independent joint representations to be exploited.",
"The MELD dataset contains 13000 utterances from the famous TV series Friends.",
"Each utterance is annotated with emotion and sentiment labels, considering 7 classes of emotion tag (anger, disgust, fear, joy, neutral, sadness, and surprise) and 3 sentimental tendency levels (positive, neutral, and negative).",
"Hence, the original dataset can be denoted as MELD (Sentiment) and MELD (Emotion) with respect to the data annotation; we only verify our model on MELD (Sentiment).",
"Note that CMU-MOSI and MELD are public and widely used datasets that have already been aligned and segmented.",
"Features.",
"For the CMU-MOSI dataset, we adopt the same preprocessing manner as MFN (Zadeh et al., 2018) to extract the low-level representations of the multimodal data, synchronized at the utterance level to be consistent with the text modality.",
"For the MELD benchmark, we follow the related work of MELD, in which the 300-dimensional GloVe (Pennington et al., 2014) text vectors are fed into a 1D-CNN (Chen et al., 2017) layer to extract the textual representation, and audio-based descriptors are explored with the popular toolkit openSMILE (Eyben et al., 2010), while visual features were not taken into account for the sentiment analysis.",
"Table 1 (Comparison of performance results for sentiment analysis on the CMU-MOSI and MELD (Sentiment) benchmarks using various SOTA models; columns: CMU-MOSI bi-modality (video, audio), (text, video), (text, audio); CMU-MOSI tri-modality (text, audio, video); MELD (Sentiment) bi-modality (text, audio)): GME-LSTM (Chung et al., 2014) 52.90, 74.30, 73.50, 76.50, 66.46; bc-LSTM (Poria et al., 2017b) 56.52, 78.59, 78.86, 79.26, 66.09; MELD-based (Poria et al., 2019) 54.79, 76.60, 76.99, 79.19, 66.68; CHFusion (Majumder et al., 2018) 54.49, 74.77, 78.54, 76.51, 65.85; MMMU-BA (Ghosal et al., 2018) 57.45, 80.85, 79.92, 81.25, 65.56; Seq2Seq2Sent (Pham et al., 2018) 58.00, 67.00, 66.00, 70.00, 63.84; MCTN (Pham et al., 2019) 53.10, 76.80, 76.40, 79.30, 66.27; TransModality (Wang et al., 2020) 59.97, 80.58, 81.25, 82.71, 67.04; CTFN (ours, L=1) 62.20, 80.49, 81.40, 80.18, 67.82; CTFN (ours, L=3) 63.11, 81.55, 82.16, 82.77, 67.78; CTFN (ours, L=6) 64.48, 80.79, 81.71, 81.10, 67.24.",
"Comparisons.",
"We introduce translation-based and non-translation-based models as the baselines for this work.",
"Translation-based: Multimodal Cyclic Translation Network (MCTN), Sequence to Sequence for Sentiment (Seq2Seq2Sent), Multimodal Sentiment Analysis with Transformer (TransModality).",
"And non-translation-based: bidirectional contextual LSTM (bc-LSTM), Gated Embedding LSTM (GME-LSTM), the Multimodal EmotionLines Dataset baseline model (MELD-base), Hierarchical Fusion with Context Modeling (CHFusion), and Multi-Modal Multi-Utterance Bi-Modal Attention (MMMU-BA).",
"Performance comparison with state-of-the-art models.",
"Firstly, we analyzed the performance between state-of-the-art baselines and our proposed model.",
"The bottom rows in Table 1 indicate the effectiveness and superiority of our model.",
"Particularly, on CMU-MOSI dataset, CTFN exceeded the previous best TransModality on (video, audio) by a margin of 4.51.",
"Additionally, on the MELD (Sentiment) dataset, the empirical improvement of CTFN was 0.78.",
"It is interesting to note that the improvement of (video, audio) is more significant than (text, video) and (text, audio).",
"This implies that the coupled-translation structure is capable of efficiently decreasing the risk of interference between video and audio, and further leveraging the explicit consistency between auxiliary features.",
"As for (text, audio, video), CTFN exceeds the previous best TransModality with an improvement of 0.06, leading to a comparable performance.",
"Indeed, for the same tri-modality fusion task, TransModality needs 4 encoders and 4 decoders, while CTFN only requires 6 encoders.",
"It should be emphasized that the cyclic consistency mechanism could contribute to a much lighter model, as well as the more effective bi-directional translation.",
"In addition, compared to the bi-modality setting, the tri-modality case achieved the improvement of 0.61, indicating the benefits brought by hierarchical architecture and convolution fusion.",
"Effect of CTFN with missing modalities.",
"Existing translation-based manners focus only on the joint representation between modalities, and ignore the potential occurrence of missing modalities.",
"Therefore, we analyzed how missing modalities may affect the final performance of CTFN and of the sequential translation-based model Seq2Seq2Sent.",
"Note that Seq2Seq2Sent only employs an LSTM to analyze the uni-modality input rather than the translation-based method.",
"Specifically, we take the hierarchical architecture combining three CTFNs as the testing model.",
"From Table 2, we observe that, compared to the full setting (text, audio, video), the text-based settings in which audio and/or video are missing but text is available seem to reach comparable results with only a relatively small performance drop.",
"In contrast, when text is missing, the model suffers a relatively large performance drop, which implies that the language modality contains much more discriminative sentiment information than audio and video, leading to significantly better performance.",
"Essentially, the performance in the setting where only audio is available demonstrates that hierarchical CTFN is able to maintain robustness and consistency when considering only a single input modality.",
"In other words, the cyclic consistency mechanism allows CTFN to fully exploit the cross-modality interplay; thus hierarchical CTFN can feed the single modality into the various pre-trained CTFNs to retrieve the multimodal fusion message.",
"Effect of the translation direction.",
"In this paper, we propose a coupled-translation block, which aims to embrace fusion messages from the bidirectional translation process.",
"Hence, we are interested in investigating the impact of the translation direction.",
"Figure 6 depicts the performance of various translations, considering the (audio, text), (audio, video), and (text, video) translations.",
"For the (audio, text) instance, the translation text→audio achieves better performance than audio→text.",
"Similarly, the translation text→video surpasses the result of video→text.",
"However, the performance of audio→video and video→audio seems to be quite similar.",
"The superiority of text→video and text→audio may demonstrate that the text modality possesses much more sentiment information.",
"Effect of the translator layer.",
"As each translator is comprised of several sequential encoder layers, we assume that the output representation of a specific layer may affect the performance of the proposed model.",
"For simplicity, we perform the related task on CMU-MOSI with the setting (a, v, t), as well as (t, a) on MELD (Sentiment).",
"Initially, we retrieve the embedding from a specific layer, where the layer ranges from 1 to $L$ ($L$ is the total number of layers).",
"In Figure 7, it is interesting to note that the model reaches the peak value at layer 5 on CMU-MOSI, which means that the output of the fifth layer embraces the most discriminative fusion message.",
"In comparison, on MELD (Sentiment), the model achieves the best performance at layer 1 , which may imply that the simple translator associated with only one layer is able to capture the joint representation for the simple case (text, audio).",
"In conclusion, the lower encoder layer may involve low-level characteristics of interplay, while the higher encoder layer may embrace the explicit messages.",
"Additionally, the best-performing encoder layer depends on the corresponding task and dataset.",
"We also tried (text, audio) on MOSI, and CTFN maximizes the performance at layer 3.",
"Compared to (text, audio, video), (text, audio) is a relatively simple case; thus a lower encoder layer may be sufficient to capture the interaction between text and audio.",
"Effect of concatenation strategy of translation.",
"In our work, the translations associated with the same guidance (source) modality are concatenated along the feature domain.",
"[Figure 8: Effect of concatenation strategy via source/target modality on MOSI; accuracy and F1 score for source-based vs. target-based concatenations, e.g., $[A \to T, A \to V]$ vs. $[T \to A, V \to A]$.]",
"As each modality serves as the source and the target modality in turn, we are interested in analyzing the impact of the distinct concatenation strategies, e.g., concatenating the translations via the same source or target modality.",
"As shown in Figure 8, it is obvious that the audio-based target concatenation $[(T \to A) \oplus (V \to A)]$ performs significantly better than $[(A \to T) \oplus (A \to V)]$ by a large margin.",
"Analogously, the video-based target concatenation $[(T \to V) \oplus (A \to V)]$ works better than $[(V \to A) \oplus (V \to T)]$.",
"The above performance may indicate that the joint representation achieves significantly improved benefits with the help of the guidance modality text.",
"In conclusion, when the text modality serves as the guidance modality, it may effectively leverage the contributions from audio and video, and further boost task performance in a robust and consistent way.",
"In this paper, we present a novel hierarchical multimodal fusion architecture using coupled-translation fusion network (CTFN).",
"Initially, CTFN is utilized for exploiting bi-directional interplay via coupled learning, ensuring robustness with respect to missing modalities.",
"Specifically, the cyclic mechanism directly discards the decoder and embraces only the encoder of the Transformer, which could contribute to a much lighter model.",
"Due to the coupled learning, CTFN is able to conduct bi-directional cross-modality intercorrelation in parallel.",
"Based on CTFN, a hierarchical architecture is further established to exploit multiple bi-directional translations, leading to double multimodal fusion embeddings compared with traditional translation methods.",
"Additionally, a multimodal convolutional fusion block is employed to further explore the complementarity and consistency between cross-modality translations.",
"Essentially, the parallel fusion strategy allows the model to maintain robustness and flexibility when considering only one input modality.",
"CTFN was verified on two public multimodal sentiment benchmarks; the experiments demonstrate the effectiveness and flexibility of CTFN, which achieves state-of-the-art or comparable performance on CMU-MOSI and MELD (Sentiment).",
"For future work, we would like to evaluate CTFN on more multimodal fusion tasks.",
"The source code can be obtained from https://github.com/deepsuperviser/CTFN.",
"We sincerely thank the anonymous reviewers for their insightful comments and valuable suggestions.",
"This work was supported by National Key R&D Program of China for Intergovernmental International Science and Technology Innovation Cooperation Project (2017YFE0116800), National Natural Science Foundation of China (U20B2074, U1909202), Science and Technology Program of Zhejiang Province (2018C04012), Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province (2020E10010), JSPS KAKENHI (Grant No. 20H04249), and supported by the Ministry of Education and Science of the Russian Federation (Grant 14.756.31.0001)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"We propose to measure fine-grained domain relevance, the degree to which a term is relevant to a broad (e.g., computer science) or narrow (e.g., deep learning) domain.",
"Such measurement is crucial for many downstream tasks in natural language processing.",
"To handle long-tail terms, we build a core-anchored semantic graph, which uses core terms with rich description information to bridge the vast remaining fringe terms semantically.",
"To support a fine-grained domain without relying on a matching corpus for supervision, we develop hierarchical core-fringe learning , which learns core and fringe terms jointly in a semi-supervised manner contextualized in the hierarchy of the domain.",
"To reduce expensive human efforts , we employ automatic annotation and hierarchical positive-unlabeled learning .",
"Our approach applies to big or small domains, covers head or tail terms, and requires little human effort.",
"Extensive experiments demonstrate that our methods outperform strong baselines and even surpass professional human performance.",
"1 Introduction: With countless terms in human languages, no one can know all terms, especially those belonging to a technical domain.",
"Even for domain experts, it is quite challenging to identify all terms in the domains they are specialized in.",
"However, recognizing and understanding domain-relevant terms is the basis to master domain knowledge.",
"And having a sense of domains that terms are relevant to is an initial and crucial step for term understanding.",
"In this paper, as our problem, we propose to measure fine-grained domain relevance, which is defined as the degree to which a term is relevant to a given domain, where the given domain can be broad or narrow, an important property of terms that has not been carefully studied before.",
"(The code and data, along with several term lists with domain relevance scores produced by our methods, are available at https://github.com/jeffhj/domain-relevance.)",
"E.g., deep learning is a term relevant to the domains of computer science and, more specifically, machine learning, but not so much to others like database or compiler.",
"Thus, it has a high domain relevance for the former domains but a low one for the latter.",
"From another perspective, we propose to decouple extraction and evaluation in automatic term extraction that aims to extract domain-specific terms from texts (Amjadian et al., 2018; Hatty et al., 2020).",
"This decoupling setting is novel and useful because it is not limited to broad domains where a domain-specific corpus is available, and also does not require terms must appear in the corpus.",
"A good command of domain relevance of terms will facilitate many downstream applications.",
"E.g., to build a domain taxonomy or ontology, a crucial step is to acquire relevant terms (Al-Aswadi et al., 2019; Shang et al., 2020).",
"Also, it can provide or filter necessary candidate terms for domain-focused natural language tasks (Huang et al., 2020).",
"In addition, for text classification and recommendation, the domain relevance of a document can be measured by that of its terms.",
"We aim to measure fine-grained domain relevance as a semantic property of any term in human languages.",
"Therefore, to be practical, the proposed model for domain relevance measuring must meet the following requirements : 1) covering almost all terms in human languages;",
"2) applying to a wide range of broad and narrow domains; and",
"3) relying on little or no human annotation.",
"However, among countless terms, only some of them are popular ones organized and associated with rich information on the Web, e.g., Wikipedia pages, which we can leverage to characterize the domain relevance of such head terms.",
"In contrast, there are numerous long-tail terms, those not as frequently used, which lack such descriptive information.",
"On the other hand, among possible domains of interest, only those broad ones (e.g., physics, computer science) naturally have domain-specific corpora.",
"Many existing works (Velardi et al., 2001; Amjadian et al., 2018; Hatty et al., 2020) have relied on such domain-specific corpora to identify domain-specific terms by contrasting their distributions to general ones.",
"In contrast, those fine-grained domains (e.g., quantum mechanics, deep learning), which can be any topics of interest, do not usually have a matching corpus.",
"As Challenge 2 , how to achieve good performance for a fine-grained domain without assuming a domain-specific corpus?",
"Finally, automatic learning usually requires large amounts of training data.",
"Since there are countless terms and plentiful domains, human annotation is very time-consuming and laborious.",
"As Challenge 3 , how to reduce expensive human efforts when applying machine learning methods to our problem?",
"As our solutions, we propose a hierarchical core-fringe domain relevance learning approach that addresses these challenges.",
"First, to deal with long-tail terms, we design the core-anchored semantic graph, which includes core terms that have rich descriptions and fringe terms without that information.",
"Based on this graph, we can bridge the domain relevance through term relevance and include any term in evaluation.",
"Second , to leverage the graph and support fine-grained domains without relying on domain-specific corpora, we propose hierarchical core-fringe learning , which learns the domain relevance of core and fringe terms jointly in a semi-supervised manner contextualized in the hierarchy of the domain.",
"Third, to reduce human effort, we employ automatic annotation and hierarchical positive-unlabeled learning, which allow us to train our model with little or even no human effort.",
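As a rough, hedged illustration of the semi-supervised, graph-based flavor of this learning (the actual hierarchical model is more elaborate), a single GCN-style propagation step lets labels on core terms inform connected fringe terms:

```python
import torch
import torch.nn as nn

class GraphPropagationLayer(nn.Module):
    """One GCN-style update: aggregate neighbor features, then transform."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, a_norm, h):
        # a_norm: (N, N) normalized adjacency of the term graph
        # h:      (N, d_in) term features (core + fringe)
        return torch.relu(self.lin(a_norm @ h))

# training would apply a relevance loss only on (automatically) labeled
# core terms; fringe terms receive supervision indirectly via the graph
```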
"Overall, our framework consists of two processes:",
"1) the offline construction process , where a domain relevance measuring model is trained by taking a large set of seed terms and their features as input;",
"2) the online query process , where the trained model can return the domain relevance of query terms by including them in the core-anchored semantic graph.",
"Our approach applies to a wide range of domains and can handle any query, while nearly no human effort is required.",
"To validate the effectiveness of our proposed methods, we conduct extensive experiments on various domains with different settings.",
"Results show our methods significantly outperform well-designed baselines and even surpass human performance by professionals.",
"The problem of domain relevance of terms is related to automatic term extraction, which aims to extract domain-specific terms from texts automatically.",
"Compared to our task, automatic term extraction, where extraction and evaluation are combined, possesses a limited application and has a relatively large dependence on corpora and human annotation, so it is limited to several broad domains and may only cover a small number of terms.",
"Existing approaches for automatic term extraction can be roughly divided into three categories: linguistic, statistical, and machine learning methods.",
"Linguistic methods apply human-designed rules to identify technical/legal terms in a target corpus (Handler et al., 2016; Ha and Hyland, 2017).",
"Statistical methods use statistical information, e.g., frequency of terms, to identify terms from a corpus (Frantzi et al., 2000; Nakagawa and Mori, 2002; Velardi et al., 2001; Drouin, 2003; Meijer et al., 2014).",
"Machine learning methods learn a classifier, e.g., logistic regression classifier, with manually labeled data (Conrado et al., 2013; Fedorenko et al., 2014; Hatty et al., 2017).",
"There also exists some work on automatic term extraction with Wikipedia (Vivaldi et al., 2012; Wu et al., 2012).",
"However, terms studied there are restricted to terms associated with a Wikipedia page.",
"Recently, inspired by distributed representations of words (Mikolov et al., 2013a), methods based on deep learning are proposed and achieve state-of-the-art performance.",
"Amjadian et al. (2016, 2018) design supervised learning methods by taking the concatenation of domain-specific and general word embeddings as input.",
"Hatty et al. (2020) propose a multi-channel neural network model that leverages domain-specific and general word embeddings.",
"The techniques behind our hierarchical core-fringe learning methods are related to research on graph neural networks (GNNs) (Kipf and Welling, 2017; Hamilton et al., 2017); hierarchical text classification (Vens et al., 2008; Wehrmann et al., 2018; Zhou et al., 2020); and positive-unlabeled learning (Liu et al., 2003; Elkan and Noto, 2008; Bekker and Davis, 2020).",
"Definition 1 (Fine-Grained Domain Relevance) The fine-grained domain relevance of a term is the degree that the term is relevant to a given domain, and the given domain can be broad or narrow.",
"The domain relevance of terms depends on many factors.",
"In general, a term with higher semantic relevance, broader meaning scope, and better usage possesses a higher domain relevance regarding the target domain.",
"To measure the fine-grained domain relevance of terms, we propose a hierarchical core-fringe approach, which includes an offline training process and can handle any query term in evaluation.",
"The overview of the framework is illustrated in Figure 1.",
"There exist countless terms in human languages; thus it is impractical to include all terms in a system initially.",
"To build the offline system, we need to provide seed terms, which can come from knowledge bases or be extracted from broad, large corpora by existing term/phrase extraction methods (Handler et al., 2016; Shang et al., 2018).",
"In addition to providing seed terms, we should also give some knowledge to machines so that they can differentiate whether a term is domain-relevant or not.",
"To this end, we can leverage the description information of terms.",
"For instance, Wikipedia contains a large number of terms (the surface form of page titles), where each term is associated with a Wikipedia article page.",
"With this page information, humans can easily judge whether a term is domain-relevant or not.",
"In Section 3.3, we will show the labeling can even be done completely automatically.",
"However, considering the countless terms, the number of terms that are well-organized and associated with rich description is small.",
"How to measure the fine-grained domain relevance of terms without rich information is quite challenging for both machines and humans.",
"Fortunately, terms are not isolated, while complex relations exist between them.",
"If a term is relevant to a domain, it must also be relevant to some domain-relevant terms and vice versa.",
"This is to say, we can bridge the domain relevance of terms through term relevance.",
"Summarizing the observations, we divide terms into two categories: core terms , which are terms associated with rich description information, e.g., Wikipedia article pages, and fringe terms , which are terms without that information.",
"We assume, for each term, there exist some relevant core terms that share similar domains.",
"If we can find the most relevant core terms for a given term, its domain relevance can be evaluated with the help of those terms.",
"To this end, we can utilize the rich information of core terms for ranking.",
"Taking Wikipedia as an example, each core term is associated with an article page, so they can be returned as the ranking results (result term) for a given term (query term).",
"Considering the data resources, we use the built-in Elasticsearch based Wikipedia search engine 2 (Gormley and Tong, 2015).",
"More specifically, we set the maximum number of links as k ( 5 as default).",
"For a query term v , i.e., any seed term, we first achieve the top 2 k Wikipedia pages with exact match.",
"For each result term u in the core, we create a link from u to v .",
"If the number of links is smaller than k , we do this process again without exact match and build additional links.",
"Finally, we construct a term graph, named Core-Anchored Semantic Graph , where nodes are terms and edges are links between terms.",
"In addition, for terms that are not provided initially, we can also handle them as fringe terms and connect them to core terms in evaluation.",
"In this way, we can include any term in the graph.",
"In this section, we aim to design learning methods to learn the fine-grained domain relevance of core and fringe terms jointly.",
"In addition to using the term graph, we can achieve features of both core and fringe terms based on their linguistic and statistical properties (Terryn et al., 2019; Conrado et al., 2013) or distributed representations (Mikolov et al., 2013b; Yu and Dredze, 2015).",
"We assume the labels, i.e., domain-relevant or not, of core terms are available, which can be achieved by an automatic annotation mechanism introduced in Section 3.3.",
"As stated above, if a term is highly relevant to a given domain, it must also be highly relevant to some other terms with a high domain relevance and vice versa.",
"Therefore, to measure the domain relevance of a term, in addition to using its own features, we aggregate its neighbors' features.",
"Specifically, we propagate the features of terms via the term graph and use the label information of core terms for supervision.",
"In this way, core and fringe terms help each other, and the domain relevance is learned jointly.",
"The propagation process can be achieved by graph convolutions (Hammond et al., 2011).",
"We first apply the vanilla graph convolutional networks (GCNs) (Kipf and Welling, 2017) in our framework.",
"The graph convolution operation (GCNConv) at the l -th layer is formulated as the 2 https://en.wikipedia.org/w/index.php?",
"where N i is the neighbor set of node i .",
"c ij is the normalization constant.",
"h ( l ) j R d ( l ) 1 is the hidden state of node j at the l -th layer, with d ( l ) being the number of units; h (0) j = x j , which is the feature vector of node j .",
"W ( l ) c R d ( l +1) d ( l ) is the trainable weight matrix at the l -th layer, and b ( l ) c is the bias vector.",
"( ) is the nonlinearity activation function, e.g., ReLU ( ) = max(0 , ) .",
"Since core terms are labeled as domain-relevant or not, we can use the labels to calculate the loss: L = (cid:88) i V core ( y i log z i + (1 y i ) log(1 z i )) , (2) where y i is the label of node i regarding the target domain, and z i = ( h o i ) , with h o i being the output of the last GCNConv layer for node i and ( ) being the sigmoid function.",
"The weights of the model are trained by minimizing the loss.",
"The relative domain relevance is obtained as s = z .",
"Combining with the overall framework, we get the first domain relevance measuring model, CFL , i.e., C oreF ringe Domain Relevance L earning.",
"CFL is useful to measure the domain relevance for broad domains such as computer science.",
"For domains with relatively narrow scopes, e.g., machine learning, we can also leverage the label information of domains at the higher level of the hierarchy, e.g., CS AI ML, which is based on the idea that a domain-relevant term regarding the target domain should also be relevant to the parent domain.",
"Inspired by related work on hierarchical multi-label classification (Vens et al., 2008; Wehrmann et al., 2018), we introduce a hierarchical learning method considering both global and local information.",
"We first apply l c GCNConv layers according to Eq.",
"(1) and get the output of the last GCNConv layer, which is h ( l c ) i .",
"In order not to confuse, we omit the subscript that identifies the node number.",
"For each domain in the hierarchy, we introduce a hierarchical global activation a p .",
"The activation at the ( l + 1) -th level of the hierarchy is given as a ( l +1) p = ( W ( l ) p [ a ( l ) p ; h ( l c ) ] + b ( l ) p ) , (3) where [ ; ] indicates the concatenation of two vectors; a (1) p = ( W (0) p h ( l c ) + b (0) p ) .",
"The global information is produced after a fully connected layer: z p = ( W ( l p ) p a ( l p ) p + b ( l p ) p ) , (4) where l p is the total number of hierarchical levels.",
"To achieve the local information for each level of the hierarchy, the model first generates the local hidden state a ( l ) q by a fully connected layer: a ( l ) q = ( W ( l ) t a ( l ) p + b ( l ) t ) .",
"(5) The local information at the l -th level of the hierarchy is then produced as z ( l ) q = ( W ( l ) q a ( l ) q + b ( l ) q ) .",
"(6) In our core-fringe framework, all the core terms are labeled at each level of the hierarchy.",
"Therefore, the loss of hierarchical learning is computed as L h = (cid:15) ( z p , y ( l p ) ) + l p (cid:88) l =1 (cid:15) ( z ( l ) q , y ( l ) ) , (7) where y ( l ) denotes the labels regarding the domain at the l -th level of the hierarchy and (cid:15) ( z , y ) is the binary cross-entropy loss described in Eq.",
"(2).",
"In testing, The relative domain relevance s is calculated as s = z p + (1 ) ( z (1) q z (2) q , ..., z ( l p ) q ) , (8) where denotes element-wise multiplication.",
"is a hyperparameter to balance the global and local information ( 0 . 5 as default).",
"Combining with our general framework, we refer to this model as HiCFL , i.e., Hi erarchical CFL .",
"Online Query Process .",
"If seed terms are provided by extracting from broad, large corpora relevant to the target domain, most terms of interest will be already included in the offline process.",
"In evaluation, for terms that are not provided initially, our model treats them as fringe terms.",
"Specifically, when receiving such a term, the model connects it to core terms by the method described in Section 3.1.",
"With its features (e.g., compositional term embeddings) or only its neighbors' features (when features cannot be generated directly), the trained model can return the domain relevance of any query.",
"Automatic Annotation .",
"For the fine-grained domain relevance problem, human annotation is very time-consuming and laborious because the number of core terms is very large regarding a wide range of domains.",
"Fortunately, in addition to building the term graph, we can also leverage the rich information of core terms for automatic annotation.",
"In the core-anchored semantic graph constructed with Wikipedia, each core term is associated with a Wikipedia page, and each page is assigned one or more categories.",
"All the categories form a hierarchy, furthermore providing a category tree.",
"For a given domain, we can first traverse from a root category and collect some gold subcategories.",
"For instance, for computer science, we treat category: subfields of computer science 3 as the root category and take categories at the first three levels of it as gold subcategories.",
"Then we collect categories for each core term and examine whether the term itself or one of the categories is a gold subcategory.",
"If so, we label the term as positive.",
"Otherwise, we label it as negative.",
"We can also combine gold subcategories from some existing domain taxonomies and extract the categories of core terms from the text description, which usually contains useful text patterns like x is a subfield of y.",
"Hierarchical Positive-Unlabeled Learning .",
"According to the above methods, we can learn the fine-grained domain relevance of terms for any domain as long as we can collect enough gold subcategories for that domain.",
"However, for domains at the low level of the hierarchy, e.g., deep learning, a category tree might not be available in Wikipedia.",
"To deal with this issue, we apply our learning methods in a positive-unlabeled (PU) setting (Bekker and Davis, 2020), where only a small number of terms, e.g., 10, are labeled as positive, and all the other terms are unlabeled.",
"We use this setting based on the following consideration: if a user is interested in a specific domain, it is quite easy for her to give some important terms relevant to that domain.",
"Benefiting from our hierarchical core-fringe learning approach, we can still obtain labels for domains at the high level of the hierarchy with the automatic annotation mechanism.",
"Therefore, all the negative examples of the last labeled hierarchy can be used as reliable negatives for the target domain.",
"For instance, if the target domain is deep learning, which is in the CS AI ML DL hierarchy, we consider all the non-ML terms as the reliable negatives for DL.",
"Taking the positively 3 https://en.wikipedia.org/wiki/ Category:Subfields_of_computer_science labeled examples and the reliable negatives for supervision, we can learn the domain relevance of terms by our proposed HiCFL model contextualized in the hierarchy of the domain.",
"In this section, we evaluate our model from different perspectives.",
"1) We compare with baselines by treating some labeled terms as queries.",
"2) We compare with human professionals by letting humans and machines judge which term in a query pair is more relevant to a target domain.",
"3) We conduct intuitive case studies by ranking terms according to their domain relevance.",
"Datasets and Preprocessing .",
"To build the system, for offline processing, we extract seed terms from the arXiv dataset (version",
"6) 4 .",
"As an example, for computer science or its sub-domains, we collect the abstracts in computer science according to the arXiv Category Taxonomy 5 , and apply phrasemachine to extract terms (Handler et al., 2016) with lemmatization and several fil-tering rules: frequency > 10 ; length 6 ; only contain letters, numbers, and hyphen; not a stop-word or a single letter.",
"We select three broad domains, including computer science (CS), physics (Phy), and mathematics (Math); and three narrow sub-domains of them, including machine learning (ML), quantum mechanics (QM), and abstract algebra (AA), with the hierarchies CS AI ML, Phy mechanics QM, and Math algebra AA.",
"Each broad domain and its sub-domains share seed terms because they share a corpus.",
"To achieve gold subcategories for automatic annotation (Section 3.3), we collect subcategories at the first three levels of a root category (e.g., category: subfields of physics ) for broad domains (e.g., physics); or the first two levels for narrow domains, e.g., category: machine learning for machine learning.",
"Table 1 reports the total sizes and the ratios that are core terms.",
"Baselines .",
"Since our task on fine-grained domain relevance is new, there is no existing baseline for model comparison.",
"We adapt the following models on relevant tasks in our setting with additional inputs (e.g., domain-specific corpora): 4 https://www.kaggle.com/ Cornell-University/arxiv 5 https://arxiv.org/category_taxonomy domain #terms core ratio CS ML 113,038 27.7% Phy QM 416,431 12.1% Math AA 103,984 26.4% Table 1: The statistics of the data.",
"Relative Domain Frequency (RDF) : Since domain-relevant terms usually occur more in a domain-specific corpus, we apply a statistical method using freq s ( w ) / freq g ( w ) to measure the domain relevance of term w , where freq s ( ) and freq g ( ) denote the frequency of occurrence in the domain-specific/general corpora respectively.",
"Logistic Regression (LR) : Logistic regression is a standard supervised learning method.",
"We use core terms with labels (domain-relevant or not) as training data, where features are term embeddings trained by a general corpus.",
"Multilayer Perceptron (MLP) : MLP is a standard neural neural-based model.",
"We train MLP using embeddings trained with a domain-specific corpus or a general corpus as term features, respectively.",
"We also concatenate the two embeddings as features (Amjadian et al., 2016, 2018).",
"Multi-Channel (MC) : Multi-Channel (Hatty et al., 2020) is the state-of-the-art model for automatic term extraction, which is based on a multi-channel neural network that takes domain-specific and general corpora as input.",
"Training .",
"For all supervised learning methods, we apply automatic annotation in Section 3.3, i.e., we automatically label all the core terms for model training.",
"In the PU setting, we remove labels on target domains.",
"Only 20 (10 in the case studies) domain-relevant core terms are randomly selected as the positives, with the remaining terms unlabeled.",
"In training, all the negative examples at the previous level of the hierarchy are used as reliable negatives.",
"Implementation Details .",
"Though our proposed methods are independent of corpora, some baselines (e.g., MC) require term embeddings trained from general/domain-specific corpora.",
"For easy and fair comparison, we adopt the following approach to generate term features.",
"We consider each term as a single token, and apply word2vec CBOW (Mikolov et al., 2013a) with negative sampling, where dimensionality is 100 , window size is 5 , and number of negative samples is 5 .",
"The training cor-Computer Science Physics Mathematics ROC-AUC PR-AUC ROC-AUC PR-AUC ROC-AUC PR-AUC RDF SG 0.714 0.417 0.736 0.496 0.694 0.579 LR G 0.802 0.000 0.535 0.000 0.822 0.000 0.670 0.000 0.854 0.000 0.769 0.000 MLP S 0.819 0.003 0.594 0.003 0.853 0.001 0.739 0.004 0.868 0.000 0.803 0.001 MLP G 0.863 0.001 0.674 0.002 0.874 0.001 0.761 0.003 0.904 0.001 0.846 0.002 MLP SG 0.867 0.001 0.667 0.002 0.875 0.001 0.765 0.002 0.904 0.001 0.843 0.003 MC SG 0.868 0.002 0.664 0.006 0.877 0.003 0.768 0.004 0.903 0.001 0.843 0.002 CFL G 0.885 0.001 0.712 0.002 0.905 0.000 0.812 0.002 0.918 0.001 0.870 0.002 CFL C 0.883 0.001 0.708 0.002 0.901 0.000 0.800 0.001 0.919 0.001 0.879 0.002 S and G indicate the corpus used.",
"pus can be a general one (the entire arXiv corpus, denoted as G), or a domain-specific one (the sub-corpus in the branch of the corresponding domain, denoted as S).",
"We also apply compositional GloVe embeddings (Pennington et al., 2014) (element-wise addition of the pre-trained 100d word embeddings, denoted as C) as non-corpus-specific features of terms for reference.",
"For all the neural network-based models, we use Adam (Kingma and Ba, 2015) with learning rate of 0 .",
"01 for optimization, and adopt a fixed hidden dimensionality of 256 and a fixed dropout ratio of 0 .",
"5 .",
"For the learning part of CFL and HiCFL, we apply two GCNConv layers and use the symmetric graph for training.",
"To avoid overfitting, we adopt batch normalization (Ioffe and Szegedy, 2015) right after each layer (except for the output layer) and before activation and apply dropout (Hinton et al., 2012) after the activation.",
"We also try to add reg-ularizations for MLP and MC with full-batch or mini-batch training, and select the best architecture.",
"To construct the core-anchored semantic graph, we set k as 5 .",
"All experiments are run on an NVIDIA Quadro RTX 5000 with 16GB of memory under the PyTorch framework.",
"The training of CFL for the CS domain can finish in 1 minute.",
"To compare with baselines, we separate a portion of core terms as queries for evaluation.",
"Specifically, for each domain, we use 80% labeled terms for training, 10% for validation, and 10% for testing (with automatic annotation).",
"Terms in the validation and testing sets are treated as fringe terms.",
"By doing this, the evaluation can represent the general performance for all fringe terms to some extent.",
"And the model comparison is fair since the rich information of terms for evaluation is not used in training.",
"We also create a test set with careful human annotation on machine learning to support our overall evaluation, which contains 2000 terms, with half for evaluation and half for testing.",
"As evaluation metrics, we calculate both ROC-AUC and PR-AUC with automatic or manually created labels.",
"ROC-AUC is the area under the receiver operating characteristic curve, and PR-AUC is the area under the precision-recall curve.",
"If a model achieves higher values, most of the domain-relevant terms are ranked higher, which means the model has a better measurement on the domain relevance of terms.",
"Table 2 and Table 3 show the results for three broad/narrow domains respectively.",
"We observe our proposed CFL and HiCFL outperform all the baselines, and the standard deviations are low.",
"Compared to MLP, CFL achieves much better performance benefiting from the core-anchored semantic graph and feature aggregation, which demonstrates the domain relevance can be bridged via term relevance.",
"Compared to CFL, HiCFL works better owing to hierarchical learning.",
"In the PU setting the situation when automatic annotation is not applied to the target domain, although only 20 positives are given, HiCFL still achieves satisfactory performance and significantly outperforms all the baselines (Table 4).",
"The PR-AUC scores on the manually created test Machine Learning Quantum Mechanics Abstract Algebra ROC-AUC PR-AUC ROC-AUC PR-AUC ROC-AUC PR-AUC LR G 0.917 0.000 0.346 0.000 0.879 0.000 0.421 0.000 0.872 0.000 0.525 0.000 MLP S 0.902 0.001 0.453 0.009 0.903 0.001 0.545 0.004 0.910 0.000 0.641 0.007 MLP G 0.932 0.001 0.562 0.010 0.922 0.001 0.587 0.014 0.923 0.000 0.658 0.006 MLP SG 0.928 0.001 0.574 0.011 0.923 0.000 0.574 0.007 0.925 0.001 0.673 0.004 MC SG 0.928 0.002 0.554 0.007 0.924 0.001 0.590 0.003 0.924 0.001 0.685 0.005 CFL G 0.950 0.002 0.627 0.013 0.950 0.000 0.678 0.003 0.938 0.001 0.751 0.009 HiCFL G 0.965 0.003 0.645 0.014 0.957 0.001 0.691 0.003 0.942 0.002 0.769 0.006 S and G indicate the corpus used.",
"set without and with the PU setting are reported in Table 5.",
"We observe that the results are generally consistent with results reported in Table 3 and Table 4, which indicates the evaluation with core terms can work just as well.",
"In this section, we aim to compare our model with human professionals in measuring the fine-grained domain relevance of terms.",
"Because it is diffi-cult for humans to assign a score representing do-ML-AI ML-CS AI-CS Human 0.698 0.087 0.846 0.074 0.716 0.115 HiCFL 0.854 0.017 0.932 0.007 0.768 0.023 Table 6: Accuracies of domain relevance comparison.",
"main relevance directly, we generate term pairs as queries and let humans judge which one in a pair is more relevant to machine learning .",
"Specifically, we create 100 ML-AI, ML-CS, and AI-CS pairs respectively.",
"Taking ML-AI as an example, each query pair consists of an ML term and an AI term, and the judgment is considered right if the ML term is selected.",
"The human annotation is conducted by five se-nior students majoring in computer science and doing research related to terminology.",
"Because there is no clear boundary between ML, AI, and CS, it is possible that a CS term is more relevant to machine learning than an AI term.",
"However, the overall trend is that the higher the accuracy, the better the performance.",
"From Table 6, we observe that HiCFL far outperforms human performance.",
"The depth of the background color indicates the domain relevance.",
"The darker the color, the higher the domain relevance (annotated by the authors); * indicates the term is a core term, otherwise it is a fringe term.",
"We interpret our results by ranking terms according to their domain relevance regarding machine learning or deep learning , with hierarchy CS AI ML DL.",
"For CS-ML, we label terms with automatic annotation.",
"For DL, we create 10 DL terms manually as the positives for PU learning.",
"Table 7 and Table 8 show the ranking results (1-10 represents terms ranked 1 st to 10 th).",
"We observe the performance is satisfactory.",
"For ML, important concepts such as supervised learning, unsupervised learning, and deep learning are ranked very high.",
"Also, terms ranked before 1010 th are all good domain-relevant terms.",
"For DL, although only 10 positives are provided, the ranking results are quite impressive.",
"E.g., unlabeled positive terms like artificial neural network, generative adversarial network, and neural architecture search are ranked very high.",
"Besides, terms ranked 101 st to 110 th are all highly relevant to DL, and terms ranked 1001 st to 1010 th are related to ML.",
"We introduce and study the fine-grained domain relevance of terms an important property of terms that has not been carefully studied before.",
"We propose a hierarchical core-fringe domain relevance learning approach, which can cover almost all terms in human languages and various domains, while requires little or even no human annotation.",
"We believe this work will inspire an automated solution for knowledge management and help a wide range of downstream applications in natural language processing.",
"It is also interesting to integrate our methods to more challenging tasks, for example, to characterize more complex properties of terms even understand terms.",
"We thank the anonymous reviewers for their valuable comments and suggestions.",
"This material is based upon work supported by the National Science Foundation IIS 16-19302 and IIS 16-33755, Zhejiang University ZJU Research 083650, IBM-Illinois Center for Cognitive Computing Systems Research (C3SR) a research collaboration as part of the IBM Cognitive Horizon Network, grants from eBay and Microsoft Azure, UIUC OVCR CCIL Planning Grant 434S34, UIUC CSBS Small Grant 434C8U, and UIUC New Frontiers Initiative.",
"Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the funding agencies."
] | [
"objective",
"abstain",
"method",
"objective",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"Unsupervised clustering aims at discovering the semantic categories of data according to some distance measured in the representation space.",
"However, different categories often overlap with each other in the representation space at the beginning of the learning process, which poses a significant challenge for distance-based clustering in achieving good separation between different categories.",
"To this end, we propose Supporting Clustering with Contrastive Learning (SCCL) a novel framework to leverage contrastive learning to promote better separation.",
"We assess the performance of SCCL on short text clustering and show that SCCL significantly advances the state-of-the-art results on most benchmark datasets with 3% 11% improvement on Accuracy and 4% 15% improvement on Normalized Mutual Information.",
"Furthermore, our quantitative analysis demonstrates the effectiveness of SCCL in leveraging the strengths of both bottom-up instance discrimination and top-down clustering to achieve better intra-cluster and inter-cluster distances when evaluated with the ground truth cluster labels 1 .",
"Clustering, one of the most fundamental challenges in unsupervised learning, has been widely studied for decades.",
"Long established clustering methods such as K-means (MacQueen et al., 1967; Lloyd, 1982) and Gaussian Mixture Models (Celeux and Govaert, 1995) rely on distance measured in the data space, which tends to be ineffective for high-dimensional data.",
"On the other hand, deep neural networks are gaining momentum as an effective way to map data to a low dimensional and hopefully better separable representation space.",
"Many recent research efforts focus on integrating clustering with deep representation learning 1 We plan to open source our implementation.",
"by optimizing a clustering objective defined in the representation space (Xie et al., 2016; Jiang et al., 2016; Zhang et al., 2017a; Shaham et al., 2018).",
"Despite promising improvements, the clustering performance is still inadequate, especially in the presence of complex data with a large number of clusters.",
"As illustrated in Figure 1, one possible reason is that, even with a deep neural network, data still has significant overlap across categories before clustering starts.",
"Consequently, the clusters learned by optimizing various distance or similarity based clustering objectives suffer from poor purity.",
"On the other hand, Instance-wise Contrastive Learning (Instance-CL) (Wu et al., 2018; Bachman et al., 2019; He et al., 2020; Chen et al., 2020a,b) has recently achieved remarkable success in self-supervised learning.",
"Instance-CL usually optimizes on an auxiliary set obtained by data augmentation.",
"As the name suggests, a contrastive loss is then adopted to pull together samples augmented from the same instance in the original dataset while pushing apart those from different ones.",
"Essentially, Instance-CL disperses different instances apart while implicitly bringing similar instances together to some extent (see Figure 1).",
"This beneficial property can be leveraged to support clustering by scattering apart the overlapped categories.",
"Then clustering, thereby better separates different clusters while tightening each cluster by explicitly bringing samples in that cluster together.",
"To this end, we propose Supporting Clustering with Contrastive Learning (SCCL) by jointly optimizing a top-down clustering loss with a bottom-up instance-wise contrastive loss.",
"We assess the performance of SCCL on short text clustering, which has become increasingly important due to the popularity of social media such as Twitter and Instagram.",
"It benefits many real-world applications, including topic discovery (Kim et al., 2013), recommendation (Bouras and Tsogkas, 2017), and visualization (Sebrechts et al., 1999).",
"However, the weak signal caused by noise and sparsity poses a significant challenge for clustering short texts.",
"Although some improvement has been achieved by leveraging shallow neural networks to enrich the representations (Xu et al., 2017; Hadifar et al., 2019), there is still large room for improvement.",
"Our main contributions are the following: We propose a novel end-to-end framework for unsupervised clustering, which advances the state-of-the-art results on various short text clustering datasets by a large margin.",
"Furthermore, our model is much simpler than the existing deep neural network based short text clustering approaches that often require multistage independent training.",
"We provide in-depth analysis and demonstrate how SCCL effectively combines the top-down clustering with the bottom-up instance-wise contrastive learning to achieve better inter-cluster distance and intra-cluster distance.",
"We explore various text augmentation techniques for SCCL, showing that, unlike the image domain (Chen et al., 2020a), using composition of augmentations is not always beneficial in the text domain.",
"Self-supervised learning Self-supervised learning has recently become prominent in providing effective representations for many downstream tasks.",
"Early work focuses on solving different artificially designed pretext tasks, such as predicting masked tokens (Devlin et al., 2019), generating future tokens (Radford et al., 2018), or denoising corrupted tokens (Lewis et al., 2019) for textual data, and predicting colorization (Zhang et al., 2016), rotation (Gidaris et al., 2018), or relative patch position (Do-ersch et al., 2015) for image data.",
"Nevertheless, the resulting representations are tailored to the specific pretext tasks with limited generalization.",
"Many recent successes are largely driven by instance-wise contrastive learning.",
"Inspired by the pioneering work of Becker and Hinton (1992); Bromley et al. (1994), Instance-CL treats each data instance and its augmentations as an independent class and tries to pull together the representations within each class while pushing apart different classes (Dosovitskiy et al., 2014; Oord et al., 2018; Bachman et al., 2019; He et al., 2020; Chen et al., 2020a,b).",
"Consequently, different instances are well-separated in the learned embedding space with local invariance being preserved for each instance.",
"Although Instance-CL may implicitly group similar instances together (Wu et al., 2018), it pushes representations apart as long as they are from different original instances, regardless of their semantic similarities.",
"Thereby, the implicit grouping effect of Instance-CL is less stable and more data-dependent, giving rise to worse representations in some cases (Khosla et al., 2020; Li et al., 2020; Purushwalkam and Gupta, 2020).",
"Short Text Clustering Compared with the general text clustering problem, short text clustering comes with its own challenge due to the weak signal contained in each instance.",
"In this scenario, BoW and TF-IDF often yield very sparse representation vectors that lack expressive ability.",
"To remedy this issue, some early work leverages neural networks to enrich the representations (Xu et al., 2017; Hadifar et al., 2019), where word embeddings (Mikolov et al., 2013b; Arora et al., 2017) are adopted to further enhance the performance.",
"However, the above approaches divide the learning process into multiple stages, each requiring independent optimization.",
"On the other hand, despite the tremendous successes achieved by contextualized word embeddings (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2018; Reimers and Gurevych, 2019b), they have been left largely unexplored for short text clustering.",
"In this work, we leverage the pretrained transformer as the back-Figure 2: Training framework SCCL.",
"During training, we jointly optimize a clustering loss over the original data instances and an instance-wise contrastive loss over the associated augmented pairs.",
"bone, which is optimized in an end-to-end fashion.",
"As demonstrated in Section 4, we advance the state-of-the-art results on most benchmark datasets with 3% 11% improvement on Accuracy and 4% 15% improvement on NMI.",
"We aim at developing a joint model that leverages the beneficial properties of Instance-CL to improve unsupervised clustering.",
"As illustrated in Figure 2, our model consists of three components.",
"A neural network ( ) first maps the input data to the representation space, which is then followed by two different heads g ( ) and f ( ) where the contrastive loss and the clustering loss are applied, respectively.",
"Please refer to Section 4 for details.",
"Our data consists of both the original and the augmented data.",
"Specifically, for a randomly sampled minibatch B = { x i } Mi =1 , we randomly generate a pair of augmentations for each data instance in B , yielding an augmented batch B a with size 2 M , denoted as B a = { x i } 2 Mi =1 .",
"For each minibatch B , the Instance-CL loss is defined on the augmented pairs in B a .",
"Let i 1 { 1 , . . . , 2 M } denote the index of an arbitrary instance in augmented set B a , and let i 2 { 1 , . . . , 2 M } be the index of the other instance in B a augmented from the same instance in the original set B .",
"We refer to x i 1 , x i 2 B a as a positive pair, while treating the other 2 M -2 examples in B a as negative instances regarding this positive pair.",
"Let z i 1 and z i 2 be the corresponding outputs of the head g , i.e., z j = g ( ( x j )) , j = i 1 , i 2 .",
"Then for x i 1 , we try to separate x i 2 apart from all negative instances in B a by minimizing the following (cid:96) Ii 1 = log exp( sim ( z i 1 , z i 2 ) / ) (cid:80) 2 Mj =1 1 j (cid:54) = i 1 exp( sim ( z i 1 , z j ) / ) .",
"Here 1 j (cid:54) = i 1 is an indicator function and denotes the temperature parameter which we set as 0 .",
"5 .",
"Following Chen et al. (2020a), we choose sim ( ) as the dot product between a pair of normalized outputs, i.e., sim ( z i , z j ) = z Ti z j / (cid:107) z i (cid:107) 2 (cid:107) z j (cid:107) 2 .",
"The Instance-CL loss is then averaged over all instances in B a , L Instance-CL = 2 M (cid:88) i =1 (cid:96) Ii / 2 M .",
"To explore the above contrastive loss in the text domain, we explore three different augmentation strategies in Section 4.3.1, where we find contextual augmenter (Kobayashi, 2018; Ma, 2019) consistently performs better than the other two.",
"We simultaneously encode the semantic categorical structure into the representations via unsupervised clustering.",
"Unlike Instance-CL, clustering focuses on the high-level semantic concepts and tries to bring together instances from the same semantic category together.",
"Suppose our data consists of K semantic categories, and each category is characterized by its centroid in the representation space, denoted as k , k { 1 , . . . , K } .",
"Let e j = ( x j ) denote the representation of instance x j in the original set B .",
"Following Maaten and Hinton (2008), we use the Student's t-distribution to compute the probability of assigning x j to the k th cluster, q jk = (cid:0) 1 + (cid:107) e j k (cid:107) 22 / (cid:1) +12 (cid:80) Kk (cid:48) =1 (cid:0) 1 + (cid:107) e j k (cid:48) (cid:107) 22 / (cid:1) +12 .",
"Here denotes the degree of freedom of the Student's t-distribution.",
"Without explicit mention, we follow Maaten and Hinton (2008) by setting = 1 in this paper.",
"cluster, and we iteratively refine it by leveraging an auxiliary distribution proposed by Xie et al. (2016).",
"Specifically, let p jk denote the auxiliary probability defined as p jk = q 2 jk /f k (cid:80) k (cid:48) q 2 jk /f k (cid:48) .",
"Here f k = (cid:80) Mj =1 q jk , k = 1 , . . . , K can be interpreted as the soft cluster frequencies approximated within a minibatch.",
"This target distribution first sharpens the soft-assignment probability q jk by raising it to the second power, and then normalizes it by the associated cluster frequency.",
"By doing so, we encourage learning from high confidence cluster assignments and simultaneously combating the bias caused by imbalanced clusters.",
"We push the cluster assignment probability towards the target distribution by optimizing the KL divergence between them, (cid:96) Cj = KL [ p j (cid:107) q j ] = K (cid:88) k =1 p jk log p jk q jk .",
"(5) The clustering objective is then followed as L Cluster = M (cid:88) j =1 (cid:96) Cj /M (6) This clustering loss is first proposed in Xie et al. (2016) and later adopted by Hadifar et al. (2019) for short text clustering.",
"However, they both require expensive layer-wise pretraining of the neural network, and update the target distribution (Eq (4)) through carefully chosen intervals that often vary across datasets.",
"In contrast, we simplify the learning process to end-to-end training with the target distribution being updated per iteration.",
"objective is, L = L Instance-CL + L Cluster = M (cid:88) j =1 (cid:96) Cj /M + 2 M (cid:88) i =1 (cid:96) Ii / 2 M .",
"(7) (cid:96) Cj and (cid:96) Ii are defined in Eq (5) and Eq (2), respectively.",
"balances between the contrastive loss and the clustering loss of SCCL, which we set as 10 in Section 4 for simplicity.",
"Also noted that, the clustering loss is optimized over the original data only.",
"Alternatively, we can also leverage the augmented data to enforce local consistency of the cluster assignments for each instance.",
"We discuss this further in Appendix A.3.",
"Implementation We implement our model in Py-Torch (Paszke et al., 2017) with the Sentence Transformer library (Reimers and Gurevych, 2019a).",
"We choose distilbert-base-nli-stsb-mean-tokens as the backbone, followed by a linear clustering head ( f ) of size 768 K with K indicating the number of clusters.",
"For the contrastive loss, we optimize an MLP ( g ) with one hidden layer of size 768, and output vectors of size 128.",
"Figure 2 provides an illustration of our model.",
"The detailed experimental setup is provided in Appendix A.1.",
"We, as in the previous work Xu et al. (2017); Hadifar et al. (2019); Rakib et al. (2020), adopt Accuracy (ACC) and Normalized Mutual Information (NMI) to evaluate different approaches.",
"Datasets We assess the performance of the proposed SCCL model on eight benchmark datasets for short text clustering.",
"Table 2 provides an overview of the main statistics, and the details of each dataset are as follows.",
"SearchSnippets is extracted from web search snippets, which contains 12,340 snippets associated with 8 groups Phan et al. (2008).",
"StackOverflow is a subset of the challenge data published by Kaggle 2 , where 20,000 question titles associated with 20 different categories are selected by Xu et al. (2017).",
"Biomedical is a subset of the PubMed data distributed by BioASQ 3 , where 20,000 paper titles from 20 groups are randomly selected by Xu et al. (2017).",
"AgNews is a subset of news titles (Zhang and LeCun, 2015), which contains 4 topics selected by Rakib et al. (2020).",
"Tweet consists of 2,472 tweets with 89 categories (Yin and Wang, 2016).",
"GoogleNews contains titles and snippets of 11,109 news articles related to 152 events (Yin and Wang, 2016).",
"Following (Rakib et al., 2020), we name the full dataset as GoogleNews-TS , and GoogleNews-T and GoogleNews-S are obtained by extracting the titles and the snippets, respectively.",
"For each dataset, we use Contextual Augmenter (Kobayashi, 2018; Ma, 2019) to obtain the augmentation set, as it consistently outperforms the other options explored in Section 4.3.1.",
"We first demonstrate that our model can achieve state-of-the-art or highly competitive performance on short text clustering.",
"For comparison, we consider the following baselines.",
"STCC (Xu et al., 2017) consists of three independent stages.",
"For each dataset, it first pre-trains a word embedding on a large in-domain corpus using the Word2Vec method (Mikolov et al., 2013a).",
"A convolutional neural network is then optimized to further enrich the representations that are fed into K-means for the final stage clustering.",
"Self-Train (Hadifar et al., 2019) enhances the pretrained word embeddings in Xu et al. (2017) using SIF (Arora et al., 2017).",
"Following Xie et al. (2016), it adopts an autoencoder obtained by layer-wise pretraining (Van Der Maaten, 2009), which is then further tuned with a clustering objective same as that in Section 3.2.",
"Both Xie et al. (2016) and Hadifar et al. (2019) update the target distribution through carefully chosen intervals that vary across datasets, while we update it per iteration yet still achieve significant improvement.",
"HAC-SD (Rakib et al., 2020) 4 applies hierarchical agglomerative clustering on top of a sparse pairwise similarity matrix obtained by zeroing-out similarity scores lower than a chosen threshold value.",
"To demonstrate that our model is robust against the noisy input that often poses a significant chal-4 They further boost the performance via an iterative classification trained with high-confidence pseudo labels extracted after each round of clustering.",
"Since the iterative classification strategy is orthogonal to the clustering algorithms, we only evaluate against with their proposed clustering algorithm for fair comparison.",
"lenge for short text clustering, we do not apply any pre-processing procedures on any of the eight datasets.",
"In contrast, all baselines except BoW and TF-IDF considered in this paper either preprocessed the Biomedical dataset (Xu et al., 2017; Hadifar et al., 2019) or all eight datasets by removing the stop words, punctuation, and converting the text to lower case (Rakib et al., 2020).",
"We report the comparison results in Table",
"1. Our SCCL model outperforms all baselines by a large margin on most datasets.",
"Although we are lagging behind Hadifar et al. (2019) on Biomedical, SCCL still shows great promise considering the fact that Biomedical is much less related to the general domains on which the transformers are pretrained.",
"In contrast, Hadifar et al. (2019) learn the word embeddings on a large in-domain biomedical corpus, followed by a layer-wise pretrained autoencoder to further enrich the representations.",
"Rakib et al. (2020) also shows better Accuracy on Tweet and GoogleNews-T, for which we hypothesize two reasons.",
"First, both GoogleNews and Tweet have fewer training examples with much more clusters.",
"Thereby, it's challenging for instance-wise contrast learning to manifest its advantages, which often requires a large training dataset.",
"Second, as implied by the clustering perfer-mance evaluated on BoW and TF-IDF, clustering GoogleNews and Tweet is less challenging than clustering the other four datasets.",
"Hence, by applying agglomerative clustering on the carefully selected pairwise similarities of the preprocessed data, Rakib et al. (2020) can achieve good performance, especially when the text instances are very short, i.e., Tweet and GoogleNews-T.",
"We also highlight the scalability of our model to large scale data, whereas agglomerative clustering often suffers from high computation complexity.",
"We discuss this further in Appendix A.5.",
"To better validate our model, we run ablations in this section.",
"For illustration, we name the clustering component described in Section 3.2 as Clustering.",
"Besides Instance-CL and Clustering, we also evaluate SCCL against its sequential version (SCCL-Seq) where we first train the model with Instance-CL, and then optimize it with Clustering.",
"As shown in Figure 3, Instance-CL also groups semantically similar instances together.",
"However, this grouping effect is implicit and data-dependent.",
"In contrast, SCCL consistently outperforms both Instance-CL and Clustering by a large margin.",
"Furthermore, SCCL also achieves better performance than its sequential version, SCCL-Seq.",
"The result validates the effectiveness and importance of the proposed joint optimization framework in leveraging the strengths of both Instance-CL and Clustering to compliment each other.",
"To further investigate what enables the better performance of SCCL, we track both the intra-cluster distance and the inter-cluster distance evaluated in the representation space throughout the learning process.",
"For a given cluster, the intra-cluster distance is the average distance between the centroid and all samples grouped into that cluster, and the inter-cluster distance is the distance to its closest neighbor cluster.",
"In Figure 4, we report each type Dataset Accuracy NMI WNet Para Ctxt WNet Para Ctxt AgNews 86.6 86.5 88.2 66.0 65.2 68.2 SearchSnippets 78.1 83.7 85.0 61.9 68.1 71.0 StackOverflow 69.1 73.3 75.5 69.9 72.7 74.5 Biomedical 42.8 43.0 46.2 38.0 39.5 41.5 GooglenewsTS 82.1 83.5 89.8 92.1 92.9 94.9 GooglenewsS 73.0 75.3 83.1 86.4 87.4 90.4 GooglenewsT 66.3 67.5 73.9 83.4 83.6 87.5 Tweet 70.6 73.7 78.2 86.2 86.4 89.2 Table 3: Results of SCCL evaluated with different augmentation techniques: WordNet augmenter ( WNet ), paraphrase via back translation ( Para ), and contextual augmenter ( Ctxt ).",
"of distance with its mean value obtained by averaging over all clusters, where the clusters are defined either regarding the ground truth labels (solid lines) or the labels predicted by the model (dashed lines).",
"Figure 4 shows Clustering achieves smaller intra-cluster distance and larger inter-cluster distance when evaluated on the predicted clusters.",
"It demonstrates the ability of Clustering to tight each self-learned cluster and separate different clusters apart.",
"However, we observe the opposite when evaluated on the ground truth clusters, along with poor Accuracy and NMI scores.",
"One possible explanation is, data from different ground-truth clusters often have significant overlap in the embedding space before clustering starts (see upper left plot in Figure 1), which makes it hard for our distance-based clustering approach to separate them apart effectively.",
"Although the implicit grouping effect allows Instance-CL attains better Accuracy and NMI scores, the resulting clusters are less apart from each other and each cluster is more dispersed, as indicated by the smaller inter-cluster distance and larger intra-cluster distance.",
"This result is unsurprising since Instance-CL only focuses on instance discrimination, which often leads to a more dispersed embedding space.",
"In contrast, we leverage the strengths of both Clustering and Instance-CL to compliment each other.",
"Consequently, Figure 4 shows SCCL leads to better separated clusters with each cluster being less dispersed.",
"To study the impact of data augmentation, we explore three different unsupervised text augmentations: (1) WordNet Augmenter 5 transforms an input text by replacing its words with WordNet synonyms (Morris et al., 2020; Ren et al., 2019).",
"(2) Contextual Augmenter 6 leverages the pretrained transformers to find top-n suitable words of the input text for insertion or substitution (Kobayashi, 2018; Ma, 2019).",
"We augment the data via word substitution, and we choose Bertbase and Roberta to generate the augmented pairs.",
"(3) Paraphrase via back translation 7 generates paraphrases of the input text by first translating it to another language (French) and then back to English.",
"When translating back to English, we used the mixture of experts model (Shen et al., 2019) to generate ten candidate paraphrases per input to increase diversity.",
"For both WordNet Augmenter and Contextual Augmenter , we try three different settings by choosing the word substitution ratio of each text instance 5 https://github.com/QData/TextAttack 6 https://github.com/makcedward/nlpaug 7 https://github.com/pytorch/fairseq/ tree/master/examples/paraphraser Figure 5: Impact of using composition of data augmentations.",
"to 10% , 20% , and 30% , respectively.",
"As for Paraphrase via back translation , we compute the BLEU score between each text instance and its ten candidate paraphrases.",
"We then select three pairs, achieving the highest, medium, and lowest BLEU scores, from the ten condidates of each instance.",
"The best results 8 of each augmentation technique are summarized in Table 3, where Contexual Augmenter substantially outperforms the other two.",
"We conjecture that this is due to both Contextual Augmenter and SCCL leverage the pretrained transformers as backbones, which allows Contextual Augmenter to generate more informative augmentations.",
"Figure 5 shows the impact of using composition of data augmentations, in which we explored Contextual Augmenter and CharSwap Augmenter 9 (Mor-ris et al., 2020).",
"As we can see, using composition of data augmentations does boost the performance of SCCL on GoogleNews-TS where the average number of words in each text instance is 28 (see Table 2).",
"However, we observe the opposite on StackOverflow where the average number of words in each instance is 8.",
"This result differs from what has been observed in the image domain where using composition of data augmentations is crucial for contrastive learning to attain good performance.",
"Possible explanations is that generating high-quality augmentations for textual data is more challenging, since changing a single word can invert the semantic meaning of the whole instance.",
"This challenge is compounded when a second round of augmentation is applied on very 8 Please refer to Appendix A.2 for details.",
"9 A simple technique that augments text by substituting, deleting, inserting, and swapping adjacent characters short text instances, e.g., StackOverflow.",
"We further demonstrate this in Figure 5 (right), where the augmented pairs of StackOverflow largely diverge from the original texts in the representation space after the second round of augmentation.",
"We have proposed a novel framework leveraging instance-wise contrastive learning to support unsupervised clustering.",
"We thoroughly evaluate our model on eight benchmark short text clustering datasets, and show that our model either substantially outperforms or performs highly comparably to the state-of-the-art methods.",
"Moreover, we conduct ablation studies to better validate the effectiveness of our model.",
"We demonstrate that, by integrating the strengths of both bottom-up instance discrimination and top-down clustering, our model is capable of generating high-quality clusters with better intra-cluster and inter-clusters distances.",
"Although we only evaluate our model on short text data, the proposed framework is generic and is expected to be effective for various kinds of text clustering problems.",
"In this work, we explored different data augmentation strategies with extensive comparisons.",
"However, due to the discrete nature of natural language, designing effective transformations for textual data is more challenging compared to the counterparts in the computer vision domain.",
"One promising direction is leveraging the data mixing strategies (Zhang et al., 2017b) to either obtain stronger augmentations (Kalantidis et al., 2020) or alleviate the heavy burden on data augmentation (Lee et al., 2020).",
"We leave this as future work."
] | [
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"We propose a practical instant question answering (QA) system on product pages of e-commerce services, where for each user query, relevant community question answer (CQA) pairs are retrieved.",
"User queries and CQA pairs differ significantly in language characteristics making relevance learning difficult.",
"Our proposed transformer-based model learns a robust relevance function by jointly learning unified syntactic and semantic representations without the need for human labeled data.",
"This is achieved by distantly supervising our model by distilling from predictions of a syntactic matching system on user queries and simultaneously training with CQA pairs.",
"Training with CQA pairs helps our model learning semantic QA relevance and distant supervision enables learning of syntactic features as well as the nuances of user querying language.",
"Additionally, our model encodes queries and candidate responses independently allowing offline candidate embedding generation thereby minimizing the need for real-time transformer model execution.",
"Consequently, our framework is able to scale to large e-commerce QA traffic.",
"Extensive evaluation on user queries shows that our framework significantly outperforms both syntactic and semantic baselines in offline as well as large scale online A/B setups of a popular e-commerce service.",
"Product pages on an e-commerce service (eg. Amazon) are often overloaded with information.",
"Customers wanting to search for a piece of specific information about a product find it difficult to sift This work was done while author was in Community Shopping team.",
"through.",
"To address this issue most services provide an instant QA system on the product pages enabling users to type their query and get instant answers curated from various sources present on the page.",
"Figure 1 shows the QA widget on Amazon, and the three sources viz.",
"Product information (eg: bullet points, technical specifications etc.), Customer Q&A's (where customers/sellers provide an answer to the posted questions by customers, henceforth called community QA or CQA section), and Customer reviews from where a response is generated.",
"In this paper, we focus on retrieving responses Figure 1: Instant QA widget on Amazon from the CQA section.",
"Hence our goal is to learn a robust relevance function between user queries and CQA pairs.",
"Notably, these two domains differ significantly in language characteristics.",
"User queries are typically short, often ill-formed and incomplete, whereas CQA pairs tend to be more complete and well-formed.",
"For example, \"Bettry perfon\" is a user query where the intended question probably was \"how is the battery performance?\".",
"Furthermore, we analyzed CQA section along with 3 months user query logs of a popular e-commerce service and found that the data statistics such as length, vocabulary overlap (between user queries and CQA) indicate that the domains are quite different.",
"Consequently, relevance learning for this task is difficult.",
"Table 1 characterizes these differences for 4 different locales: Canada (CA), Germany (DE), France (FR), and IN (India).",
"Existing QA systems typically work by retrieving a set of candidates for a user query using syntactic features (eg. BM25 that uses bag of words features) followed by a semantic answer selection/re-ranking step (Chen et al., 2017).",
"Some approaches include semantic features in the candidate generation step (Mitra and Craswell, 2019).",
"Syntactic systems fail in two cases: (1) when there are no word overlaps (a likely scenario as user queries have limited vocabulary overlap with CQA pairs), and (2) when the word overlaps are semantically irrelevant.",
"While adding semantic features or semantic re-ranking models mitigate some of the drawbacks, however, training a robust semantic relevance model to match user queries with CQA pairs is difficult due to the lack of human-labeled data.",
"An additional challenge is that the instant QA system needs to provide real-time responses to users and must scale to the very large traffic of mod-ern e-commerce systems.",
"Running deep models online (typical in case of re-ranking) is prohibitive for such a system.",
"In this paper, we present an instant QA system with two main contributions: (1) our framework is able to learn a robust relevance function between user queries and CQA pairs by jointly learning semantic and syntactic features-aware representations without the need for explicit human-labeled data, and (2) our framework minimizes the need for real-time model execution by encoding the CQA pairs offline, enabling large scale online deployment.",
"We chose BERT (Devlin et al., 2019) as our transformer encoder due to its recent success in various natural language understanding (NLU) tasks including QA.",
"To address the lack of labeled training data challenge, we use the QA pairs from the CQA section of each product page as training data.",
"However, as shown in our evaluation (section 4.3), such a model does not work well on the user queries asked on the instant QA system on the product pages.",
"We propose a distillation-based distantly supervised training algorithm where we use the answers retrieved by a syntactic match system on a set of user queries asked on the instant QA system.",
"This training helps the model adapt to the specific task at hand by learning the user query distribution as well as the strengths of a traditional syntactic match system.",
"This coupled with training on CQA pairs helps our model learn a robust semantic model that is task aware.",
"Our training data does not require any explicit human labeling.",
"To make our system work in real-time we train the BERT model in Siamese style (Reimers and Gurevych, 2019) with triplets consisting of query, relevant candidate (+ve sample), and irrelevant candidate (-ve sample).",
"Hence the query and candidate responses are encoded independently using the same transformer encoder enabling embedding computation of all candidates (across all products) offline.",
"At real-time, only the user query needs to be embedded using the heavy semantic model resulting in a significant reduction of online compute cost.",
"In contrast, the common practice of using BERT in QA problems is to concatenate the query and a candidate response and run BERT on the fused input.",
"This would require BERT to run on all query, candidate CQA pairs on product pages real-time making it prohibitive for online deployment.",
"Additionally, we combine the two embeddings (question and answer) in each CQA pair to form one embedding per pair allowing us to reduce the offline storage significantly.",
"We extensively evaluate our framework on user queries asked on the instant QA system at a popular e-commerce system in 4 locales spanning 3 languages.",
"Offline evaluation shows that our proposed framework is able to increase the area under the precision-recall curve (PR-AUC) by up to 12.15% over the existing system.",
"Also in an online A/B test, our system is able to improve coverage by up to 6.92% by complementing the existing system.",
"QA Systems: Question Answering (QA) is a fundamental task in the Natural Language Understanding (NLU) domain.",
"Broadly QA systems can be categorized into open-domain QA and closed-domain QA.",
"Open-domain QA involves answering questions related to all topics from a huge repository of information such as the Web (Voorhees and Tice, 1999), Wikipedia corpus (Yang et al., 2015), Knowledge Bases (Bollacker et al., 2008).",
"Closed-domain QA systems usually deal with a specific domain such as medical, sciences etc.",
"The main steps of a QA system are candidate retrieval followed by answer selection/re-ranking (Chen et al., 2017).",
"Some systems do answer generation (Lewis et al., 2020) instead of selection.",
"Semantic Text Encoders: Recently, QA systems have significantly evolved from syntax based (eg. BM25) systems to leverage the power of semantic text representation models.",
"Recurrent Neural Networks (RNN) such as Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) networks were defacto for semantic text representation.",
"Recently proposed self attention based transformer (Vaswani et al., 2017) models show consistent improvement over RNNs on a multitude of NLU tasks such as Machine Translation (MT) (Vaswani et al., 2017), Machine Reading Comprehension (Rajpurkar et al., 2016), GLUE (Devlin et al., 2019) and Natural Language Generation (NLG) tasks (Radford et al., 2019).",
"E-commerce Product QA Systems: E-commerce Product QA systems are similar to domain specific systems.",
"Recently product QA systems are receiving a lot of attention due to their growing usage and unique characteristics such as the search space being specific to each product.",
"Product QA systems are real-time systems where a user types a query and expect instant answers, the queries of such systems are typically short, prone to errors and even incomplete in nature.",
"This coupled with product specific limited search space, often results in no syntactic match between the query and candidate answers, making semantic matching essential.",
"In contrast, the retrieval set for websearch and traditional IR typically is huge and there are always bag-of-words matches that are used to fil-ter down the candidates before running subsequent deep models.",
"Additionally, search and IR systems in e-commerce/web domains get powerful implicit supervision signals through user clicks, however, instant QA on product pages only show the answer with no option to click making it hard to get user feedback based labels.",
"Finally, QA relevance is different from traditional IR relevance (eg. for query what is the material?, the response made of stainless steel is relevant and doesn't require bag-of-words or even synonym matches) making domain specific semantic matching critical.",
"Kulkarni et al. (Kulkarni et al., 2019) propose an embedding based semantic matching model to find relevant answers.",
"Additionally, it uses a query category classifier and an external ontology graph both of which require human generated labels.",
"There are several proposed works (Zhang et al., 2020b, 2019, 2020a; Chen et al., 2019a; McAuley and Yang, 2016; Burke et al., 1997; Gupta et al., 2019) that improve the QA relevance models (usually learned from CQA pairs) by enriching them using information from reviews of the product and capturing their relation with the CQA pairs.",
"Natural language answer generation models are also used in the context of product QA (Deng et al., 2020; Chen et al., 2019b; Gao et al., 2019; Bi et al., 2019).",
"They are typically encoder-decoder architectures and their variants.",
"These models are hard to generalize and often result in factually incorrect text generation.",
"The aforementioned works use reviews and other product information along with CQA section to guide the models to generate answers.",
"In this paper, we take the approach of answer retrieval (instead of generation).",
"We solve the orthogonal problem of how to adapt the relevance model to be aware of the user query characteristics (significantly different from the well formed questions posted in the CQA section) in the absence of human labeled data.",
"The improvement in relevance models (between user queries and CQA pairs) proposed can be easily complemented with the existing review awareness models.",
"A drawback of the aforementioned models is they comprise of multiple deep neural components, many of which need to be run real-time making online model deployment and computation cost prohibitive for large scale deployment.",
"Our framework only needs to encode the user query realtime, all candidate responses are precomputed stored in an index making it amenable to real-time deployment.",
"In this section, we describe our proposed semantic QA system for e-commerce services.",
"Unlike traditional QA systems where multiple models are used sequentially to surface the final response (eg. candidate retrieval, followed by answer selection/re-ranking, followed by span selection), here we use a semantic index and the top results retrieved from the index are the final answers shown to the users.",
"Below we describe the problem definition followed by individual components of our system: 3.1 Problem Statement Given a set of N products, a user query u q on product p and the set of CQA pairs for all products C = {{ Q, A } p } where p { 1 , N } and { Q, A } p = {{ q, a } 1 p , { q, a } 2 p , ... { q, a } np } are the set of n QA pairs for product p , the goal is to find the relevant QA pairs set R C such that { q, a } C, { q, a } can answer u q .",
"We chose the transformer network (Vaswani et al., 2017) as our core text representation model.",
"Transformers are largely successful in QA systems (eg.",
"BERT for MRC (Devlin et al., 2019)), however, the typical approach to use transformers in a QA setting is to create a single input concatenating both the user query and a candidate response, enabling transformers to leverage a full contextual representation through attention mechanisms.",
"Since transformer models are usually very large (hundreds of millions of parameters), this makes it infeasible to run the model real-time on a large candidate set.",
"Our goal in this work is to leverage the strengths of the deep representational power of transformers while being able to scale to a real-time system with large candidate sets.",
"Hence we propose to use transformers in a Siamese network setting similar to Sentence BERT (Reimers and Gurevych, 2019) to embed the query and the candidate responses independently.",
"The same transformer encoder is used to encode both the query as well as the candidate responses (CQA pairs).",
"This enables offline encoding of all CQA pairs and at real-time only, the user query needs to be encoded making the model productionizable at scale.",
"In our model, a sequence of text is encoded first by passing it through the transformer network that generates embeddings for each token in the input sequence.",
"A mean pool (average) of the output token embeddings represents the full sequence.",
"We train our transformer based QA system using the triplet loss (Chechik et al., 2009) that tries to learn a similarity measure between user query, CQA pairs while maximizing the margin between relevant pairs and irrelevant pairs.",
"Such ranking loss has proven effective at numerous ranking tasks (Chechik et al., 2009; Schroff et al., 2015; Wang et al., 2014).",
"The triplet loss for a query q (also known as anchor), a relevant candidate response c + ve , and an irrelevant candidate response c ve , is formally defined as: max (cid:0) || e ( q ) e ( c + ve ) || || e ( q ) e ( c ve ) || + (cid:15), 0 (cid:1) (2) where || || is the Euclidean distance, and (cid:15) is the margin.",
"The goal is to maximize the loss over the triplets of the training set.",
"One of the biggest challenges in training the instant QA system for an e-commerce service is the lack of task specific labeled data.",
"One source of labeled data is the CQA pairs.",
"To create the relevant pairs (positive samples) and irrelevant pairs (neg-ative samples) we adopt the following sampling strategy: (1) we sample user questions (as anchors) from all product pages' CQA section.",
"This ensures the diversity of products in the training data.",
"(2) For each question, we pick a paired answer to that question as the relevant pair.",
"(3) For the same user question, we randomly select negative samples (an-swers from different user questions) both from the same product page and from other product pages.",
"The negatives from the same product page are the hard negatives (as these answers are related to the current product whereas answers from other product pages likely are completely unrelated and easy to distinguish).",
"In future, we wish to explore advanced negative sampling strategies such as Kumar et al. (Kumar et al., 2019) for answer sampling.",
"However, for pages having very few CQA pairs, the number of negative samples becomes small, and adding negative samples from other product pages is useful in such scenarios even though those may be easy negatives.",
"We show (in section 4.3) that such a model learns a good QA relevance function (between community questions and answers), however, it fails to learn a robust relevance function between the typical user queries asked on the instant QA widget and the CQA pairs (candidate responses).",
"The underlying reason is the difference in characteristics of the questions/answers posted in CQA forum (typically long, well-formed, and complete) and the queries asked on the instant answer widget (often short, grammatically incorrect, and ill-formed).",
"Consequently, a model trained to learn relevance between community questions and answers performs very well when the queries are long and well-formed, however, they perform poorly on the queries typically asked by a user on the instant answer widget.",
"To address the aforementioned challenge, we propose a knowledge distillation (Hinton et al., 2015) based training technique that acts as distant supervision on our Siamese transformer network.",
"We collect a random set of user queries asked on the instant QA system and the responses (CQA pairs) generated by the existing syntactic match system from the query logs of a popular e-commerce service.",
"For generating the relevant pairs we take a user query as the anchor question and the answer from the CQA pair retrieved by the existing system.",
"For generating the irrelevant pairs we follow a similar negative strategy as before.",
"The existing syntactic match based system can be thought of as the teacher model and the Siamese transformer model is the student model in the distillation process.",
"This distant supervision helps our semantic model adapt to the nuances of the instant QA system where queries are often short, and incoherent.",
"Additionally, the distant supervision system also helps the semantic model learn the strengths of syntactic match systems.",
"We train our Siamese transformer network with data from both the aforementioned sources (CQA pairs, distilling from predictions of syntactic match based system on real user queries).",
"We explore two strategies for jointly training our model with the two data sources: (1) we mix the data from both sources and train our model with the single triplet loss, and (2) we train our model in a multi-task fashion where there is a task (triplet loss) for each of the two data sources.",
"This joint training of a unified syntactic and semantic representation while adapting to the nuances of user querying language enables our instant QA system to learn a robust task specific relevance function.",
"Hence our instant QA system serves as an end-to-end unified framework for the e-commerce product QA problem.",
"For our proposed model the input is a user query on the instant QA system.",
"The query is embedded in real-time using equation 1 and searched against the candidate vectors (for that specific product) to retrieve the top-k most relevant candidates (where a candidate is an embedding of QA pair from the CQA section of the product).",
"For the top-k search, we use a weighted combination of squared Euclidean distance between the query, question (of CQA pair) embeddings and query, answer (of CQA pair) embeddings.",
"Our relevance score of a query, CQA pair is generated as follows: s ( q, Q, A ) = || e ( q ) e ( Q ) || 2 + (1 ) || e ( q ) e ( A ) || 2 (3) The above expression can be rewritten using linearity of inner products as follows: || e ( q ) || 2 + || e ( Q ) || 2 + (1 ) || e ( A ) || 2 2 (cid:104) e ( q ) , e ( Q ) + (1 ) e ( A ) (cid:105) (4) Here (cid:104) , (cid:105) denotes the inner product between vectors.",
"From the expression in equation 4 we can see that instead of storing e ( Q ) , and e ( A ) separately, we can store the weighted combination of the two vectors e ( Q ) + (1 ) e ( A ) along with two extra scalar dimensions || e ( Q ) || 2 and (1 ) || e ( A ) || 2 and the rest of the terms are query related and are computed real-time.",
"This enables us to reduce the offline index storage by half by storing only one vector per candidate QA pair.",
"Note that to enable such relevance score computation we had to use the square of Euclidean distance (instead of vanilla Euclidean distance) as the relevance scoring function at inference time.",
"We ran experiments both in offline settings as well as in large scale online setups.",
"We evaluated our models across 4 locales with 3 languages to test whether our distant supervision based training approach is able to generalize across languages and varying data characteristics.",
"In this section, we describe the methods that we compare.",
"All methods described below can encode query and candidates independently.",
"Consequently, the candidate index may be computed offline for all of these methods, enabling large scale deployment.",
"BM25: BM25 (Robertson et al., 1994) is the defacto ranking function used in retrieval systems.",
"It relies on a weighted combination of Term Frequency (TF) and Inverted Document Frequency (IDF) matching.",
"The standard form of the scoring function is as follows: bm 25( q, D ) = n (cid:88) i =1 IDF ( q i ) T F ( q i , D )( k + 1) T F ( q i , D ) + k (cid:16) 1 b + b | D | avgdl (cid:17) where, IDF ( q i ) = ln (cid:18) N m ( q i ) + 0 .",
"Here q is the user query consisting of n terms ( q 1 , q 2 , . . . q n ), D is a document (or a sequence of text), T F ( q i , D ) denotes the number of times q i appears in D , | D | denotes the number of terms in document D, avgdl is the average number of terms per document, m ( q i ) is the number of documents containing the term q i , N is the total number of documents in the corpus, and k , b are tunable parameters, which we fixed to 1.5 and 0.75 respectively (Manning et al., 2008).",
"Given the bm 25 function above, we derive the relevance function between a user query, and a CQA pair in a similar fashion as equation 3 as follows: bm 25( q, Q ) + (1 ) bm 25( q, A ) E-commerce Baseline: We use the syntactic feature based existing optimized instant QA system at a popular e-commerce service as a baseline.",
"We collect the query and responses shown by the system from query logs.",
"Sentence-transformers-STS-NLI : We use sentence-transformers (Reimers and Gurevych, 2019, 2020) which are state-of-the-art Siamese style trained transformer models for the general purpose semantic textual similarity (STS) and natural language inference (NLI) task.",
"For English, we use the roberta-large-nli-stsb-mean-tokens 1 model, and for French and German we use the xlm-r-100langs-bert-base-nli-stsb-mean-tokens 1 model as we found them to be the best performing pretrained models.",
"The relevance function is computed in a similar fashion as equation",
"3. SemQA-CQA: Our proposed model trained only with CQA data as described in section",
"3. SemQA-CQA-DS: Our proposed model that was trained with CQA data and distantly supervised with predictions of syntactic match system on user queries as described in section",
"3. 1 https://www.sbert.net/docs/pretrained_models.html 4.2 Training Setup We collect training data from the CQA section and user query logs for CA, DE, FR and IN locales of a popular e-commerce service.",
"For each locale, to generate the CQA triplets and user query triplets (for distant supervision), we use data from CQA section of products, and user query logs and follow the sampling strategy described in section 3.3.",
"The dataset statistics are described in table",
"2. CQA Triplets User Query Triplets CA 5,317,904 1,063,580 DE 5,000,000 4,949,766 FR 1,500,000 173,258 IN 7,176,824 10,641,498 Table 2: Training data statistics We use the bert-base-uncased 2 as the base transformer for our English models (for CA and IN locale), camembert-base (Martin et al., 2020) 3 as the base transformer for FR locale, and bert-base-multilingual-uncased 4 as the base transformer for DE locale.",
"We train our models upto 10 epochs, with a batch size of 16, Adam optimizer with learning rate of 2 e 5 with a schedule of linear warmup of first 10000 steps and then linear decay.",
"We set (cid:15) = 1 in the loss equation 2, and = 0 .",
"4 in the inference equation",
"3. For the joint training (CQA triplets and user query triplets), we have two training runs (data mixing and multi-task as described in section 3.3) per locale and picked the best models (data mixing for CA, FR and multi-task for DE, IN).",
"We use the Pytorch 5 , Huggingface (Wolf et al., 2019) and Sentence-Transformers (Reimers and Gurevych, 2019) libraries to develop our models on an Nvidia V100 GPU and hence our training time per batch and inference time per sample are same as that of Sentence-Transformers with BERT (base-model, 110M parameters).",
"We do offline evaluation of our models under two settings: (1) on CQA test sets collected from the product pages at a popular e-commerce service, and (2) on user queries test set collected from query logs of the instant QA system on product pages of",
"Evaluation on CQA Dataset: The goal of this section is to evaluate the relevance between community questions and answers learned by different approaches.",
"For all locales we randomly sample questions posted on product pages.",
"The paired answers to those questions are considered to be relevant answers and all other answers (from other CQA pairs) of the product are assumed to be irrelevant answers.",
"We only sampled products that at least have 5 CQA pairs posted.",
"For each question, the task is to rank all the candidate answers according to relevance.",
"We report precision@1 (P@1), mean average precision (mAP) and mean reciprocal rank (MRR) in table",
"4. Since there may be multiple paired answers to a community posted question, the rank (for MRR) of a relevant answer is the number of irrelevant answers ranked above it plus one.",
"We observe that both SemQA-CQA and SemQA-CQA-DS are able to significantly outperform other methods.",
"This is expected since both of these methods were trained using CQA data and hence is able to learn a good QA relevance function, whereas the sentence-transformers-STS-NLI were trained using STS and NLI tasks and they failed to generalize.",
"However, CQA pairs are significantly different from the language of user queries and in the next section, we will evaluate on those queries (the main goal of this paper).",
"Evaluation on User Queries: To evaluate on user queries, we sample user queries (and their corresponding top responses) uniformly at random from the query logs of the instant QA system.",
"We also retrieve the top responses generated by the different models we trained.",
"These query, response pairs are labeled as relevant or irrelevant by a team of human annotators.",
"We use the area under the precision recall curve (PR-AUC) as our quality metric.",
"We report the absolute percentage points M0 M1 M2 M3 P@1 CA 39.02 52.87 74.10 73.55 DE 40.82 38.66 73.04 71.65 FR 37.42 42.25 73.85 75.34 IN 26.51 35.67 53.05 53.62 mAP CA 41.02 51.23 73.37 72.46 DE 45.24 43.35 74.80 73.04 FR 41.24 44.46 74.27 75.38 IN 43.93 51.12 72.17 71.89 MRR CA 31.21 42.41 65.17 64.17 DE 37.05 35.48 68.30 66.20 FR 32.26 35.03 66.78 67.37 IN 34.70 41.23 58.44 58.46 Table 4: Evaluation on CQA pairs.",
"change in PR-AUC with respect to the E-commerce Baseline in table 6 (+ve sign implies PR-AUC has improved and -ve sign implies PR-AUC has de-creased).",
"We make the following observations: (1) the vanilla BM25 baseline performs the worst which is expected as it relies solely on syntactic matches and fails to capture semantic intent; (2) both the sentence-transformers-STS-NLI and our SemQA-CQA models fail to generalize validating our hypothesis that learning a general semantic matching model or a QA relevance model is not sufficient to learn the nuances of user querying language; (3) the SemQA-CQA-DS models significantly outperform all other models.There are two underlying reasons for these improvements.",
"Firstly, SemQA-CQA-DS is able to leverage the semantic understanding capabilities (that Pretrained-Transformers and SemQA-CQA are also able to do), and secondly, SemQA-CQA-DS is also able to learn the nuances of the task specific query language leading to a better relevance model between user queries and CQA pairs (that are potential candidate responses).",
"Next, we do a qualitative analysis on the cases where SemQA-CQA-DS is able to improve on the E-commerce Baseline.",
"We identify two main areas of improvement: (1) improving relevance in cases where the baseline fails to capture the semantic intent, and (2) improving coverage in cases where the baseline fails to retrieves any response.",
"We present examples of both cases in table",
"5. The examples include cases where the language is ill-formed and incoherent and our distantly supervised User query Top CQA pair retrieved by SemQA-CQA-DS Top CQA pair retrieved by E-commerce Baseline Improving semantic relevance Do you have size varia-tion??? Like i need this in bigger",
"model still captures the intent and retrieve relevant responses.",
"We also ran a large scale online A/B experiment with 50% of the user traffic.",
"All locales were experimented at least for two weeks to ensure diversity in periodic patterns and have enough queries to achieve statistically significant conclusions (p-values < 0.01 in Chi-Square tests) about the improvement in metrics.",
"Here the SemQA-CQA-DS model is used to complement the existing E-commerce Baseline 6 to improve the coverage of 6 Details can't be disclosed due to proprietary information the system.",
"There are two metrics of interest: (1) the coverage (percentage of queries answered by the system), and (2) the new question asking rate (percentage of queries for which even after seeing the response, a user asks a question in the CQA forum; if the relevance of the answers improves, the question asking rate should decrease).",
"We report the change in absolute percentage points with respect to the E-commerce Baseline (for coverage +ve is better, and for question asking rate -ve is better).",
"The results are present in table 7.",
"SemQA-CQA-DS was able to improve coverage while reducing the rate of new questions posted by users in all locales thereby showing the efficacy of our approach at scale.",
"In this paper we presented SemQA', a practical transformer-based framework to provide instant QA efficiently on the product pages of e-commerce services.",
"Given a user query, our framework directly retrieves the relevant CQA pairs from the product page, where user queries and CQA pairs have significantly different language characteristics.",
"Our model is able to learn a robust relevance function between user queries and CQA pairs by learning representations that leverage the strengths of both syntactic and semantic features, without the need for any explicit human labeled data.",
"Our model is able to scale to large scale real-time e-commerce systems and at inference time only requires model encoding of user queries for by index lookups, and candidate responses are encoded offline into the index in a space efficient manner.",
"Extensive offline evaluation shows our approach generalizes to multiple locales spanning different languages with a PR-AUC gain by upto 12.15% over the existing system at a popular e-commerce service.",
"We also ran a large scale online A/B experiment with 50% of the user traffic and our framework was able to improve coverage by upto 6.92% by complementing the existing system.",
"As a future direction, we would like to expand our SemQA system to include responses from additional content on the product pages (reviews, descriptions etc.).",
"We believe some of the existing approaches to leverage reviews (discussed in section 2) can be used to complement our system to expand our relevance model beyond CQA data.",
"Another direction of research will be to include features such as accuracy, sentiment, freshness etc. within our proposed SemQA system's responses.",
"We thank all the anonymous reviewers for providing their valuable comments that helped us improve the quality of our paper.",
"We also thank our colleagues in the science, product, and engineering teams at Amazon for their valuable inputs."
] | [
"objective",
"abstain",
"objective",
"result",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"result",
"method",
"result",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"method",
"objective",
"result",
"abstain"
] |
[
"In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order of 100,000 annotated questions and answers.",
"We explore the more realistic few-shot setting, where only a few hundred training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between current pretraining objectives and question answering.",
"We propose a new pretraining scheme tailored for question answering: recurring span selection.",
"Given a passage with multiple sets of recurring spans, we mask in each set all recurring spans but one, and ask the model to select the correct span in the passage for each masked span.",
"Masked spans are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select the answer span.",
"The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while maintaining competitive performance in the high-resource setting.",
"1 1 Introduction The standard approach to question answering is to pretrain a masked language model on raw text, and then fine-tune it with a span selection layer on top (Devlin et al., 2019; Joshi et al., 2020; Liu et al., 2019).",
"While this approach is effective, and sometimes exceeds human performance, its success is based on the assumption that large quantities of annotated question answering examples are available.",
"For instance, both SQuAD (Rajpurkar et al., 2016, 2018) and Natural Questions (Kwiatkowski et al., 2019) contain an order of 100,000 question and Equal contribution.",
"SpanBERT (Full Data) RoBERTa SpanBERT Splinter (Ours)",
"answer pairs in their training data.",
"This assumption quickly becomes unrealistic as we venture outside the lab conditions of English Wikipedia, and attempt to crowdsource question-answer pairs in other languages or domains of expertise (Tsatsaro-nis et al., 2015; Kembhavi et al., 2017).",
"How do question answering models fare in the more practical case, where an in-house annotation effort can only produce a couple hundred training examples?",
"We investigate the task of few-shot question answering by sampling small training sets from existing question answering benchmarks.",
"Despite the use of pretrained models, the standard approach yields poor results when fine-tuning on few examples (Figure 1).",
"For example, RoBERTa-base fine-tuned on 128 question-answer pairs from SQuAD obtains around 40 F1.",
"This is somewhat expected, since the pretraining objective is quite different from the fine-tuning task.",
"While masked language modeling requires mainly local context around the masked token, question answering needs to align the question with the global context of the pas-Figure 2: An example paragraph before",
"(a) and after",
"(b) masking recurring spans.",
"Each color represents a different cluster of spans.",
"After masking recurring spans (replacing each with a single [QUESTION] token), only one span from each cluster remains unmasked, and is considered the correct answer to the masked spans in the cluster.",
"The pretraining task is to predict the correct answer for each [QUESTION] .",
"sage.",
"To bridge this gap, we propose (1) a novel self-supervised method for pretraining span selection models, and (2) a question answering layer that aligns a representation of the question with the text.",
"We introduce Splinter ( sp anl evel po inter ), a pretrained model for few-shot question answering.",
"The challenge in defining such a self-supervised task is how to create question-answer pairs from unlabeled data.",
"Our key observation is that one can leverage recurring spans : n-grams, such as named entities, which tend to occur multiple times in a given passage (e.g., Roosevelt in Figure 2).",
"We emulate question answering by masking all but one instance of each recurring span with a special [QUESTION] token, and asking the model to select the correct span for each such token.",
"To select an answer span for each [QUESTION] token in parallel , we introduce a question-aware span selection (QASS) layer, which uses the [QUESTION] token's representation to select the answer span.",
"The QASS layer seamlessly integrates with fine-tuning on real question-answer pairs.",
"We simply append the [QUESTION] token to the input question, and use the QASS layer to select the answer span (Figure 3).",
"This is unlike existing models for span selection, which do not include an explicit question representation.",
"The compatibility between pretraining and fine-tuning makes Splinter an effective few-shot learner.",
"Splinter exhibits surprisingly high performance given only a few training examples throughout a variety of benchmarks from the MRQA 2019 shared task (Fisch et al., 2019).",
"For example, Splinter-base achieves 72.7 F1 on SQuAD with only 128 examples, outperforming all baselines by a very wide margin.",
"An ablation study shows that the pretraining method and the QASS layer itself (even without pretraining) both contribute to improved performance.",
"Analysis indicates that Splinter's representations change significantly less during fine-tuning compared to the baselines, suggesting that our pretraining is more adequate for question answering.",
"Overall, our results highlight the importance of designing objectives and architectures in the few-shot setting, where an appropriate inductive bias can lead to dramatic performance improvements.",
"Extractive question answering is a common task in NLP, where the goal is to select a contiguous span a from a given text T that answers a question Q .",
"This format was popularized by SQuAD (Rajpurkar et al., 2016), and has since been adopted by several datasets in various domains (Trischler et al., 2017; Kembhavi et al., 2017) and languages (Lewis et al., 2020; Clark et al., 2020), with some extensions allowing for unanswerable questions (Levy et al., 2017; Rajpurkar et al., 2018) or multiple answer spans (Dua et al., 2019; Dasigi et al., 2019).",
"In this work, we follow the assumptions in the recent MRQA 2019 shared task (Fisch et al., 2019) and focus on questions whose answer is a single span.",
"The standard approach uses a pretrained encoder, Figure 3: An example of our fine-tuning setup, taken from the development set of SQuAD.",
"such as BERT (Devlin et al., 2019), and adds two parameter vectors s , e to the pretrained model in order to detect the start position s and end position e of the answer span a , respectively.",
"The input text T and question Q are concatenated and fed into the encoder, producing a contextualized token representation x i for each token in the sequence.",
"To predict the start position of the answer span, a probability distribution is induced over the entire sequence by computing the inner product of a learned vector s with every token representation (the end position is computed similarly using a vector e ): P ( s = i | T, Q ) = exp( x (cid:62) i s ) (cid:80) j exp( x (cid:62) j s ) , P ( e = i | T, Q ) = exp( x (cid:62) i e ) (cid:80) j exp( x (cid:62) j e ) .",
"The parameters s , e are trained during fine-tuning, using the cross-entropy loss with the start and end positions of the gold answer span.",
"This approach assumes that each token representation x i is contextualized with respect to the question.",
"However, the masked language modeling objective does not necessarily encourage this form of long-range contextualization in the pretrained model, since many of the masked tokens can be resolved from local cues.",
"Fine-tuning the attention patterns of pretrained masked language models may thus entail an extensive learning effort, difficult to achieve with only a handful of training examples.",
"We overcome this issue by (1) pretraining directly for span selection, and (2) explicitly representing the question with a single vector, used to detect the answer in the input text.",
"We formulate a new task for pretraining question answering from unlabeled text: recurring span selection .",
"We replace spans that appear multiple times in the given text with a special [QUESTION] token, except for one occurrence, which acts as the answer span for each (masked) cloze-style question.",
"The prediction layer is a modification of the standard span selection layer, which replaces the static start and end parameter vectors, s and e , with dynamically-computed boundary detectors based on the contextualized representation of each [QUESTION] token.",
"We reuse this architecture when fine-tuning on question-answer pairs by adding a [QUESTION] token at the end of the actual question, thus aligning the pretraining objective with the fine-tuning task.",
"We refer to our pretrained model as Splinter .",
"Given an input text T , we find all recurring spans : arbitrary n-grams that appear more than once in the same text.",
"For each set of identical recurring spans R , we select a single occurrence as the answer a and replace all other occurrences with a single [QUESTION] token.",
"2 The goal of recurring span selection is to predict the correct answer a for a given [QUESTION] token q R \\ { a } , each q thus acting as an independent cloze-style question .",
"Figure 2 illustrates this process.",
"In the given passage, the span Roosevelt appears three times.",
"Two of its instances (the second and third) are replaced with [QUESTION] , while one instance (the first) becomes the answer, and remains intact.",
"After masking, the sequence is passed through a transformer encoder, producing contextualized to-2 In practice, only some sets of recurring spans are processed; see Cluster Selection below.",
"ken representations.",
"The model is then tasked with predicting the start and end positions of the answer given each [QUESTION] token representation.",
"In Figure 2b, we observe four instances of this prediction task: two for the Roosevelt cluster, one for the Allied countries cluster, and one for Decla-ration by United Nations .",
"Taking advantage of recurring words in a passage (restricted to nouns or named entities) was proposed in past work as a signal for coreference (Kocijan et al., 2019; Ye et al., 2020).",
"We further discuss this connection in Section 7.",
"Span Filtering To focus pretraining on semantically meaningful spans, we use the following defi-nition for spans, which filters out recurring spans that are likely to be uninformative: (1) spans must begin and end at word boundaries, (2) we consider only maximal recurring spans, (3) spans containing only stop words are ignored, (4) spans are limited to a maximum of 10 tokens.",
"These simple heuristic filters do not require a model, as opposed to masking schemes in related work (Glass et al., 2020; Ye et al., 2020; Guu et al., 2020), which require part-of-speech taggers, constituency parsers, or named entity recognizers.",
"Cluster Selection We mask a random subset of recurring span clusters in each text, leaving some recurring spans untouched.",
"Specifically, we replace up to 30 spans with [QUESTION] from each input passage.",
"3 This number was chosen to resemble the 15% token-masking ratio of Joshi et al. (2020).",
"Note that in our case, the number of masked tokens is greater than the number of questions.",
"Our approach converts texts into a set of questions that need to be answered simultaneously.",
"The standard approach for extractive question answering (Devlin et al., 2019) is inapplicable, because it uses fixed start and end vectors.",
"Since we have multiple questions, we replace the standard parameter vectors s , e with dynamic start and end vectors s q , e q , computed from each [QUESTION] token q : s q = Sx q e q = Ex q Here, S , E are parameter matrices, which extract ad hoc start and end position detectors s q , e q from the given [QUESTION] token's representation x q .",
"The rest of our model follows the standard span selection model by computing the start and end position probability distributions.",
"The model can also be viewed as two bilinear functions of the question representation x q with each token in the sequence x i , similar to Dozat and Manning (2017): P ( s = i | T, q ) = exp( x (cid:62) i Sx q ) (cid:80) j exp( x (cid:62) j Sx q ) P ( e = i | T, q ) = exp( x (cid:62) i Ex q ) (cid:80) j exp( x (cid:62) j Ex q ) Finally, we use the answer's gold start and end points ( s a , e a ) to compute the cross-entropy loss: log P ( s = s a | T, q ) log P ( e = e a | T, q ) We refer to this architecture as the question-aware span selection (QASS) layer.",
"After pretraining, we assume access to labeled examples, where each training instance is a text T , a question Q , and an answer a that is a span in T .",
"To make this setting similar to pretraining, we simply append a [QUESTION] token to the input sequence, immediately after the question Q (see Figure 3).",
"Selecting the answer span then proceeds exactly as during pretraining.",
"Indeed, the advantage of our approach is that in both pretraining and fine-tuning, the [QUESTION] token representation captures information about the question that is then used to select the span from context.",
"To evaluate how pretrained models work when only a small amount of labeled data is available for fine-tuning, we simulate various low-data scenarios by sampling subsets of training examples from larger datasets.",
"We use a subset of the MRQA 2019 shared task (Fisch et al., 2019), which contains extractive question answering datasets in a unified format, where the answer is a single span in the given text passage.",
"Split I of the MRQA shared task contains 6 large question answering datasets: SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2017), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), HotpotQA (Yang et al., 2018), and Natural Questions (Kwiatkowski et al., 2019).",
"For each dataset, we sample smaller training datasets from the original training set with sizes changing on a logarithmic scale, from 16 to 1,024 examples.",
"To reduce variance, for each training set size, we sample 5 training sets using different random seeds and report average performance across training sets.",
"We also experiment with fine-tuning the models on the full training sets.",
"Since Split I of the MRQA shared task does not contain test sets, we evaluate using the official development sets as our test sets.",
"We also select two datasets from Split II of the MRQA shared task that were annotated by domain experts: BioASQ (Tsatsaronis et al., 2015) and TextbookQA (Kembhavi et al., 2017).",
"Each of these datasets only has a development set that is publicly available in MRQA, containing about 1,500 examples.",
"For each dataset, we sample 400 examples for evaluation (test set), and follow the same protocol we used for large datasets to sample training sets of 16 to 1,024 examples from the remaining data.",
"To maintain the few-shot setting, every dataset in our benchmark has well-defined training and test sets.",
"To tune hyperparameters, one needs to extract validation data from each training set.",
"For simplicity, we do not perform hyperparameter tuning or model selection (see Section 5), and thus use all of the available few-shot data for training.",
"We describe our experimental setup in detail, including all models and baselines.",
"Splinter-base shares the same architecture (trans-former encoder (Vaswani et al., 2017)), vocabulary (cased wordpieces), and number of parameters (110M) with SpanBERT-base (Joshi et al., 2020).",
"In all experiments, we compare Splinter-base to three baselines of the same capacity: RoBERTa (Liu et al., 2019) A highly-tuned and optimized version of BERT, which is known to perform well on a wide range of natural language understanding tasks.",
"SpanBERT (Joshi et al., 2020) A BERT-style model that focuses on span representations.",
"SpanBERT is trained by masking contiguous spans of tokens and optimizing two objectives:",
"(a) masked language modeling, which predicts each masked token from its own vector representation;",
"(b) the span boundary objective, which predicts each masked token from the representations of the unmasked tokens at the start and end of the masked span.",
"SpanBERT (Reimpl) Our reimplementation of SpanBERT, using exactly the same code, data, and hyperparameters as Splinter.",
"This baseline aims to control for implementation differences and measures the effect of replacing masked language modeling with recurring span selection.",
"Also, this version does not use the span boundary objective, as Joshi et al. (2020) reported no significant improvements from using it in question answering.",
"We train Splinter-base using Adam (Kingma and Ba, 2015) for 2.4M training steps with batches of 256 sequences of length 512.",
"4 The learning rate is warmed up for 10k steps to a maximum value of 10 4 , after which it decays linearly.",
"As in previous work, we use a dropout rate of 0.1 across all layers.",
"We follow Devlin et al. (2019) and train on English Wikipedia (preprocessed by WikiExtractor as in Attardi (2015)) and the Toronto BookCorpus (Zhu et al., 2015).",
"We base our implementation on the official TensorFlow implementation of BERT, and train on a single eight-core v3 TPU (v3-8) on the Google Cloud Platform.",
"For fine-tuning, we use the hyperparameters from the default configuration of the HuggingFace Transformers package (Wolf et al., 2020).",
"5 Specifically, we train all models using Adam (Kingma and Ba, 2015) with bias-corrected moment estimates for few-shot learning (Zhang et al., 2021).",
"When fine-tuning on 1024 examples or less, we train for either 10 epochs or 200 steps (whichever is larger).",
"For full-size datasets, we train for 2 epochs.",
"We set the batch size to 12 and use a maximal learning rate of 3 10 5 , which warms up in the first 10% of the steps, and then decays linearly.",
"An interesting question is how to fine-tune the QASS layer parameters (i.e., the S and E matrices in Section 3.2).",
"In our implementation, we chose to discard the pretrained values and fine-tune 4 We used this setting to approximate SpanBERT's hyperparameter setting in terms of epochs.",
"That said, SpanBERT-base was trained for a quarter of the steps (600k steps) using four times as many examples per batch (1024 sequences).",
"See Section 5.1 for additional baselines that control for this difference.",
"5 We did rudimentary tuning on the number of steps only, using a held-out portion of the SQuAD training set, since our training sets can be too small for the default values (e.g., running 10 epochs on 16 examples results in 20 update steps).",
"from a random initialization, due to the possible discrepancy between span statistics in pretraining and fine-tuning datasets.",
"However, we report results on fine-tuning without resetting the QASS parameters as an ablation study (Section 6.3).",
"Our experiments show that Splinter dramatically improves performance in the challenging few-shot setting, unlocking the ability to train question answering models with only hundreds of examples.",
"When trained on large datasets with an order of 100,000 examples, Splinter is competitive with (and often better than) the baselines.",
"Ablation studies demonstrate the contributions of both recurring span selection pretraining and the QASS layer.",
"Figure 4 shows the F1 score (Rajpurkar et al., 2016) of Splinter-base, plotted against all baselines for two datasets, TriviaQA and TextbookQA, as a function of the number of training examples (see Figure 6 in the appendix for the remaining datasets).",
"In addition, Table 1 shows the performance of individual models when given 16, 128, and 1024 training examples across all datasets (see Table 3 in the appendix for additional performance and standard deviation statistics).",
"It is evident that Splinter outperforms all baselines by large margins.",
"Let us examine the results on SQuAD, for example.",
"Given 16 training examples, Splinter obtains 54.6 F1, significantly higher than the best base-line's 18.2 F1.",
"When the number of training examples is 128, Splinter achieves 72.7 F1, outperforming the baselines by 17 points (our reimplementation of SpanBERT) to 30 (RoBERTa).",
"When considering 1024 examples, there is a 5-point margin between Splinter (82.8 F1) and SpanBERT (77.8 F1).",
"The same trend is seen in the other datasets, whether they are in-domain sampled from larger datasets (e.g. TriviaQA) or not; in TextbookQA, for instance, we observe absolute gaps of 9 to 23 F1 between Splinter and the next-best baseline.",
"Table 1 also shows the performance when fine-tuning on the entire training set, when an order of 100,000 examples are available.",
"Even though Splinter was designed for few-shot question answering, it reaches the best result in five out of six datasets.",
"This result suggests that when the target task is extractive question answering, it is better to pretrain with our recurring span selection task than with masked langauge modeling, regardless of the number of annotated training examples.",
"We perform an ablation study to better understand the independent contributions of the pretraining scheme and the QASS layer.",
"We first ablate the effect of pretraining on recurring span selection by applying the QASS layer to pretrained masked language models.",
"We then test whether the QASS layer's pretrained parameters can be reused in Splinter during fine-tuning without reinitializion.",
"While the QASS layer is motivated by our pretraining scheme, it can also be used without pretraining.",
"We apply a randomly-initialized QASS layer to our implementation of SpanBERT, and fine-tune it in the few-shot setting.",
"Figure 5 shows the results of this ablation study for two datasets (see Figure 7 in the appendix for more datasets).",
"We observe Model SQuAD TriviaQA NQ NewsQA SearchQA HotpotQA BioASQ TextbookQA 16 Examples RoBERTa 7.7 7.5 17.3 1.4 6.9 10.5 16.7 3.3 SpanBERT 12.5 12.8 19.7 6.0 13.0 12.6 22.0 5.6 SpanBERT (Reimpl) 18.2 11.6 19.6 7.6 13.3 12.5 15.9 7.5 Splinter 54.6 18.9 27.4 20.8 26.3 24.0 28.2 19.4 128 Examples RoBERTa 43.0 19.1 30.1 16.7 27.8 27.3 46.1 8.2 SpanBERT 48.5 24.2 32.2 17.4 34.3 35.1 55.3 9.4 SpanBERT (Reimpl) 55.8 26.3 36.0 29.5 26.3 36.6 52.2 20.9 Splinter 72.7 44.7 46.3 43.5 47.2 54.7 63.2 42.6 1024 Examples RoBERTa 73.8 46.8 54.2 47.5 54.3 61.8 84.1 35.8 SpanBERT 77.8 50.3 57.5 49.3 60.1 67.4 89.3 42.3 SpanBERT (Reimpl) 77.8 55.5 59.5 52.2 58.9 64.6 89.0 45.7 Splinter 82.8 64.8 65.5 57.3 67.3 70.3 91.0 54.5 Full Dataset RoBERTa 90.3 74.0 79.6 69.8 81.5 78.7 -SpanBERT 92.0 77.2 80.6 71.3 80.1 79.6 -SpanBERT (Reimpl) 92.0 75.8 80.5 71.1 81.4 79.7 -Splinter 92.2 76.5 81.0 71.3 83.0 80.7 -Table 1: Performance (F1) across all datasets when the number of training examples is 16, 128, and 1024.",
"that replacing the static span selection layer with QASS can significantly improve performance on few-shot question answering.",
"Having said that, most of Splinter's improvements in the extremely low data regime do stem from combining the QASS layer with our pretraining scheme, and this combination still outperforms all other variants as the amount of data grows.",
"QASS Reinitialization Between pretraining and fine-tuning, we randomly reinitialize the parameters of the QASS layer.",
"We now test the effect of fine-tuning with the QASS layer's pretrained parameters; intuitively, the more similar the pretraining data is to the task, the better the pretrained layer will perform.",
"Figure 5 shows that the advantage of reusing the pretrained QASS is data-dependent, and can result in both performance gains (e.g. extremely low data in SQuAD) and stagnation (e.g. BioASQ with 256 examples or more).",
"Other datasets exhibit similar trends (see appendix).",
"We identify three conditions that determine whether keeping the pretrained head is preferable: (1) when the number of training examples is extremely low, (2) when the target domain is similar to that used at pretraining (e.g. Wikipedia), and (3) when the questions are relatively simple (e.g. SQuAD versus HotpotQA).",
"The latter two conditions pertain to the Model Representation Similarity RoBERTa 0.29 SpanBERT 0.23 SpanBERT (Reimpl) 0.19 Splinter 0.89 Table 2: Cosine similarity of the representations produced by the transformer encoder before and after fine-tuning on 128 SQuAD examples.",
"compatibility between pretraining and fine-tuning tasks; the information learned in the QASS layer is useful as long as the input and output distribution of the task are close to those seen at pretraining time.",
"The recurring span selection objective was designed to emulate extractive question answering using unlabeled text.",
"How similar is it to the actual target task?",
"To answer this question, we measure how much each pretrained model's functionality has changed after fine-tuning on 128 examples of SQuAD.",
"For the purpose of this analysis, we measure change in functionality by examining the vector representation of each token as produced by the transformer encoder; specifically, we measure the cosine similarity between the vector produced by # Examples F 1 0.0 20.0 40.0 60.0 80.0 100.0 16 32 64 128 256 512 1024 SpanBERT (Reimpl) SpanBERT (Reimpl) + QASS Splinter Splinter (with pretrained QASS) SQuAD # Examples F 1 0.0 20.0 40.0 60.0 80.0 100.0 16 32 64 128 256 512 1024 SpanBERT (Reimpl) SpanBERT (Reimpl) + QASS Splinter Splinter (with pretrained QASS) BioASQ Figure 5: Ablation studies on SQuAD and BioASQ datasets.",
"the pretrained model and the one produced by the fine-tuned model, given exactly the same input.",
"We average these similarities across every token of 200 examples from SQuAD's test set.",
"Table 2 shows that Splinter's outputs are very similar before and after fine-tuning (0.89 average cosine similarity), while the other models' representations seem to change drastically.",
"This suggests that fine-tuning with even 128 question-answering examples makes significant modifica-tions to the functionality of pretrained masked language models.",
"Splinter's pretraining, on the other hand, is much more similar to the fine-tuning task, resulting in much more modest changes to the produced vector representations.",
"The remarkable results of GPT-3 (Brown et al., 2020) have inspired a renewed interest in few-shot learning.",
"While some work focuses on classification tasks (Schick and Schutze, 2020; Gao et al., 2021), our work investigates few-shot learning in the context of extractive question answering.",
"One approach to this problem is to create synthetic text-question-answer examples.",
"Both Lewis et al. (2019) and Glass et al. (2020) use the traditional NLP pipeline to select noun phrases and named entities in Wikipedia paragraphs as potential answers, which are then masked from the context to create pseudo-questions.",
"Lewis et al. (2019) use methods from unsupervised machine translation to translate the pseudo-questions into real ones, while Glass et al. (2020) keep the pseudo-questions but use information retrieval to find new text passages that can answer them.",
"Both works assume access to languageand domain-specific NLP tools such as part-of-speech taggers, syntactic parsers, and named-entity recognizers, which might not always be available.",
"Our work deviates from this approach by exploiting the natural phenomenon of recurring spans in order to generate multiple question-answer pairs per text passage, without assuming any languageor domain-specific models or resources are available beyond plain text.",
"Similar ideas to recurring span selection were used for creating synthetic coreference resolution examples (Kocijan et al., 2019; Varkel and Globerson, 2020), which mask single words that occur multiple times in the same context.",
"CorefBERT (Ye et al., 2020) combines this approach with a copy mechanism for predicting the masked word during pretraining, alongside the masked language modeling objective.",
"Unlike our approach, which was designed to align well with span selection, CorefBERT masks only single-word nouns (rather than arbitrary spans) and replaces each token in the word with a separate mask token (rather than a single mask for the entire multi-token word).",
"Therefore, it does not emulate extractive question answering.",
"We did not add CorefBERT as a baseline since the performance of both CorefBERT-base and CorefBERT-large was lower than SpanBERT-base's performance on the full-data MRQA benchmark, and pretraining CorefBERT from scratch was beyond our available computational resources.",
"We explore the few-shot setting of extractive question answering, and demonstrate that existing methods, based on fine-tuning large pretrained language models, fail in this setup.",
"We propose a new pretraining scheme and architecture for span selection that lead to dramatic improvements, reaching surprisingly good results even when only an order of a hundred examples are available.",
"Our work shows that choices that are often deemed unimportant when enough data is available, again become crucial in the few-shot setting, opening the door to new methods that take advantage of prior knowledge on the downstream task during model development.",
"This project was funded by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant ERC HOLI 819080), the Blavatnik Fund, the Alon Scholarship, the Yandex Initiative for Machine Learning and Intel Corporation.",
"We thank Google's TPU Research Cloud (TRC) for their support in providing TPUs for this research."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"method",
"objective",
"objective",
"objective",
"other",
"other"
] |
[
"The principal task in supervised neural machine translation (NMT) is to learn to generate target sentences conditioned on the source inputs from a set of parallel sentence pairs, and thus produce a model capable of generalizing to unseen instances.",
"However, it is commonly observed that the generalization performance of the model is highly influenced by the amount of parallel data used in training.",
"Although data augmentation is widely used to enrich the training data, conventional methods with discrete manipulations fail to generate diverse and faithful training samples.",
"In this paper, we present a novel data augmentation paradigm termed Continuous Semantic Augmentation (CSANMT), which augments each training instance with an adjacency semantic region that could cover adequate variants of literal expression under the same meaning.",
"We conduct extensive experiments on both rich-resource and low-resource settings involving various language pairs, including WMT14 English {German,French}, NIST Chinese English and multiple low-resource IWSLT translation tasks.",
"The provided empirical evidences show that CSANMT sets a new level of performance among existing augmentation techniques, improving on the state-of-the-art by a large margin.",
"1 1 Introduction Neural machine translation (NMT) is one of the core topics in natural language processing, which aims to generate sequences of words in the target language conditioned on the source inputs (Sutskever et al., 2014; Cho et al., 2014; Wu et al., 2016; Vaswani et al., 2017).",
"In the common supervised setting, the training objective is to learn a transformation from the source space to the target space X (cid:55) Y : f ( y | x ; ) with the usage of parallel data.",
"In this way, NMT models are expected to 1 The core codes are contained in Appendix E. be capable of generalizing to unseen instances with the help of large scale training data, which poses a big challenge for scenarios with limited resources.",
"To address this problem, various methods have been developed to leverage abundant unlabeled data for augmenting limited labeled data (Sen-nrich et al., 2016a; Cheng et al., 2016; He et al., 2016; Hoang et al., 2018; Edunov et al., 2018; He et al., 2020; Song et al., 2019).",
"For example, back-translation (BT) (Sennrich et al., 2016a) makes use of the monolingual data on the target side to synthesize large scale pseudo parallel data, which is further combined with the real parallel corpus in machine translation task.",
"Another line of research is to introduce adversarial inputs to improve the generalization of NMT models towards small perturbations (Iyyer et al., 2015; Fadaee et al., 2017; Wang et al., 2018; Cheng et al., 2018; Gao et al., 2019).",
"While these methods lead to significant boosts in translation quality, we argue that augmenting the observed training data in the discrete space inherently has two major limitations.",
"First, augmented training instances in discrete space are lack diversity.",
"We still take BT as an example, it typically uses beam search (Sennrich et al., 2016a) or greedy search (Lample et al., 2018a,c) to generate synthetic source sentences for each target monolingual sentence.",
"The above two search strategies are approximate algorithms to identify the maximum a-posteriori (MAP) output (Edunov et al., 2018), and thus favor the most frequent one in case of ambiguity.",
"Edunov et al. (2018) proposed a sampling strategy from the output distribution to alleviate this issue, but this method typically yields synthesized data with low quality.",
"While some extensions (Wang et al., 2018; Imamura et al., 2018; Khayrallah et al., 2020; Nguyen et al., 2020) augment each training instance with multiple literal forms, they still fail to cover adequate variants under the same meaning.",
"crete space to preserve their original meanings.",
"In the context of natural language processing, discrete manipulations such as adds, drops, reorders, and/or replaces words in the original sentences often result in significant changes in semantics.",
"To address this issue, Gao et al. (2019) and Cheng et al. (2020) instead replace words with other words that are predicted using language model under the same context, by interpolating their embeddings.",
"Although being effective, these techniques are limited to word-level manipulation and are unable to perform the whole sentence transformation, such as producing another sentence by rephrasing the original one so that they have the same meaning.",
"In this paper, we propose C ontinuous S emantic A ugmentation (CSANMT), a novel data augmentation paradigm for NMT, to alleviate both limitations mentioned above.",
"The principle of CSANMT is to produce diverse training data from a semantically-preserved continuous space.",
"Specifically, (1) we first train a semantic encoder via a tangential contrast, which encourages each training instance to support an adjacency semantic region in continuous space and treats the tangent points of the region as the critical states of semantic equivalence.",
"This is motivated by the intriguing observation made by recent work showing that the vectors in continuous space can easily cover adequate variants under the same meaning (Wei et al., 2020a).",
"(2) We then introduce a Mixed Gaussian Recurrent Chain (MGRC ) algorithm to sample a cluster of vectors from the adjacency semantic region.",
"(3) Each of the sampled vectors is finally incorporated into the decoder by developing a broadcasting integration network , which is agnostic to model architectures.",
"As a consequence, transforming discrete sentences into the continuous space can effectively augment the training data space and thus improve the generalization capability of NMT models.",
"We evaluate our framework on a variety of machine translation tasks, including WMT14 English-German/French, NIST Chinese-English and multiple IWSLT tasks.",
"Specifically, CSANMT sets the new state of the art among existing augmentation techniques on the WMT14 English-German task with 30.94 BLEU score.",
"In addition, our approach could achieve comparable performance with the baseline model with the usage of only 25% of training data.",
"This reveals that CSANMT has great potential to achieve good results with very few data.",
"Furthermore, CSANMT demonstrates consistent improvements over strong baselines in low resource scenarios, such as IWSLT14 English-German and IWSLT17 English-French.",
"Problem Definition Supposing X and Y are two data spaces that cover all possible sequences of words in source and target languages, respectively.",
"We denote ( x , y ) ( X , Y ) as a pair of two sentences with the same meaning, where x = { x 1 , x 2 , ..., x T } is the source sentence with T tokens, and y = { y 1 , y 2 , ..., y T } is the target sentence with T tokens.",
"A sequence-to-sequence model is usually applied to neural machine translation, which aims to learn a transformation from the source space to the target space X (cid:55) Y : f ( y | x ; ) with the usage of parallel data.",
"Formally, given a set of observed sentence pairs C = { ( x ( n ) , y ( n ) ) } N n =1 , the training objective is to maximize the log-likelihood: J mle () = E ( x , y ) C (cid:0) log P ( y | x ; ) (cid:1) .",
"The log-probability is typically decomposed as: log P ( y | x ; ) = (cid:80) T t =1 log P ( y t | y <t , x ; ) , where is a set of trainable parameters and y <t is a partial sequence before time-step t .",
"However, there is a major problem in the common supervised setting for neural machine translation, that is the number of training instances is very limited because of the cost in acquiring parallel data.",
"This makes it difficult to learn an NMT model generalized well to unseen instances.",
"Traditional data augmentation methods generate more training samples by applying discrete manipulations to unlabeled (or labeled) data, such as back-translation or randomly replacing a word with another one, which usually suffer from the problems of semantic deviation and the lack of diversity.",
"We propose a novel data augmentation paradigm for neural machine translation, termed continuous semantic augmentation (CSANMT), to better generalize the model's capability to unseen instances.",
"We adopt the Transformer (Vaswani et al., 2017) model as a backbone, and the framework is shown in Figure",
"1. In this architecture, an extra semantic encoder translates the source x and the target sentence y to real-value vectors r x = ( x ; ) and r y = ( y ; ) respectively, where ( ; ) is the forward function of the semantic encoder parameterized by (parameters other than ).",
"Definition",
"1. There is a universal semantic space among the source and the target languages for neural machine translation, which is established by a semantic encoder.",
"It defines a forward function ( ; ) to map discrete sentences into continuous vectors, that satisfies: ( x , y ) ( X , Y ) : r x = r y .",
"Besides, an adjacency semantic region ( r x , r y ) in the semantic space describes adequate variants of literal expression centered around each observed sentence pair ( x , y ) .",
"In our scenario, we first sample a series of vectors (denoted by R ) from the adjacency semantic region to augment the current training instance, that is R = { r (1) , r (2) , ..., r ( K ) } , where r ( k ) ( r x , r y ) .",
"K is the hyperparameter that determines the number of sampled vectors.",
"Each sample r ( k ) is then integrated into the generation process through a broadcasting integration network: o t = W 1 r ( k ) + W 2 o t + b, (2) where o t is the output of the self-attention module at position t .",
"Finally, the training objective in Eq.",
"(1) can be improved as J mle () = E ( x , y ) C , r ( k ) R (cid:0) log P ( y | x , r ( k ) ; ) (cid:1)(cid:1) .",
"By augmenting the training instance ( x , y ) with diverse samples from the adjacency semantic region, the model is expected to generalize to more unseen instances.",
"To this end, we must consider such two problems: (1) How to optimize the semantic encoder so that it produces a meaningful adjacency semantic region for each observed training pair.",
"(2) How to obtain samples from the adjacency semantic region in an efficient and effective way.",
"In the rest part of this section, we introduce the resolutions of these two problems, respectively.",
"Tangential Contrastive Learning We start from analyzing the geometric interpretation of adjacency semantic regions.",
"The schematic diagram is illustrated in Figure",
"2. Let ( x ( i ) , y ( i ) ) and ( x ( j ) , y ( j ) ) are two instances randomly sampled from the training corpora.",
"For ( x ( i ) , y ( i ) ) , the adjacency semantic region ( r x ( i ) , r y ( i ) ) is defined as the union of two closed balls that are centered by r x ( i ) and r y ( i ) , respectively.",
"The radius of both balls is d = r x ( i ) r y ( i ) 2 , which is also considered as a slack variable for determining semantic equivalence.",
"The underlying interpretation is that vectors whose distances from r x ( i ) (or r y ( i ) ) do not exceed d , are semantically-equivalent to both r x ( i ) and r y ( i ) .",
"To make ( r x ( i ) , r y ( i ) ) conform to the interpretation, we employ a similar method as in (Zheng et al., 2019; Wei et al., 2021) to optimize the semantic encoder with the tangential contrast.",
"Specifically, we construct negative samples by applying the convex interpolation between the current instance and other ones in the same training batch for instance comparison.",
"And the tangent points (i.e., the points on the boundary) are considered as the critical states of semantic equivalence.",
"The training objective is formulated as: J ctl ( ) = E ( x ( i ) , y ( i ) ) B (cid:18) log e s (cid:0) r x ( i ) ,r y ( i ) (cid:1) e s (cid:0) r x ( i ) ,r y ( i ) (cid:1) + (cid:19) , = |B| (cid:88) j & j = i (cid:16) e s (cid:0) r y ( i ) ,r y ( j ) (cid:1) + e s (cid:0) r x ( i ) ,r x ( j ) (cid:1)(cid:17) , (4) where B indicates a batch of sentence pairs randomly selected from the training corpora C , and s ( ) is the score function that computes the cosine similarity between two vectors.",
"The negative samples r x ( j ) and r y ( j ) are designed as the following 7932 Figure 3: The geometric diagram of the proposed MGRC sampling.",
"r x ( j ) = r x ( i ) + x ( r x ( j ) r x ( i ) ) , x ( d d x , 1] , r y ( j ) = r y ( i ) + y ( r y ( j ) r y ( i ) ) , y ( d d y , 1] , (5)",
"where d x = r x ( i ) r x ( j ) and d y = r y ( i ) r y ( j ) .",
"The two equations in Eq.",
"(5) set up when d x and d y are larger than d respectively, or else r x ( j ) = r x ( j ) and r y ( j ) = r y ( j ) .",
"According to this design, an adjacency semantic region for the i -th training instance can be fully established by interpolating various instances in the same training batch.",
"We follow Wei et al. (2021) to adaptively adjust the value of x (or y ) during the training process, and refer to the original paper for details.",
"MGRC Sampling To obtain augmented data from the adjacency semantic region for the training instance ( x , y ) , we introduce a Mixed Gaussian Recurrent Chain (denoted by MGRC ) algorithm to design an efficient and effective sampling strategy.",
"As illustrated in Figure 3, we first transform the bias vector r = r y r x according to a predefined scale vector , that is r , where is the element-wise product operation.",
"Then, we construct a novel sample r = r + r for augmenting the current instance, in which r is either r x or r y .",
"As a consequence, the goal of the sampling strategy turns into find a set of scale vectors, i.e. { (1) , (2) , ..., ( K ) } .",
"Intuitively, we can assume that follows a distribution with universal or Gaussian forms, despite the latter demonstrates better results in our experience.",
"Formally, we design a Algorithm 1 MGRC Sampling Input: The representations of the training instance ( x , y ) , i.e. r x and r y .",
"mixed Gaussian distribution as follow:",
"( k ) p ( | (1) , (2) , ..., ( k 1) ) , p = N (cid:0) 0 , diag( W 2 r ) (cid:1) + (1 . 0 ) N (cid:18) 1 k 1 k 1 (cid:88) i =1 ( i ) , 1 (cid:19) .",
"(6) This framework unifies the recurrent chain and the rejection sampling mechanism.",
"Concretely, we first normalize the importance of each dimension in r as W r = | r | min ( | r | ) max ( | r | ) min ( | r | ) , the operation | | takes the absolute value of each element in the vector, which means the larger the value of an element is the more informative it is.",
"Thus N ( 0 , diag( W 2 r )) limits the range of sampling to a subspace of the adjacency semantic region, and rejects to conduct sampling from the uninformative dimensions.",
"Moreover, N ( 1 k 1 (cid:80) k 1 i =1 ( i ) , 1 ) simulates a recurrent chain that generates a sequence of reasonable vectors where the current one is dependent on the prior vectors.",
"The reason for this design is that we expect that p in Eq.",
"(6) can become a stationary distribution with the increase of the number of samples, which describes the fact that the diversity of each training instance is not infinite.",
"is a hyperparameter to balance the importance of the above two Gaussian forms.",
"For a clearer presentation, Algorithm 1 summarizes the sampling process.",
"The training objective in our approach is a combination of J mle () in Eq.",
"(3) and J ctl ( ) in Eq.",
"(4).",
"In practice, we introduce a two-phase training procedure with mini-batch losses.",
"Firstly, we train the semantic encoder from scratch using the task-specific data, i.e. = argmax J ctl ( ) .",
"Secondly, we optimize the encoder-decoder model by maximizing the log-likelihood, i.e. = argmax J mle () , and fine-tune the semantic encoder with a small learning rate at the same time.",
"During inference, the sequence of target words is generated auto-regressively, which is almost the same as the vanilla Transformer (Vaswani et al., 2017).",
"A major difference is that our method involves the semantic vector of the input sequence for generation: y t = argmax y t P ( | y <t , x , r x ; ) , where r x = ( x ; ) .",
"This module is plug-in-use as well as is agnostic to model architectures.",
"We first apply CSANMT to NIST Chinese-English (Zh En), WMT14 English-German (En De) and English-French (En Fr) tasks, and conduct extensive analyses for better understanding the proposed method.",
"And then we generalize the capability of our method to low-resource IWSLT tasks.",
"Datasets.",
"For the Zh En task, the LDC corpus is taken into consideration, which consists of 1.25M sentence pairs with 27.9M Chinese words and 34.5M English words, respectively.",
"The NIST 2006 dataset is used as the validation set for selecting the best model, and NIST 2002 (MT02), 2003 (MT03), 2004 (MT04), 2005 (MT05), 2008 (MT08) are used as the test sets.",
"For the En De task, we employ the popular WMT14 dataset, which consists of approximately 4.5M sentence pairs for training.",
"We select newstest2013 as the validation set and newstest2014 as the test set.",
"For the En Fr task, we use the significantly larger WMT14 dataset consisting of 36M sentence pairs.",
"The combination of { newstest2012, 2013 } was used for model selection and the experimental results were reported on newstest2014 .",
"Refer to Appendix A for more details.",
"Training Details.",
"We implement our approach on top of the Transformer (Vaswani et al., 2017).",
"The semantic encoder is a 4-layer transformer encoder with the same hidden size as the backbone model.",
"Following sentence-bert (Reimers and Gurevych, 2019), we average the outputs of all positions as the sequence-level representation.",
"The learning rate for finetuning the semantic encoder at the second training stage is set as 1 e 5 .",
"All experiments are performed on 8 V100 GPUs.",
"We accumulate the gradient of 8 iterations and update the models with a batch of about 65K tokens.",
"The hyperparameters K and in MGRC sampling are tuned on the validation set with the range of K { 10 , 20 , 40 , 80 } and { 0 .",
"15 , 0 .",
"30 , 0 .",
"45 , 0 .",
"6 , 0 .",
"75 , 0 .",
"90 } .",
"We use the default setup of K = 40 for all three tasks, = 0 .",
"6 for both Zh En and En De while = 0 .",
"45 for En Fr.",
"For evaluation, the beam size and length penalty are set to 4 and 0.6 for the En De as well as En Fr, while 5 and 1.0 for the Zh En task.",
"Results of Zh En.",
"Table 1 shows the results on the Chinese-to-English translation task.",
"From the results, we can conclude that our approach outperforms existing augmentation strategies such as back-translation (Sennrich et al., 2016a; Wei et al., 2020a) and switchout (Wang et al., 2018) by a large margin (up to 3.63 BLEU), which verifies that augmentation in continuous space is more effective than methods with discrete manipulations.",
"Compared to the approaches that replace words in the embedding space (Cheng et al., 2020), our approach also demonstrates superior performance, which reveals that sentence-level augmentation with continuous semantics works better on generalizing to unseen instances.",
"Moreover, compared to the vanilla Transformer, our approach consistently 7934 Model WMT 2014 En De WMT 2014 En Fr #Params.",
"achieves promising improvements on five test sets.",
"Results of En De and En Fr.",
"From Table 2, our approach consistently performs better than existing methods (Sennrich et al., 2016a; Wang et al., 2018; Wei et al., 2020a; Cheng et al., 2020), yielding significant gains (0.65 1.76 BLEU) on the En De and En Fr tasks.",
"An exception is that Nguyen et al. (2020) achieved comparable results with ours via multiple forward and backward NMT models, thus data diversification intuitively demonstrates lower training efficiency.",
"Moreover, we observe that CSANMT gives 30.16 BLEU on the En De task with the base setting, significantly outperforming the vanilla Transformer by 2.49 BLEU points.",
"Our approach yields a further improvement of 0.68 BLEU by equipped with the wider architecture, demonstrating superiority over the standard Transformer by 2.15 BLEU.",
"Similar observations can be drawn for the En Fr task.",
"Effects of K and .",
"Figure 4 illustrates how the hyper-parameters K and in MGRC sampling affect the translation quality.",
"From Figures",
"4(a)-4(c), we can observe that gradually increasing the number of samples significantly improves BLEU scores, which demonstrates large gaps between K = 10 and K = 40 .",
"However, assigning larger values (e.g., 80 ) to K does not result in further improvements among all three tasks.",
"We conjecture that the reasons are two folds: (1) it is fact that the diversity of each training instance is not infinite and thus MGRC gets saturated is inevitable with K increasing.",
"(2) MGRC sampling with a scaled item (i.e., W r ) may degenerate to traverse in the same place.",
"This prompts us to design more sophisticated algorithms in future work.",
"In our experiments, we default set K = 40 to achieve a balance between the training efficiency and translation quality.",
"Figure",
"4(d) shows the effect of on validation sets, which balances the importance of two Gaussian forms during the sampling process.",
"The setting of = 0 .",
"6 achieves the best results on both the Zh En and En De tasks, and = 0 .",
"45 consistently outperforms other values on the En Fr task.",
"Lexical diversity and semantic faithfulness.",
"We demonstrate both the lexical diversity (mea-sured by TTR = num. of types num. of tokens ) of various trans-7935 0.00 0.25 0.50 0.75 1.00 Ratio of the Training Data 16 20 24 28 32 BLEU ( % ) Ours Back-translation + Mono.",
"lations and the semantic faithfulness of machine translated ones (measured by BLEURT with considering human translations as the references) in Table 4.",
"It is clear that CSANMT substantially bridge the gap of the lexical diversity between translations produced by human and machine.",
"Meanwhile, CSANMT shows a better capability on preserving the semantics of the generated translations than Transformer.",
"We intuitively attribute the significantly increases of BLEU scores on all datasets to these two factors.",
"We also have studied the robustness of CSANMT towards noisy inputs and the translationese effect, see Appendix D for details.",
"Effect of the semantic encoder.",
"We introduce two variants of the semantic encoder to investigate its performance on En De validation set.",
"Specifically, (1) we remove the extra semantic encoder and construct the sentence-level representations by averaging the sequence of outputs of the vanilla sentence encoder.",
"(2) We replace the default 4-layer semantic encoder with a large pre-trained model (PTM) (i.e., XLM-R (Conneau et al., 2020)).",
"The results are reported in Table 3.",
"Comparing line 2 with line 3, we can conclude that an extra semantic encoder is necessary for constructing the universal continuous space among different languages.",
"Moreover, when the large PTM is incorporated, our approach yields further improvements, but it causes massive computational overhead.",
"isons between different augmentation methods, we asymptotically increase the training data to analyze the performance of them on the En De translation.",
"As in Figure 5, our approach significantly outperforms the back-translation method on each subset, whether or not extra monolingual data (Sen-nrich et al., 2016a) is introduced.",
"These results demonstrate the stronger ability of our approach than discrete augmentation methods on generalizing to unseen instances with the same set of observed data points.",
"Encouragingly, our approach achieves comparable performance with the baseline model with only 25% of training data, which indicates that our approach has great potential to achieve good results with very few data.",
"Effect of MGRC sampling and tangential contrastive learning.",
"To better understand the effectiveness of the MGRC sampling and the tangential contrastive learning, we conduct detailed ablation studies in Table 5.",
"The details of four variants with different objectives or sampling strategies are shown in Appendix C .",
"From the results, we can observe that both removing the recurrent dependence and replacing the Gaussian forms with uniform distributions make the translation quality decline, but the former demonstrates more drops.",
"We also have tried the training objectives with other forms, such as variational inference and cosine similarity, to optimize the semantic encoder.",
"However, the BLEU score drops significantly.",
"shows the evolution of BLEU scores during training. It is obvious that our method performs consistently better than both the vanilla Transformer and the back-translation method at each iteration (ex-cept for the first 10K warm-up iterations, where the former one has access to less unique training data than the latter two due to the K times over-sampling). For the vanilla Transformer, the BLEU score reaches its peak at about 52K iterations. In comparison, both CSANMT and the back-translation method require 75K updates for convergence. In other words, CSANMT spends 44% more training costs than the vanilla Transformer, due to the longer time to make the NMT model converge with augmented training instances. This is the same as the back-translation method.",
"Word prediction accuracy. Figure 7 illustrates the prediction accuracy of both frequent and rare words. As expected, CSANMT generalizes to rare words better than the vanilla Transformer, and the gap of word prediction accuracy is as large as 16%. This indicates that the NMT model alleviates the probability under-estimation of rare words via continuous semantic augmentation.",
"Baselines.",
"In contrast to the vanilla Transformer, CSANMT involves with approximate 20% additional parameters.",
"In this section, we further compare against the baselines with increased amounts of parameters, and investigate the performance of CSANMT equipped with much stronger baselines (e.g. deep and scale Transformers (Ott et al., 2018; Wang et al., 2019; Wei et al., 2020b)).",
"From the results on WMT14 testsets in Table 6, we can observe that CSANMT still outperforms the vanilla Transformer (by more than 1.2 BLEU) under the same amount of parameters, which shows that the additional parameters are not the key to the improvement.",
"Moreover, CSANMT yields at least 0.9 BLEU gains equipped with much stronger baselines.",
"For example, the scale Transformer (Ott et al., 2018), which originally gives 29.3 BLEU in the En De task, now gives 31.37 BLEU with our continuous semantic augmentation strategy.",
"It is important to mention that our method can help models to achieve further improvement, even if they are strong enough.",
"We further generalize the capability of the proposed CSANMT to various low-resource machine translation tasks, including IWSLT14 English-German and IWSLT17 English-French.",
"The details of the datasets and model configurations can be found in Appendix B .",
"Table 7 shows the results of different models.",
"Compared to the vanilla Transformer, the proposed CSANMT improve the BLEU scores of the two tasks by 2.7 and 2.9 points, respectively.",
"This result indicates that the claiming of the continuous semantic augmentation enriching the training corpora with very limited observed instances.",
"Data Augmentation (DA) (Edunov et al., 2018; Kobayashi, 2018; Gao et al., 2019; Khayrallah et al., 2020; Pham et al., 2021) has been widely used in neural machine translation.",
"The most popular one is the family of back-translation (Sennrich et al., 2016a; Nguyen et al., 2020), which utilizes a target-to-source model to translate monolingual target sentences back into the source language.",
"Besides, constructing adversarial training instances with diverse literal forms via word replacing or embedding interpolating (Wang et al., 2018; Cheng et al., 2020) is beneficial to improve the generalization performance of NMT models.",
"Vicinal Risk Minimization (VRM) (Chapelle et al., 2000) is another principle of data augmentation, in which DA is formalized as extracting additional pseudo samples from the vicinal distribution of observed instances.",
"Typically the vicinity of a training example is defined using dataset-dependent heuristics, such as color (scale, mixup) augmentation (Simonyan and Zisserman, 2014; Krizhevsky et al., 2012; Zhang et al., 2018) in computer vision and adversarial augmentation with manifold neighborhoods (Ng et al., 2020; Cheng et al., 2021) in NLP.",
"Our approach relates to VRM that involves with an adjacency semantic region as the vicinity manifold for each training instance.",
"Sentence Representation Learning is a well investigated area with dozens of methods (Kiros et al., 2015; Cer et al., 2018; Yang et al., 2018).",
"In recent years, the methods built on large pre-trained models (Devlin et al., 2019; Conneau et al., 2020) have been widely used for learning sentence level representations (Reimers and Gurevych, 2019; Huang et al., 2019; Yang et al., 2019).",
"Our work is also related to the methods that aims at learning the universal representation (Zhang et al., 2016; Schwenk and Douze, 2017; Yang et al., 2021) for multiple semantically-equivalent sentences in NMT.",
"In this context, contrastive learning has become a popular paradigm in NLP (Kong et al., 2020; Clark et al., 2020; Gao et al., 2021).",
"The most related work are Wei et al. (2021) and Chi et al. (2021), which suggested transforming cross-lingual sentences into a shared vector by contrastive objectives.",
"We propose a novel data augmentation paradigm CSANMT, which involves with an adjacency semantic region as the vicinity manifold for each training instance.",
"This method is expected to make more unseen instances under generalization with very limited training data.",
"The main components of CSANMT consists of the tangential contrastive learning and the Mixed Gaussian Recurrent Chain (MGRC ) sampling.",
"Experiments on both richand low-resource machine translation tasks demonstrate the effectiveness of our method.",
"In the future work, we would like to further study the vicinal risk minimization with the combination of multi-lingual aligned scenarios and large-scale monolingual data, and development it as a pure data augmentator merged into the vanilla Transformer.",
"We would like to thank all of the anonymous reviewers (during ARR Oct. and ARR Dec.) for the helpful comments.",
"We also thank Baosong Yang and Dayiheng Liu for their instructive suggestions and invaluable help."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"method",
"other",
"other"
] |
[
"A range of studies have concluded that neural word prediction models can distinguish grammatical from ungrammatical sentences with high accuracy.",
"However, these studies are based primarily on monolingual evidence from English.",
"To investigate how these models' ability to learn syntax varies by language, we introduce CLAMS (Cross-Linguistic Assessment of Models on Syntax), a syntactic evaluation suite for monolingual and multilingual models.",
"CLAMS includes subject-verb agreement challenge sets for English, French, German, Hebrew and Russian, generated from grammars we develop.",
"We use CLAMS to evaluate LSTM language models as well as monolingual and multilingual BERT.",
"Across languages, monolingual LSTMs achieved high accuracy on dependencies without attractors, and generally poor accuracy on agreement across object relative clauses.",
"On other constructions, agreement accuracy was generally higher in languages with richer morphology.",
"Multilingual models generally underperformed monolingual models.",
"Multilingual BERT showed high syntactic accuracy on English, but noticeable deficiencies in other languages.",
"Neural networks can be trained to predict words from their context with much greater accuracy than the architectures used for this purpose in the past.",
"This has been shown to be the case for both recurrent neural networks (Mikolov et al., 2010; Sun-dermeyer et al., 2012; Jozefowicz et al., 2016) and non-recurrent attention-based models (Devlin et al., 2019; Radford et al., 2019).",
"To gain a better understanding of these models' successes and failures, in particular in the domain of syntax, proposals have been made for testing the Work done while at Johns Hopkins University.",
"models on subsets of the test corpus where successful word prediction crucially depends on a correct analysis of the structure of the sentence (Linzen et al., 2016).",
"A paradigmatic example is subject-verb agreement.",
"In many languages, including English, the verb often needs to agree in number (here, singular or plural) with the subject (asterisks represent ungrammatical word predictions): (1) The key to the cabinets is/*are next to the coins.",
"To correctly predict the form of the verb (under-lined), the model needs to determine that the head of the subject of the sentencean abstract, structurally defined notionis the word key rather than cabinets or coins .",
"The approach of sampling challenging sentences from a test corpus has its limitations.",
"Examples of relevant constructions may be difficult to find in the corpus, and naturally occurring sentences often contain statistical cues (confounds) that make it possible for the model to predict the correct form of the verb without an adequate syntactic analysis (Gu-lordava et al., 2018).",
"To address these limitations, a growing number of studies have used constructed materials, which improve experimental control and coverage of syntactic constructions (Marvin and Linzen, 2018; Wilcox et al., 2018; Futrell et al., 2019; Warstadt et al., 2019a).",
"Existing experimentally controlled data setsin particular, those targeting subject-verb agreement have largely been restricted to English.",
"As such, we have a limited understanding of the effect of the cross-linguistic variability in neural networks' syntactic prediction abilities.",
"In this paper, we introduce the Cross-Linguistic Assessment of Models on Syntax (CLAMS) data set, which extends the subject-verb agreement component of the Marvin and Linzen (2018) challenge set to French, German, Hebrew and Russian.",
"By focusing on a single linguistic phenomenon in related languages, 1 we can directly compare the models' performance across languages.",
"We see the present effort as providing a core data set that can be expanded in future work to improve coverage to other languages and syntactic constructions.",
"To this end, we release the code for a simple grammar engineering framework that facilitates the creation and generation of syntactic evaluation sets.",
"2 We use CLAMS to test two hypotheses.",
"First, we hypothesize that a multilingual model would show transfer across languages with similar syntactic constructions, which would lead to improved syntactic performance compared to monolingual models.",
"In experiments on LSTM language models (LMs), we do not find support for this hypothesis; contrarily, accuracy was lower for the multilingual model than the monolingual ones.",
"Second, we hypothesize that language models would be better able to learn hierarchical syntactic generalizations in morphologically complex languages (which provide frequent overt cues to syntactic structure) than in morphologically simpler languages (Gulordava et al., 2018; Lorimor et al., 2008; McCoy et al., 2018).",
"We test this using LSTM LMs we train, and find moderate support for this hypothesis.",
"In addition to our analysis of LSTM LMs, we demonstrate the utility of CLAMS for testing pre-trained word prediction models.",
"We evaluate multilingual BERT (Devlin et al., 2019), a bidirectional Transformer model trained on a multilingual corpus, and find that this model performs well on English, has mixed syntactic abilities in French and German, and performs poorly on Hebrew and Russian.",
"Its syntactic performance in English was somewhat worse than that of monolingual English BERT, again suggesting that interference between languages offsets any potential syntactic transfer.",
"Language models (LMs) are statistical models that estimate the probability of sequences of wordsor, equivalently, the probability of the next word of the sentence given the preceding ones.",
"Currently, the most effective LMs are based on neural networks that are trained to predict the next word in a 1 English, French, German and Russian are all Indo-European languages, and (Modern) Hebrew syntax exhibits European areal influence (for different perspectives, see Wexler 1990; Zuckermann 2006; Zeldes 2013).",
"large corpus.",
"Neural LMs are commonly based on LSTMs (Hochreiter and Schmidhuber, 1997; Sun-dermeyer et al., 2012) or non-recurrent attention-based architectures (Transformers, Vaswani et al. 2017).",
"The results of existing studies comparing the performance of the two architectures on grammatical evaluations are mixed (Tran et al., 2018; van Schijndel et al., 2019), and the best reported syntactic performance on English grammatical evaluations comes from LMs trained with explicit syntactic supervision (Kuncoro et al., 2018, 2019).",
"We focus our experiments in the present study on LSTM-based models, but view CLAMS as a general tool for comparing LM architectures.",
"A generalized version of the word prediction paradigm, in which a bidirectional Transformer-based encoder is trained to predict one or more words in arbitrary locations in the sentence, has been shown to be an effective pre-training method in systems such as BERT (Devlin et al., 2019).",
"While there are a number of variations on this architecture (Raffel et al., 2019; Radford et al., 2019), we focus our evaluation on the pre-trained English BERT and multilingual BERT.",
"Human acceptability judgments have long been employed in linguistics to test the predictions of grammatical theories (Chomsky, 1957; Schutze, 1996).",
"There are a number of formulations of this task; we focus on the one in which a speaker is expected to judge a contrast between two minimally different sentences (a minimal pair).",
"For instance, the following examples illustrate the contrast between grammatical and ungrammatical subject-verb agreement on the second verb in a coordination of short (2a) and long (2b) verb phrases; native speakers of English will generally agree that the first underlined verb is more acceptable than the second one in this context.",
"(2) Verb-phrase coordination :",
"a. The woman laughs and talks/*talk.",
"b. My friends play tennis every week and then get/*gets ice cream.",
"In computational linguistics, acceptability judgments have been used extensively to assess the grammatical abilities of LMs (Linzen et al., 2016; Lau et al., 2017).",
"For the minimal pair paradigm, this is done by determining whether the LM assigns a higher probability to the grammatical member of the minimal pair than to the ungrammatical member.",
"This paradigm has been applied to a range of constructions, including subject-verb agreement (Marvin and Linzen, 2018; An et al., 2019), negative polarity item licensing (Marvin and Linzen, 2018; Jumelet and Hupkes, 2018), filler-gap dependencies (Chowdhury and Zamparelli, 2018; Wilcox et al., 2018), argument structure (Kann et al., 2019), and several others (Warstadt et al., 2019a).",
"To the extent that the acceptability contrast relies on a single word in a particular location, as in (2), this approach can be extended to bidirectional word prediction systems such as BERT, even though they do not assign a probability to the sentence (Gold-berg, 2019).",
"As we describe below, the current version of CLAMS only includes contrasts of this category.",
"An alternative use of acceptability judgments in NLP involves training an encoder to classify sentences into acceptable and unacceptable, as in the Corpus of Linguistic Acceptability (CoLA, Warstadt et al. 2019b).",
"This approach requires supervised training on acceptable and unacceptable sentences; by contrast, the prediction approach we adopt can be used to evaluate any word prediction model without additional training.",
"Most of the work on grammatical evaluation of word prediction models has focused on English.",
"However, there are a few exceptions, which we discuss in this section.",
"To our knowledge, all of these studies have used sentences extracted from a corpus rather than a controlled challenge set, as we propose.",
"Gulordava et al. (2018) extracted English, Italian, Hebrew, and Russian evaluation sentences from a treebank.",
"Dhar and Bisazza (2018) trained a multilingual LM on a concatenated French and Italian corpus, and tested whether grammatical abilities transfer across languages.",
"Ravfogel et al. (2018) reported an in-depth analysis of LSTM LM performance on agreement prediction in Basque, and Ravfogel et al. (2019) investigated the effect of different syntactic properties of a language on RNNs' agreement prediction accuracy by creating synthetic variants of English.",
"Finally, grammatical evaluation has been proposed for machine translation systems for languages such as German and French (Sennrich, 2017; Isabelle et al., 2017).",
"To construct our challenge sets, we use a lightweight grammar engineering framework that we term attribute-varying grammars (AVGs).",
"This framework provides more flexibility than the hard-coded templates of Marvin and Linzen (2018) while avoiding the unbounded embedding depth of sentences generated from a recursive context-free grammar (CFG, Chomsky 1956).",
"This is done using templates , which consist of preterminals (which have attributes ) and terminals .",
"A vary statement specifies which preterminal attributes are varied to generate ungrammatical sentences.",
"Templates define the structure of the sentences in the evaluation set.",
"This is similar to the expansions of the S nonterminal in CFGs.",
"Preterminals are similar to nonterminals in CFGs: they have a left-hand side which specifies the name of the preterminal and the preterminal's list of attributes, and a right-hand side which specifies all terminals to be generated by the preterminal.",
"However, they are non-recursive and their right-hand sides may not contain other preterminals; rather, they must define a list of terminals to be generated.",
"This is because we wish to generate all possible sentences given the template and preterminal definitions; if there existed any recursive preterminals, there would be an infinite number of possible sentences.",
"All preterminals have an attribute list which is defined at the same time as the preterminal itself; this list is allowed to be empty.",
"A terminal is a token or list of space-separated tokens.",
"The vary statement specifies a list of preterminals and associated attributes for each.",
"Typically, we only wish to vary one preterminal per grammar such that each grammatical case is internally consistent with respect to which syntactic feature is varied.",
"The following is a simple example of an attribute-varying grammar: vary: V[] S[] je V[1,s] V[1,s] pense V[2,s] penses V[1,p] pensons V[2,p] pensez Preterminals are blue and attributes are orange.",
"Here, the first statement is the vary statement.",
"This is followed by a template, with the special S keyword in red.",
"All remaining statements are preterminal definitions.",
"All attributes are spec-ified within brackets as comma-separated lists; these may be multiple characters and even multiple words long, so long as they do not contain commas.",
"The output of this AVG is as follows (True indicates that the sentence is grammatical): True je pense False je penses False je pensons False je pensez This particular grammar generates all possible verb forms because the attribute list for V in the vary statement is empty, which means that we may generate any V regardless of attributes.",
"One may change which incorrect examples are generated by changing the vary statement; for example, if we change V[] to V[1] , we would only vary over verbs with the 1 (first-person) attribute, thus generating je pense and * je pensons .",
"One may also add multiple attributes within a single vary preterminal (implementing a logical AND) or multiple semicolon-separated vary preterminals (a logical OR).",
"Changing V[] to V[1,s] in the example above would generate all first-person singular V terminals (here, je pense ).",
"If instead we used V[1]; V[s] , this would generate all V terminals with either first-person and/or singular attributes (here, je pense , * je penses , and * je pensons ).",
"We construct grammars in French, German, Hebrew and Russian for a subset of the English constructions from Marvin and Linzen (2018), shown in Figure 1.",
"These are implemented as AVGs by native or fluent speakers of the relevant languages who have academic training in linguistics.",
"3 A number of the constructions used by Marvin and Linzen are English-specific.",
"None of our languages besides English allow relative pronoun dropping, so we are unable to compare performance across languages on reduced relative clauses ( the author the farmers like smile/*smiles ).",
"Likewise, we exclude Marvin and Linzen's sentential complement condition, which relies on the English-specific ability to omit complementizers ( the bankers knew the officer smiles/*smile ).",
"and negative polarity item licensing.",
"We do not include reflexive anaphora, as our languages vary significantly in how those are implemented.",
"French and German, for example, do not distinguish singular from plural third-person reflexive pronouns.",
"Similarly, negative polarity items (NPIs) have significantly different distributions across languages, and some of our evaluation languages do not even have items comparable to English NPIs (Giannaki-dou, 2011).",
"We attempt to use translations of all terminals in Marvin and Linzen (2018).",
"In cases where this is not possible (due to differences in LM vocabulary across languages), we replace the word with another in-vocabulary item.",
"See Appendix D for more detail on vocabulary replacement procedures.",
"For replicability, we observe only third-person singular vs. plural distinctions (as opposed to all possible present-tense inflections) when replicating the evaluation sets of Marvin and Linzen (2018) in any language.",
"Following Gulordava et al. (2018), we download recent Wikipedia dumps for each of the languages,",
"strip the Wikipedia markup using WikiExtractor, 4 and use TreeTagger 5 to tokenize the text and segment it into sentences.",
"We eliminate sentences with more than 5% unknown words.",
"Our evaluation is within-sentence rather than across sentences.",
"Thus, to minimize the availability of cross-sentential dependencies in the training corpus, we shuffle the preprocessed Wikipedia sentences before extracting them into train/dev/test corpora.",
"The corpus for each language consists of approximately 80 million tokens for training, as well as 10 million tokens each for development and testing.",
"We generate language-specific vocabularies containing the 50,000 most common tokens in the training and development set; as is standard, out-of-vocabulary tokens in the training, development, and test sets are replaced with <unk> .",
"We experiment with recurrent LMs and Transformer-based bidirectional encoders.",
"LSTM LMs are trained for each language using the best hyperparameters in van Schijndel et al. (2019).",
"6 We will refer to these models as monolingual LMs.",
"We also train a multilingual LSTM LM over all of our languages.",
"The training set for this model is a concatenation of all of the individual languages' training corpora.",
"The validation and test sets are concatenated in the same way, as are the vocabularies.",
"We use the same hyperparameters as the monolingual models (Footnote 6).",
"At each epoch, the corpora are randomly shuffled before batching; as such, each training batch consists with very high probability of sentences from multiple languages.",
"To obtain LSTM accuracies, we compute the total probability of each of the sentences in our challenge set, and then check within each minimal set whether the grammatical sentence has higher probability than the ungrammatical one.",
"Because the syntactic performance of LSTM LMs has been found to vary across weight initializations (McCoy et al., 2018; Kuncoro et al., 2019), we report mean accuracy over five random initializations for each 4 https://github.com/attardi/ wikiextractor 5 https://www.cis.uni-muenchen.de/ schmid/tools/TreeTagger/ 6 Specifically, we use 2-layer word-level LSTMs with 800 hidden units in each layer, 800-dimensional word embeddings, initial learning rate 20 .",
"0 (annealed after any epoch in which validation perplexity did not improve relative to the previous epoch), batch size 20 , and dropout probability 0 .",
"2 .",
"LM.",
"See Appendix C for standard deviations across runs on each test construction in each language.",
"We evaluate the syntactic abilities of multilingual BERT (mBERT, Devlin et al. 2019) using the approach of Goldberg (2019).",
"Specifi-cally, we mask out the focus verb, obtain predictions for the masked position, and then compare the scores assigned to the grammatical and ungrammatical forms in the minimal set.",
"We use the scripts provided by Goldberg 7 without modification, with the exception of using bert-base-multilingual-cased to obtain word probabilities.",
"This approach is not equivalent to the method we use to evaluate LSTM LMs, as LSTM LMs score words based only on the left context, whereas BERT has access to left and right contexts.",
"In some cases, mBERT's vocabulary does not include the focus verbs that we vary in a particular minimal set.",
"In such cases, if either or both verbs were missing, we skip that minimal set and calculate accuracies without the sentences contained therein.",
"The overall syntactic performance of the monolingual LSTMs was fairly consistent across languages (Table 1 and Figure 2).",
"Accuracy on short dependencies without attractorsSimple Agreement and Short VP Coordinationwas close to perfect in all languages.",
"This suggests that all monolingual models learned the basic facts of agreement, and were able to apply them to the vocabulary items in our materials.",
"At the other end of the spectrum, performance was only slightly higher than chance in the Across an Object Relative Clause condition for all languages except German, suggesting that LSTMs tend to struggle with center embeddingthat is, when a subject-verb dependency is nested within another dependency of the same kind (Marvin and Linzen, 2018; Noji and Takamura, 2020).",
"There was higher variability across languages in the remaining three constructions.",
"The German models had almost perfect accuracy in Long VP Coordination and Across Prepositional Phrase, compared to accuracies ranging between 0 .",
"76 and 0 .",
"87 for other languages in those constructions.",
"The Hebrew, Russian, and German models showed very high performance on the Across Subject Relative Clause condition: 0 .",
"88 compared to 0 .",
"6 0 .",
"71 7 https://github.com/yoavg/bert-syntax English French German Hebrew Russian Mono Multi Mono Multi Mono Multi Mono Multi Mono Multi Test Perplexity 57.90 66.13 35.48 57.40 46.31 61.06 48.78 61.85 35.09 54.61 Simple agreement 1.00 1.00 1.00 1.00 1.00 0.96 0.95 0.96 0.91 0.75 VP coordination (short) 0.94 0.96 0.97 0.85 0.99 1.00 1.00 0.95 0.98 0.92 VP coordination (long) 0.76 0.69 0.85 0.72 0.96 0.73 0.84 0.70 0.86 0.72 Across subject rel.",
"in other languages (recall that all our results are averaged over five runs, so this pattern is unlikely to be due to a single outlier).",
"With each of these trends, German seems to be a persistent outlier.",
"This could be due to its marking of cases in separate article tokensa unique feature among the languages evaluated hereor some facet of its word ordering or unique capitalization rules.",
"In particular, subject relative clauses and object relative clauses have the same word order in German, but are differentiated by the case markings of the articles and relative pronouns.",
"More investigation will be necessary to determine the sources of this deviation.",
"For most languages and constructions, the multilingual LM performed worse than the monolingual LMs, even though it was trained on five times as much data as each of the monolingual ones.",
"Its average accuracy in each language was at least 3 percentage points lower than that of the corresponding monolingual LMs.",
"Although all languages in our sample shared constructions such as prepositional phrases and relative clauses, there is no evidence that the multilingual LM acquired abstract representations that enable transfer across those languages; if anything, the languages interfered with each other.",
"The absence of evidence for syntactic transfer across languages is consistent with the results of Dhar and Bisazza (2020), who likewise found no evidence of transfer in an LSTM LM trained on two closely related languages (French and Italian).",
"One caveat is that the hyperparameters we chose for all of our LSTM LMs were based on a monolingual LM (van Schijndel et al., 2019); it is S i m p l e VP c o o r d ( s h o r t ) VP c o o r d ( l o n g ) A c r o s s s u b j .",
"possible that the multilingual LM would have been more successful if we had optimized its hyperparameters separately (e.g., it might benefit from a larger hidden layer).",
"These findings also suggest that test perplexity and subject-verb agreement accuracy in syntactically complex contexts are not strongly correlated cross-linguistically.",
"This extends one of the results of Kuncoro et al. (2019), who found that test perplexity and syntactic accuracy were not necessarily strongly correlated within English.",
"Finally, the multilingual LM's perplexity was always higher than that of the monolingual LMs.",
"At English French German Hebrew Russian Simple agreement 1.00 1.00 0.95 0.70 0.65 VP coordination (short) 1.00 1.00 0.97 0.91 0.80 VP coordination (long) 0.92 0.98 1.00 0.73 Across subject relative clause 0.88 0.57 0.73 0.61 0.70 Within object relative clause 0.83 Across object relative clause 0.87 0.86 0.93 0.55 0.67 Across prepositional phrase 0.92 0.57 0.95 0.62 0.56 Table 2: Multilingual BERT accuracies on CLAMS.",
"first glance, this contradicts the results of Ostling and Tiedemann (2017), who observed lower perplexity in LMs trained on a small number of very similar languages (e.g., Danish, Swedish, and Norwegian) than in LMs trained on just one of those languages.",
"However, their perplexity rose precipitously when trained on more languages and/or less-related languagesas we have here.",
"Table 2 shows mBERT's accuracies on all stimuli.",
"Performance on CLAMS was fairly high in the languages that are written in Latin script (English, French and German).",
"On English in particular, accuracy was high across conditions, ranging between 0 .",
"83 and 0 .",
"88 for sentences with relative clauses, and between 0 .",
"92 and 1 .",
"00 for the remaining conditions.",
"Accuracy in German was also high: above 0 .",
"90 on all constructions except Across Subject Relative Clause, where it was 0 .",
"73 .",
"French accuracy was more variable: high for most conditions, but low for Across Subject Relative Clause and Across Prepositional Phrase.",
"In all Latin-script languages, accuracy on Across an Object Relative Clause was much higher than in our LSTMs.",
"However, the results are not directly comparable, for two reasons.",
"First, as we have mentioned, we followed Goldberg (2019) in excluding the examples whose focus verbs were not present in mBERT's vocabulary; this happened frequently (see Appendix D for statistics).",
"Perhaps more importantly, unlike the LSTM LMs, mBERT has access to the right context of the focus word; in Across Object Relative Clause sentences ( the farmers that the lawyer likes smile/*smiles. ), the period at the end of the sentence may indicate to a bidirectional model that the preceding word ( smile/smiles ) is part of the main clause rather than the relative clause, and should therefore agree with farmers rather than lawyer .",
"In contrast to the languages written in Latin script, mBERT's accuracy was noticeably lower on Hebrew and Russianeven on the Simple Agreement cases, which do not pose any syntactic challenge.",
"Multilingual BERT's surprisingly poor syntactic performance on these languages may arise from the fact that mBERT's vocabulary (of size 110,000) is shared across all languages, and that a large proportion of the training data is likely in Latin script.",
"While Devlin et al. (2019) reweighted the training sets for each language to obtain a more even distribution across various languages during training, it remains the case that most of the largest Wikipedias are written in languages which use Latin script, whereas Hebrew script is used only by Hebrew, and the Cyrillic script, while used by several languages, is not as well-represented in the largest Wikipedias.",
"We next compare the performance of monolingual and multilingual BERT.",
"Since this experiment is not limited to using constructions that appear in all of our languages, we use additional constructions from Marvin and Linzen (2018), including reflexive anaphora and reduced relative clauses (i.e., relative clauses without that ).",
"We exclude their negative polarity item examples, as the two members of a minimal pair in this construction differ in more than one word position.",
"The results of this experiment are shown in Table 3.",
"Multilingual BERT performed better than English BERT on Sentential Complements, Short VP Coordination, and Across a Prepositional Phrase, but worse on Within an Object Relative Clause, Across an Object Relative Clause (no relative pro-noun), and in Reflexive Anaphora Across a Relative Clause.",
"The omission of the relative pronoun that caused a sharp drop in performance in mBERT, and a milder drop in English BERT.",
"Otherwise, both models had similar accuracies on other stimuli.",
"These results reinforce the finding in LSTMs that multilingual models generally underperform monolingual models of the same architecture, though there are specific contexts in which they can perform slightly better.",
"Languages vary in the extent to which they indicate the syntactic role of a word using overt morphemes.",
"In Russian, for example, the subject is generally marked with a suffix indicating nominative case, and the direct object with a different suffix indicating accusative case.",
"Such case distinctions are rarely indicated in English, with the exception of pronouns ( he vs. him ).",
"English also displays significant syncretism: morphological distinctions that are made in some contexts (e.g., eat for plural subjects vs. eats for singular subjects) are neutralized in others ( ate for both singular and plural subjects).",
"We predict that greater morphological complexity, which is likely to correlate with less syncretism, will provide more explicit cues to hierarchical syntactic structure, 8 and thus result in increased overall accuracy on a given language.",
"language, we use the CWALS metric of Bentz et al. (2016): (cid:80) ni =1 f i n .",
"This is a typological measure of complexity based on the World Atlas of Language Structures (WALS, Dryer and Haspelmath 2013), where f i refers to a morphological feature value normalized to the range [0 , 1] .",
"9 This essentially amounts to a mean over normalized values of quan-tified morphological features.",
"Here, n is 27 or 28 depending on the number of morphological categorizations present for a given language in WALS.",
"Does the morphological complexity of a language correlate with the syntactic prediction accuracy of LMs trained on that language?",
"In the LSTM LMs (Table 1), the answer is generally yes, but not consistently.",
"We see higher average accuracies for French than English (French has more distinct person/number verb inflections), higher for Russian than French, and higher for Hebrew than Russian (Hebrew verbs are inflected for person, number, and gender).",
"However, German is again an outlier: despite its notably lower complexity than Hebrew and Russian, it achieved a higher average accuracy.",
"The same reasoning applied in Section 6.1 for German's deviation from otherwise consistent trends applies to this analysis as well.",
"Nonetheless, the Spearman correlation between morphological complexity and average accuracy including German is 0 .",
"4 ; excluding German, it is 1 .",
"0 .",
"Because we have the same amount of training data per-language in the same domain, this could point to the importance of having explicit cues to lin-9 For example, if WALS states that a language has negative morphemes, f 28 is 1 ; otherwise, f 28 is 0 .",
"guistic structure such that models can learn that structure.",
"While more language varieties need to be evaluated to determine whether this trend is robust, we note that this finding is consistent with that of Ravfogel et al. (2019), who compared English to a synthetic variety of English augmented with case markers and found that the addition of case markers increased LSTM agreement prediction accuracy.",
"We see the opposite trend for mBERT (Table 2): if we take the average accuracy over all stimulus types for which we have scores for all languages i.e., all stimulus types except Long VP Coordination and Within an Object Relative Clausethen we see a correlation of = 0 .",
"9 .",
"In other words, accuracy is likely to decrease with increasing morphological complexity.",
"This unexpected inverse correlation may be an artifact of mBERT's limited vocabulary, especially in non-Latin scripts.",
"Morphologically complex languages have more unique word types.",
"In some languages, this issue can be mitigated to some extent by splitting the word into subword units, as BERT does; however, the effectiveness of such a strategy would be limited at best in a language with non-concatenative morphology such as Hebrew.",
"Finally, we stress that the exclusion of certain stimulus types and the differing amount of training data per-language act as confounding variables, rendering a comparison between mBERT and LSTMs difficult.",
"In this work, we have introduced the CLAMS data set for cross-linguistic syntactic evaluation of word prediction models, and used it to to evaluate monolingual and multilingual versions of LSTMs and BERT.",
"The design conditions of Marvin and Linzen (2018) and our cross-linguistic replications rule out the possibility of memorizing the training data or relying on statistical correlations/token collocations.",
"Thus, our findings indicate that LSTM language models can distinguish grammatical from ungrammatical subject-verb agreement dependencies with considerable overall accuracy across languages, but their accuracy declines on some constructions (in particular, center-embedded clauses).",
"We also find that multilingual neural LMs in their current form do not show signs of transfer across languages, but rather harmful interference.",
"This issue could be mitigated in the future with architectural changes to neural LMs (such as better handling of morphol-ogy), more principled combinations of languages (as in Dhar and Bisazza 2020), or through explicit separation between languages during training (e.g., using explicit language IDs).",
"Our experiments on BERT and mBERT suggest (1) that mBERT shows signs of learning syntactic generalizations in multiple languages, (2) that it learns these generalizations better in some languages than others, and (3) that its sensitivity to syntax is lower than that of monolingual BERT.",
"It is possible that its performance drop in Hebrew and Russian could be mitigated with fine-tuning on more data in these languages.",
"When evaluating the effect of the morphological complexity of a language on the LMs' syntactic prediction accuracy, we found that recurrent neural LMs demonstrate better hierarchical syntactic knowledge in morphologically richer languages.",
"Conversely, mBERT demonstrated moderately better syntactic knowledge in morphologically simpler languages.",
"Since CLAMS currently includes only five languages, this correlation should be taken as very preliminary.",
"In future work, we intend to expand the coverage of CLAMS by incorporating language-specific and non-binary phenomena (e.g., French subjunctive vs. indicative and different per-son/number combinations, respectively), and by expanding the typological diversity of our languages.",
"This material is based on work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746891.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the other supporting agencies.",
"Additionally, this work was supported by a Google Faculty Research Award to Tal Linzen, and by the United StatesIsrael Binational Science Foundation (award 2018284)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"objective",
"result",
"abstain",
"result",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"The product reviews summarization task aims to automatically produce a short summary for a set of reviews of a given product.",
"Such summaries are expected to aggregate a range of different opinions in a concise, coherent and informative manner.",
"This challenging task gives rise to two shortcomings in existing work.",
"First, summarizers tend to favor generic content that appears in reviews for many different products, resulting in template-like, less informative summaries.",
"Second, as reviewers often disagree on the pros and cons of a given product, summarizers sometimes yield inconsistent, self-contradicting summaries.",
"We propose the PASS system (Perturb-and-Select Summarizer) that employs a large pre-trained Transformer-based model (T5 in our case), which follows a few-shot fine-tuning scheme.",
"A key component of the PASS system relies on applying systematic perturbations to the model's input during inference, which allows it to generate multiple different summaries per product.",
"We develop a method for ranking these summaries according to desired criteria, coherence in our case, enabling our system to almost entirely avoid the problem of self-contradiction.",
"We compare our system against strong baselines on publicly available datasets, and show that it produces summaries which are more informative, diverse and coherent.",
"1 1 Introduction Online shopping has become a popular form of purchasing goods even before the most recent acceleration due to the COVID-19 pandemic.",
"As e-commerce websites strive to make the shopping process more useful and enjoyable for customers, many interesting challenges arise.",
"One challenge deals with how to surface opinions from product Completed during an internship at Amazon.",
"reviews in a concise yet reliable fashion.",
"The research community has addressed this challenge early on, starting from the work of (Hu and Liu, 2004) which defined the task of mining and summarizing customer reviews.",
"More recent advancements have relied on modern deep learning models trained on large collections of unannotated customer reviews (Brazinskas et al., 2020b,a).",
"Our first observation relates to the summaries generated by CopyCat (Brazinskas et al., 2020b) and FewSum (Brazinskas et al., 2020a), two of these SOTA systems, which tend to mix generic statements such as Would recommend this product to anyone along with more informative content such as The sound quality is good (see Table 6 in Appendix B for examples of such generated summaries).",
"Due to the emphasis of summarization systems on conciseness, we maintain that generic content should be used sparingly.",
"Additionally, even if the content is not extremely generic, customers may perceive summaries as less useful if they tend to repeat themselves across products.",
"In order to estimate the similarity between summaries generated for different products, we devise the Set-Pairwise-ROUGE metric (henceforth denoted as SPR ), that computes the average ROUGE (Lin, 2004b) scores of summaries for two different products, across all product pairs.",
"Using this metric we show that human written reference summaries are indeed far more diverse than their system generated counterparts, i.e. the SPR of reference summaries is significantly lower.",
"We henceforth denote the notion of cross product diversity of summaries as CP-Diversity .",
"Large pre-trained Transformer-based (Vaswani et al., 2017) models such as OpenAI's GPT-3 (Brown et al., 2020), Google's T5 (Raffel et al., 2020), PEGASUS (Zhang et al., 2020a), and Face-book's BART (Lewis et al., 2020) have made compelling advancements on a host of NLG tasks, including abstractive text summarization.",
"In this work we wish to leverage such models for product reviews summarization, aiming to generally improve the quality of generated summaries, and specifically in terms of their diversity across different products.",
"While we aim to generate humanlike texts, care has to be taken with respect to their correctness.",
"Indeed, concerns have been raised regarding the factual consistency of abstractive summaries, i.e., whether the facts conveyed in the summary agree with the source text (Cao et al., 2018; Kryscinski et al., 2019; Maynez et al., 2020).",
"Our second observation relates to this issue of factual consistency in the context of product reviews summarization.",
"Our task not only faces the risk of models hallucinating incorrect information, as in traditional abstractive text summarization, but also the risk of generating self-contradicting summaries which are not caused by model hallucinations.",
"The latter can occur when the source documents contradict one another.",
"This situation is quite likely because reviews may disagree on some product aspects or even disagree entirely.",
"For example, review A states a machine is easy to operate vs. review B which states it requires trial and error (see more examples in Table 7 in Appendix B).",
"In this unique setup, factual consistency is undefined and instead we wish to measure a different characteristic: the self-consistency of the summary.",
"To the best of our knowledge this issue has not been analyzed in the past and in some sense it renders the task ill-defined because it's not clear whether the summary is supposed to convey a range of possibly contradicting opinions about the product or the majority opinion.",
"From here on, we shall assume that a summary has to convey the majority opinion of the reviews and do so in a self-consistent manner.",
"Our proposed method starts by fine-tuning a strong pre-trained language model for product reviews summarization in a few-shot setup.",
"We then employ an input perturbation method that drops k reviews out of the input and concatenates the remaining reviews in random order.",
"This process, denoted as L k O , short for leave k out , produces notable variation between candidate summaries, which increases the model's output diversity.",
"2 Once we have produced a set of candidate 2 Diversity here is between candidate summaries for the summaries, we essentially cast our original summary generation problem as a ranking problem.",
"This approach gives us the choice over what kind of summary we are interested in as the final output, i.e. choosing our ranking criteria.",
"As mentioned above, our main concern in this work is producing self-consistent summaries.",
"Instead of basing our ranking solely on this criterion, we train a more general coherence summary ranker using human annotated coherence scores (Fabbri et al., 2021).",
"Finally, for each product, we select the top ranked summary as the system's output.",
"We compare our method against strong baselines, comprised of systems introduced in previous work on multi-document opinion summarization, and a T5 language model fine-tuned for abstractive text summarization.",
"We evaluate each over 3 dimensions, of which relevance and coherence are commonly used in summarization (Dang, 2005), and our newly introduced metric for CP-Diversity.",
"We demonstrate that our method produces high quality summaries which are more informative, diverse and coherent.",
"In summary, the main contributions of this work are: (1) highlight two shortcomings of existing product reviews summarizers, namely low CP-Diversity and self-inconsistency, and propose a dedicated metric for the former.",
"(2) Propose a method that leverages strong pre-trained models that improve the CP-Diversity while significantly reducing the risk of self-inconsistencies.",
"Product Review Summarization.",
"Product review summarization is a form of multi-document summarization in which a set of product reviews for a single product serves as the document cluster to be summarized.",
"A common approach for product review summarization, which centers the summary around a set of extracted aspects and their respective sentiment, is termed aspect-based summarization (Hu and Liu, 2004; Kansal and Toshni-wal, 2014; Wu et al., 2016; Angelidis and Lapata, 2018; Coavoux et al., 2019).",
"As in traditional summarization, there are two inherently different requirements for the task, a simplified one, in which the goal is to provide an extractive output, i.e., a list of sentences extracted from the review set, or a more advanced one, in which the goal is to provide an abstrac-same product, not to be confused with CP-Diversity.",
"tive output, i.e., generated content not restricted to use the same wording of the source set.",
"Extractive summarization include earlier works such as (Carenini et al., 2006; Lerman et al., 2009; Xiong and Litman, 2014).",
"More recently, (Tan et al., 2017) suggested a novel generative topic aspect sentiment model, while (Angelidis et al., 2021) suggested a novel system able to extract both general and aspect-specific summaries.",
"As for abstractive summarization, recent advances on pre-training neural networks were explored in the context of product reviews in unsupervised and few-shot learning schemes which led to promising results (Chu and Liu, 2019; Brazinskas et al., 2020b,a; Suhara et al., 2020; Amplayo et al., 2021).",
"Evaluating Summarization Systems.",
"Evaluation of summarization systems is usually performed utilizing a mix of automatic metrics and human ratings.",
"Among the automated metrics, probably the most well-known is the ROUGE family of scores (Lin, 2004b) that measures n-gram overlap between generated summaries and corresponding reference summaries.",
"Many other metrics that aim to quantify how well generated summaries align with reference summaries have been proposed, such as BLEU (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), ROUGE-WE (Ng and Abrecht, 2015) and Bert-Score (Zhang et al., 2020b) to name a few.",
"Unfortunately, such metrics alone do not tell the whole story and recently several works observed that a new requirement is necessary in order to ensure that facts from the summary agree with the source document (Cao et al., 2018; Kryscinski et al., 2019; Maynez et al., 2020).",
"This requirement is usually known as factual consistency.",
"As for human ratings, those are usually obtained across several dimensions of summary quality.",
"The DUC 2005 task (Dang, 2005) suggested the following 5 dimensions: Grammaticality, Non-redundancy, Referential clarity, Focus and Structure, and Coherence.",
"In the context of product reviews summarization (Brazinskas et al., 2020a) use the standard ROUGE-1/2/L metrics as well human comparative judgments on 5 dimensions: Fluency, Coherence, Non-Redundancy, Informativeness and Sentiment.",
"To the best of our knowledge the issues of self-consistency and diversity across products were not directly analyzed before.",
"In this section, we propose a system that employs a large pre-trained Transformer-based model (T5) in a few-shot fine-tuning scheme for multiple reviews abstractive summarization.",
"We aim to leverage the inherent diversity between reviews for a given product to our advantage, by applying systematic perturbations to the model's input during inference.",
"This allows our fine-tuned model to generate multiple different candidate summaries per product, exhibiting variability both in the content being surfaced as well as in the phrasing of said content.",
"We develop a ranking mechanism for selecting the best candidate summary according to desired criteria, which in our case is coherence.",
"We provide an end-to-end diagram of the PASS Summarizer's components in Figure",
"1. 3.1 Fine-tuning T5 for Summary Generation PASS relies on a pre-trained T5 language model, which we fine-tuned on a small publicly available dataset for product reviews summarization (Brazinskas et al., 2020a).",
"We follow a similar fine-tuning scheme for abstractive text summarization to the one presented in (Raffel et al., 2020) with the exception that we concatenate the multiple reviews into a single input text as a preprocessing step.",
"As the dataset contains multiple reference summaries per product, we repeat our training process for each reference summary using the same (concatenated) input text.",
"In light of the natural diversity existing between product reviews, we explore a modeling approach which allows for such diversity to emerge in our summarizer's output as well.",
"We do this by manipulating the model's input, sampling which reviews to use each time, in a way that allows for increasing the relative prevalence of certain reviews over others.",
"We also re-shuffle the reviews before concatenation to ensure the model is not affected by their internal order.",
"Note that prior attempts have been made to directly manipulate the content within the reviews (Amplayo and Lapata, 2020) a path that we do not explore here.",
"Our intervention method guarantees that each review's correctness, integrity and meaning are preserved.",
"Since it only affects the subset of reviews being used and their order of concatenation, this increases the potential for diversity (per product and across products) Figure 1: A diagram of the PASS components, with an example for a collection of reviews of size d = 4 , k = 1 .",
"L k O Input Perturbation Method.",
"Given a set of d reviews R = { r 1 , ..., r d } for a product p , our perturbation method iterates over A ( R ) the set of all possible subests of size d k in R , A ( R ) = (cid:8) S (cid:12)(cid:12) S R, | S | = d k, 1 k < d (cid:9) .",
"Given a subset S A ( R ) we concatenate its reviews in random order, and feed the concatenated text into our fine-tuned T5 summarizer, which generates a candidate summary c .",
"We repeat this step for all S A ( R ) , resulting in a set of generated candidate summaries which we denote as C = { c 1 , ..., c m } , m = (cid:0) dk (cid:1) .",
"This process, denoted as L k O , short for leavek -out , produces notable variation between candidate summaries (see Table 8 in Appendix B for examples), and allows for different content and aspects to emerge in the summaries, which were less likely to have surfaced otherwise.",
"We found that this perturbation approach produces higher variation across candidate summaries when applying it on the model's input only during the inference stage, not during training.",
"Our method produces multiple perturbed versions of a given input while its references remain the same.",
"If applied during training, this might encourage the model to fit a larger range of input features to a smaller set of outputs.",
"We are interested in the opposite effect we would like to encourage higher output variation as a function of input diversity.",
"Note that when dealing with large review sets, achieving diversity does not require iterating over all subsets in A ( R ) .",
"For such scenarios, we recommend constructing a fixed number ( m ) of randomly sampled review subsets, so long as m is sufficiently large.",
"In our experiments we employ the full L k O input perturbation method, since standard datasets focus on relatively small review sets.",
"3 An alternative method for increasing novelty and variability in the output of a generative language model, is to directly intervene in its decoding algorithm, e.g., Beam Search (Vijayakumar et al., 2016; Cibils et al., 2018).",
"Note that this will not have the same effect as our proposed approach.",
"First, since beam search is a decoding algorithm, it only has access to the underlying language model, and is completely separated from the model's input.",
"Second, beam search's mechanism is fixed to make local word-by-word decisions, before the complete summary is revealed.",
"Finally, our approach guarantees that given a set of input texts, at least one candidate output will not be influenced at all by a specific input text (or more if k > 1 ).",
"For example, if a set of 4 reviews contains 3 reviews discussing price, and 1 review discussing quality, our method guarantees that at least 1 candidate summary will be generated solely based on the first three (discussing price).",
"Furthermore, our method increases the probability for a summary to mention both price and quality, when a review discussing price is left out.",
"Once a set of candidate summaries are generated per product, we have essentially cast our summary generation problem as a summary ranking problem.",
"This allows us to retrieve a summary, which ranks best out of a diverse set of candidates, according to desired, interpretable criteria.",
"3 A few recent works attempt to explicitly address this issue (Shapira and Levy, 2020; Angelidis et al., 2021).",
"As mentioned in Section 1, our main concern is producing CP-diverse yet self-consistent and coherent summaries.",
"Since our input perturbation method generates multiple candidate summaries, we are now left with the task of ranking this set by coherence.",
"We would like the ranking process to filter out self-contradicting, incoherent or inconsistent candidates (by assigning low rank) and to promote well-formed, coherent candidates to the top of the list.",
"To achieve this, we train a classifier that receives two summaries as input and decides whether the first summary is more coherent than the second or the opposite.",
"The classifier can also decide that both summaries are equally coherent.",
"Using such a classifier, we can obtain a partial ranking of the reviews by running all pairwise comparisons and count the number of times each summary was better than the summary it was paired with.",
"Pairwise Summary Classifier.",
"We train a model to classify a pair of summaries for coherence, by fine-tuning a pre-trained T5 model for pairwise text classification.",
"Given a pair of summaries, the model is required to classify them as either: summary A is more coherent, summary B is more coherent, or A and B are equivalent in terms of coherence.",
"A pair of summaries can often be considered equivalent when judging them according to specific criteria, stemming from the natural fact that often more than one summary can be considered correct or good.",
"Indeed it has been shown that several reference summaries are needed for reliable evaluation showing that there is more than one truth (Lin, 2004a).",
"Since this model is used as a comparator for ranking candidate summaries, we are especially sensitive to specific types of classification errors.",
"If the model mistakenly classifies a summary to be more coherent than the other while the opposite is true, we consider this a critical classification error.",
"This type of error could be detrimental to the validity of the ranking process, therefore we aim to minimize its rate.",
"While other types of errors also reduce the classifier's accuracy, we consider a mistake where the model classifies two summaries to be equivalent when in truth one is more coherent than the other, as less harmful for ranking purposes.",
"Ranking Method.",
"Our proposed ranking method iterates over all possible pairs of candidate summaries for a given product, and counts how many times each candidate was classified by the coherence pairwise classifier (our primary comparator), as more coherent than its counterpart.",
"As a tie-breaking, secondary comparator, we train an additional pairwise summary classifier, to classify which candidate is more fluent, out of a pair of given candidates.",
"We select the top ranked candidate as the final output summary for each product.",
"We utilize a recent publicly available Amazon product reviews summarization dataset (Brazin-skas et al., 2020a) for fine-tuning the T5 model which underlines the PASS system and for evaluating the L k O input perturbation method, both in isolation and as part of the end-to-end PASS system.",
"The dataset contains product reviews and reference summaries for 60 products on Amazon.",
"Each product has 8 reviews and 3 reference summaries written by crowd source workers.",
"We follow the dataset splits to the training, development and test sets provided by the authors of the dataset.",
"While we mainly focus on product reviews summarization, we include the Yelp business reviews summarization dataset (also from (Brazinskas et al., 2020a)) in our end-to-end evaluation for the sake of completeness.",
"The Yelp dataset contains business reviews and reference summaries for 100 businesses.",
"For training and evaluating the pairwise coherence classifier, we utilize a public dataset of human annotated summaries (Fabbri et al., 2021), generated by 16 modern text summarization models for 100 news articles ( 1600 examples in total) from the CNN/DailyMail dataset (Hermann et al., 2015).",
"Each summary was rated (on a scale of 1 to 5 ) across 4 dimensions: coherence, consistency, fluency and relevance, by 5 independent crowd source workers and 3 independent experts ( 8 annotations in total).",
"We chose to use the ex-perts' annotations only, as they are considered to be more accurate and reliable for coherence and fluency (Fabbri et al., 2021).",
"We construct a pairwise version of this dataset, by creating summary pairs from all 16 model outputs for each of the 100 news stories, along with their annotation scores for each metric respectively.",
"We split the dataset according to news stories, by randomly sampling 20 stories for the test set, 16 stories for the development set and the rest are used for the training set.",
"Given a pair of summaries ( a, b ) , their respective average expert rating, ( r a , r b ) and a threshold parameter (cid:15) , we define the label for that pair as: label ( a, b ) = A, if r a r b (cid:15) B, if r b r a (cid:15) E, otherwise where E denotes the case where both summaries are equivalent, A denotes that summary a is better than b and B denotes the opposite.",
"To ensure that our training data is invariant to a pair's internal order, we create examples for all ( a, b ) and ( b, a ) pairs in the training set.",
"Fine-tuning T5 for Summary Generation.",
"We fine-tune a T5-Base model ( 220 M parameters (Raffel et al., 2020)) for abstractive text summarization as described in 3.1 on the training set, and tune its hyperparameters on the development set.",
"We train for maximum 20 epochs while employing a standard early stopping mechanism (Falcon, 2019) based on the development set's average loss per epoch.",
"We fine-tune a separate model for the Amazon and Yelp datasets.",
"Hyperparameters and further details can be found in Appendix A. L k O Input Perturbation.",
"We experiment with the L k O method described in Section 3.2 with k { 1 , 2 , 3 , 4 , 5 } on the development set.",
"For the end-to-end system we choose k = 2 aiming to obtain high output diversity while limiting computation complexity, and avoiding the risk of dropping a majority of the reviews ( k > 4 ) each time.",
"We provide evaluation details in 5.1.",
"Pairwise Summary Classifier.",
"We train two T5-Base models to classify which summary is better, one in terms of coherence, to be used as our ranking method's primary comparator, and one in terms of fluency to break ties.",
"We experimented with different values for (cid:15) { 0 .",
"25 , 0 .",
"5 , 0 .",
"75 , 1 .",
"0 } , and chose (cid:15) = 0 .",
"5 for the coherence classifier and (cid:15) = 0 .",
"25 for the fluency classifier.",
"The choice of (cid:15) was based on dataset statistics per metric and evaluation of each model's performance on the development set.",
"Baselines.",
"We compare the PASS system to four baselines: COPYCAT (Brazinskas et al., 2020b) is an unsupervised reviews summarizer that is trained to generate a review given other reviews for the same product.",
"The authors suggest a novelty mechanism that controls the extent to which the summary deviates from the inputs.",
"FEWSUM (Brazinskas et al., 2020a) is a few-shot reviews summarizer that builds upon the ideas of CopyCat but also conditions the model on certain linguistic properties such as writing style.",
"T5 is the pre-trained T5-base language model which was not fine-tuned.",
"We do not report results for this model, as it consistently performed worst.",
"T5-FT is the fine-tuned T5-base model described above.",
"We do not report results for MEANSUM (Chu and Liu, 2019) since it was consistently outperformed by FEWSUM (Brazinskas et al., 2020a).",
"Recall that our main objective for generating candidate summaries is to encourage output diversity.",
"Hence, we would like to verify that our perturbation method, L k O , produces sufficiently diverse candidates for a given product.",
"In order to measure textual diversity between candidate summaries for a given product, we need to devise a diversity metric.",
"We propose the SPR metric (shorthand for Set-Pairwise-ROUGE) which measures the opposite of diversity, i.e., the average lexical similarity across pairs of summaries from a given set.",
"We base SPR on ROUGE F1 scores for any n-gram level, therefore SPR-1 relies on ROUGE-1 F1 scores and so on.",
"SPR Formal Definition.",
"For a given set of summaries S = { s 1 , ..., s n } , we define the set of all pairs from S as P ( S ) = (cid:8) { s i , s j } (cid:12)(cid:12) s i S, s j S, i (cid:54) = j (cid:9) .",
"We then define the set-pairwise-rouge (SPR) metric as: SP R ( S ) = 1 | P ( S ) | (cid:88) { s i ,s j } P ( S ) ROUGE ( s i , s j ) Note that SPR is a general metric of diversity, applicable to an arbitrary set of summaries.",
"Therefore, it can be applied to measure both IP-Diversity (in-product diversity, as we do here) and CP-Diversity (cross-product diversity, as we do in Section 5.3).",
"For clarity, we shall denote IP-SPR when measuring IP-Diversity and CP-SPR when measuring CP-Diversity with SPR.",
"the biggest drop in similarity (increase in diversity) between k = 1 and k = 2 .",
"While we aim to increase diversity, we are also mindful of the increase in runtime as k grows.",
"Additionally, we would like to avoid sampling out a majority of reviews ( k > 4 ), since the risk of generating a summary with minority view or low informativeness also increases with k .",
"Indeed, as shown in Figure 3, which depicts a similar box plot but this time of the ROUGE-2 scores against the reference summaries, the variance increases with k and the worst-case ROUGE-2 score decreases with k .",
"The pairwise summary classifiers can be evaluated directly using human scores from (Fabbri et al., 2021) after adapting them to our ternary classification task.",
"Figure 4 depicts the confusion matrix for Figure 3: ROUGE-2 F1 scores box plot, for all candidate summary sets generated with LkO input perturbation method for k = 1 , ..., 5 .",
"our coherence classifier.",
"We observe that the estimated probability of a critical error (choosing A over B or B over A) is very low, 0 .",
"05 , while at the same time the overall accuracy of 0 .",
"61 is reasonably high compared to 0 .",
"33 and 0 .",
"36 achieved by the random and majority (always predicts that A and B are equally coherent) baselines respectively.",
"Applying the classifier to a set of 28 candidates per product, yields a single top ranking candidate for 70% of products in the Amazon test set.",
"To further break ties, we utilize the fluency classifier as a secondary comparator.",
"See Figure 10 in Appendix C for a similar confusion matrix for the fluency classifier.",
"Again, the probability for a critical error is very low, 0 .",
"0125 , while the overall accuracy is 0 .",
"67 .",
"After applying fluency as a tie breaker, we find that all products in the Amazon test set have a unique top ranking summary.",
"The training data for both classifiers comes from a domain (News Articles) which is different from our main dataset's domain (Product Re-views).",
"We hypothesize that coherence and fluency are linguistic properties that are not heavily tied with the domain, since they relate to a sum-mary's overall collective and individual sentence quality (Dang, 2005).",
"Indeed, our results show (see Table 2) that PASS benefited from this data despite the risk of a possible domain shift.",
"4 Figure 4: Confusion matrix for the Coherence Pairwise Classifier.",
"We evaluate our end-to-end system across 3 dimensions.",
"The first, informativeness, is traditionally evaluated using the ROUGE-1/2/L F1 measures (Lin, 2004b) and we follow suit.",
"The second dimension, which subsumes the self-consistency issue, is coherence.",
"To this end, we conducted a crowdsourced human evaluation task, which compares between the generated summaries of 4 different summarization systems, including our proposed PASS system.",
"We used Best-Worst Scaling (Louviere and Woodworth, 1991; Louviere et al., 2015; Kiritchenko and Mohammad, 2016, 2017) to compute each system's score as the difference between the percentage of times it was selected as best , and the percentage of times it was selected as worst (Orme, 2009).",
"This is inline with prior work on product review summarization (Brazin-skas et al., 2020b,a).",
"As for our third dimension, recall that we would like our system to generate diverse summaries across different products, a notion that we denoted as CP-Diversity.",
"Lacking an existing metric, we use our previously defined SPR-1/2/L measure on the set of final (top-ranked) summaries across all test set products.",
"4 While we did not find evidence suggesting a domain shift, it is an aspect we leave for further investigation in future work.",
"Table 1 reports results for all 3 dimensions.",
"For the Amazon dataset (top table), we observe that PASS outperforms the baselines in coherence and CP-Diversity while keeping a comparable informativeness to the next best system, T5-FT .",
"The only exception being ROUGE-2 in which T5-FT outperforms PASS which could be explained by the somewhat longer summaries it generates.",
"Interestingly, in CP-Diversity, the performance of PASS is closer to human performance than to CopyCat and FewSum but there's still room to make the summaries even more diverse.",
"For the sake of completeness and following previous work (Chu and Liu, 2019; Brazinskas et al., 2020b,a) we report results on business reviews from the Yelp dataset in the bottom of Table",
"1. Recall that our key goals were to avoid generating summaries containing crude coherence (CE) and self-consistency (SCE) errors (see Table 3 for examples of such errors).",
"In order to evaluate these directly, both authors independently marked each of the summaries generated by FewSum , T5-FT and PASS for the Amazon test set as having a crude error or not, for both types of errors.",
"Table 2 reports the ratios of crude errors per system, considering cases where at least one annotator (I) and both annotators (II) marked as crude.",
"We measured the level of agreement between the two annotators by calculating Cohen's Kappa coefficients (Cohen, 1960) for each annotation task, which resulted in CE = 0 .",
"571 and SCE = 0 .",
"779 .",
"Finally, for a qualitative impression we provide in Table 4 an example of the systems' outputs for a product from the Amazon test set.",
"In this work we highlight two shortcomings of existing product reviews summarization systems, namely low CP-Diversity and self-inconsistency.",
"We propose the SPR metric to quantify cross prod-Tights.",
"These tights are very comfortable and durable.",
"They can be worn with ballet slippers or sandals.",
"The color is beautiful and the fabric is soft.",
"They will last a long time.",
"They are great for transitioning from ballet to ballet.",
"Purse.",
"This purse is not as cute as it looks in the picture.",
"It is very small and will not hold a lot of stuff.",
"It would be a great purse if it was a little bigger but it would have been nice to have a purse that would hold more than one purse.",
"Protein Bar.",
"These bars are a great snack bar.",
"They taste good and have a good amount of protein.",
"They do not have a lot of protein in them so they are not as sweet as some protein bars, but for the price, they are well worth it.",
"Tank Top.",
"This tank top is well made, fits well, and is comfortable to wear.",
"The only thing is that it runs a little small, so order a size up from what you normally wear.",
"Other than that, it's a great top.",
"It's well made and it looks like it will last a long time.",
"Love it!",
"uct similarity of summaries and demonstrate that indeed, humans summaries are far more diverse than system generated summaries.",
"To overcome this issue we rely on stronger pre-trained models such as the recent T5 model which significantly improves the CP-Diversity.",
"However, the second problem still remains and even intensifies as without the safety net of generic content, the risk of incoherent or even self-contradicting text is substantial.",
"To this end, we propose the Perturb and Select summarizer ( PASS ).",
"In the first step, PASS applies systematic perturbations to the input texts in a way that allows the T5 model to generate multiple summary candidates that sufficiently differ from one another.",
"Given such a set of diverse summaries, PASS applies a trained ranker to smartly select a promising candidate in terms of coherence.",
"Finally, we show that the resulting PASS system, outperforms SOTA models in the domain of product reviews in terms of informativeness, CP-Diversity and coherence.",
"When comparing to a fine-tuned T5 model PASS outperforms it in coherence and CP-Diversity, while maintaining comparable performance for informativeness.",
"PASS.",
"These Reeboks are great for supporting a high arch and are lightweight and comfortable.",
"They come in a variety of colors and sizes, and are ideal for walking or biking.",
"They are also flexible and well made.",
"T5-FT.",
"These Reeboks are a great choice for those with wide feet.",
"They run true to size and the colors are great.",
"They are lightweight and comfortable, yet they are flexible and flexible.",
"They are recommended for people with wide feet.",
"They are also very popular for running and casual wear.",
"FewSum.",
"These running shoes are great!",
"They fit true to size and are very comfortable to run around in.",
"They are light weight and have great support.",
"They run a little on the narrow side, so make sure to order a half size larger than normal.",
"CopyCat.",
"I love these shoes.",
"They are light weight and comfortable to wear.",
"I have worn them for several months now and they are holding up well.",
"I would recommend them to anyone looking for a comfortable shoe.",
"In future work we plan to investigate the Perturb-and-Select framework in order to promote summaries with a plethora of desired linguistic characteristics, other than coherence.",
"We shall further explore ways of extending this framework to employ other input perturbation methods and experiment with scenarios of larger scale input.",
"In addition, we plan to further investigate our proposed SPR evaluation metric for lexical diversity, by studying its correlation with human judgments.",
"Lastly, we believe our proposed framework and evaluation metric may be applicable to other domains of opinion or news summarization.",
"We would like to thank Hila Gonen, Iftah Gamzu and anonymous reviewers, who helped improve the draft with their invaluable comments and insight."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"other",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other"
] |
[
"Due to recent pretrained multilingual representation models, it has become feasible to exploit labeled data from one language to train a cross-lingual model that can then be applied to multiple new languages.",
"In practice, however, we still face the problem of scarce labeled data, leading to subpar results.",
"In this paper, we propose a novel data augmentation strategy for better cross-lingual natural language inference by enriching the data to reflect more diversity in a semantically faithful way.",
"To this end, we propose two methods of training a generative model to induce synthesized examples, and then leverage the resulting data using an adversarial training regimen for more robustness.",
"In a series of detailed experiments, we show that this fruitful combination leads to substantial gains in cross-lingual inference.",
"There is a growing need for NLP systems that support low-resource languages, for which task-specific training data may be lacking, while domain-specific parallel corpora may be too scarce to train a reliable machine translation engine.",
"To overcome this, zero-shot cross-lingual systems can be trained on a source language LS and subsequently also be applied to other languages LT despite a complete lack of labelled training data for those target languages.",
"In the past, such systems typically drew on translation dictionaries, lexical knowledge graphs, or parallel corpora, to build a cross-lingual model that exploits simple connections between words and phrases across different languages (de Melo and Siersdorfer, 2007; Fu et al., 2020).",
"Recently, pretrained language model architectures such as BERT (Devlin et al., 2019) have been shown capable of learning joint multilingual representations with self-supervised objectives under a shared vocabulary, simply by combining the input from multiple languages (Devlin et al., 2019; Artetxe and Schwenk, 2019; Conneau and Lample, 2019; Conneau et al., 2019).",
"Such representations greatly facilitate cross-lingual applications.",
"Still, the success of such cross-lingual transfer hinges on how close the involved languages are, with substantial drops observed for some more distant language pairs (Lauscher et al., 2020).",
"For our study, we focus on natural language inference (NLI), i.e., classifying whether a premise sentence entails, contradicts, or is neutral with regard to a hypothesis sentence (Williams et al., 2017).",
"This is a useful building block for applications involving semantic understanding (Zhu et al., 2018; Reimers and Gurevych, 2019).",
"However, the task is also very challenging, as it not only requires accounting for very subtle differences in meaning but also inferring presuppositions and implications that are not explicitly stated.",
"Due to these intricate subtleties, zero-shot cross-lingual models are often fairly brittle, while obtaining in-language training data is fairly costly.",
"Data Augmentation.",
"To boost the performance of cross-lingual models, an intuitive thought is to draw on unlabeled data from the target language so as to enable the model to better account for the specifics of that language, rather than just being fine-tuned on the source language.",
"A natural way of exploiting unlabeled data is to consider standard semi-supervised learning methods that leverage a model's own predictions on unlabeled target language inputs (Dong and de Melo, 2019).",
"However, this strategy fails when the predictions are too noisy to serve as reliable training signals.",
"In this paper, we hence explore data augmentation to circumvent this problem.",
"The idea, widespread in computer vision and speech recognition, is to generate new training data from existing labeled data.",
"For images, a common approach is to apply transformations such as rotation and flipping, as these typically preserve the original label assigned to an image (Krizhevsky et al., 2012).",
"For text, in contrast, data augmentation is more challenging, and straightforward techniques include simple operations on words within the original training sequences, such as synonym replacement, random insertion, random swapping, or random deletion (Wei and Zou, 2019).",
"In practice, however, there are two notable problems.",
"One is that the synthesized data from data augmentation techniques may as well be noisy and unreliable.",
"Second, new examples may diverge from the distribution of the original data.",
"On NLI, these problems are particularly pronounced, as the very nature of this task is to account for subtle differences between sentences.",
"Modified versions of the original sentences may no longer have the same meaning and entailments.",
"Hence, existing data augmentation techniques often fail to boost the result quality.",
"Overview and Contributions.",
"In this paper, we propose a novel data augmentation scheme to synthesize controllable and much less noisy data for cross-lingual NLI.",
"This augmentation consists of two parts.",
"One serves to encourage language adaptation by means of reordering source language words based on word alignments to better cope with typological divergency between languages, denoted as Reorder Augmentation (RA).",
"Another seeks to enrich the set of semantic relationships between a premise and pertinent hypotheses , denoted as Semantic Augmentation (SA).",
"Both are achieved by learning corresponding sequence-to-sequence (Seq2Seq) models.",
"The resulting samples along with their new labels serve as an enriched training set for the final cross-lingual training.",
"During this phase, we invoke a special adversarial training regimen that enables the model to better learn from such automatically induced training samples and transfer more information to the target languages while better bridging the gap between typologically distinct languages.",
"Our empirical study demonstrates the necessity of incorporating adversarial training into training with synthetic samples and the superiority of our new augmentation method on cross-lingual Natural Language Inference (Conneau et al., 2018).",
"Remarkably, our cross-lingual approach even outperforms in-language supervised learning.",
"Our proposed method consists of two steps.",
"The first involves inducing training examples with two data augmentation models.",
"Next, a task-specific classifier is trained on both the original and the newly generated training instances, with adversarial perturbation for improved robustness and generalization.",
"Reorder augmentation is based on the intuition of making a model more robust with respect to differences in word order typology.",
"If our training examples consist entirely of instances from a language LS with a fairly strict subjectverbobject (SVO) word order such as English, the model will be less well equipped to pay attention to subtle semantic differences between sentences from a target language LT obeying subjectobjectverb (SOV) order.",
"To alleviate this problem, we can rely on auxiliary data to diversify the training data.",
"For this, we obtain word alignments for unannotated bilingual parallel sentence pairs covering LS and an auxiliary language LA that need not be the same as LT .",
"We then reorder all source sentences to match the word order of LA based on the alignments, and train a model to apply such reordering on the NLI training instances.",
"Formally, suppose we have obtained l unlabelled parallel sentences in the source language LS and in the auxiliary language LA , C = { ( (cid:104) s i , a i (cid:105) | i = 1 , ..., l } , where (cid:104) s, a (cid:105) is a sourceauxiliary language sentence pair.",
"Based on a word alignment model, in our case FastAlign (Dyer et al., 2013), which uses Expectation Maximization to compute the lexical translation probabilities, we obtain a word pair table for each sentence pair (cid:104) s, t (cid:105) , denoted as A ( s, a ) = { ( i 1 , j 1 ) , ..., ( i m , j m ) } .",
"Following the word order of LA , we then reorder the source sequence s by consulting the table A ( s, t ) , yielding the new sentence pair (cid:104) s, s (cid:105) .",
"Next, we consider a pretrained Seq2Seq model, denoted as r ( ; ) .",
"The model is assumed to have been pretrained with an encoder and a decoder in the source language, and we fine-tune this generative model by training on the new parallel corpus C = { ( (cid:104) s i , s i (cid:105) | i = 1 , ..., l } .",
"This generative Seq2Seq model can then reorder the sequences in the labeled training dataset D = { ( x i , y i ) | i = 1 , ..., n } , where n is the number of labeled instances, each x i consists of a sequence pair (cid:104) s 1 , s 2 (cid:105) , and each y i Y is the corresponding ground truth label describing their relationship.",
"Our second augmentation strategy involves training a controllable model that, given a sentence and a label describing the desired relationship, seeks to emit a second sentence that stands in said relationship to the input sentence.",
"Thus, given an existing training sentence pair, we can consider different variations of one sentence in the pair and invoke the model to generate a suitable second sentence.",
"However, such automatically induced samples from SA are inordinately noisy, precluding their immediate use as training data, so we exploit a large pretrained Teacher model trained on available source language samples to rectify the labels of these synthetic samples with appropriate strategies.",
"Generation.",
"As we wish to be able to control the label of a generated example, the requested label is prepended to the input as a (textual) prefix before it is fed into a Seq2Seq model.",
"We adopt the ground-truth label of each example as the respective prefix, resulting in a new input sequence ( y i : s 1 ) coupled with s 2 as the desired output forming a training pair for the generation model.",
"Given the resulting labeled training dataset DSA , we can fine-tune a pretrained Seq2Seq model, denoted as g ( ; ) .",
"This generative Seq2Seq model can then be invoked for semantic data augmentation to generate new training instances.",
"For each ( y : s 1 ) as a labeled input sequence, where y Y \\ { y i } , we generate an s 2 via the fine-tuned Seq2Seq model, yielding a new training instance ( (cid:104) s 1 , s 2 (cid:105) , y ) .",
"Label Rectification.",
"The semantic augmentation induces s 2 automatically based on s 1 and the requested label y .",
"However, the obtained s 2 may not always genuinely have the desired relationship y to s 1 .",
"Thus, we treat this data as inherently noisy and propose a rectifying scheme based on a Teacher model.",
"We wish for this Teacher to be as accurate as possible, so we start off with a large pretrained language model specifically for the source language LS , which we assume obtains a better performance on LS than a pretrained multilingual model.",
"We train the Teacher network h ( ; ) in K epochs using the set of original labeled data D .",
"This teacher model is then invoked to verify and potentially rectify labels from the automatically induced augmentation data D a = { ( x i , y i ) | i = 1 , ..., m } obtained in the previous step (where m is the number of instances).",
"We assume ( y i , c ) = h ( x i ; ) denotes the predicted label along with the confidence score c [0 , 1] emitted by the classifier, and assume a confidence threshold T has been predetermined.",
"There are several strategies to determine the final labels.",
"Teacher Strategy: We adopt D r = { ( x i , y i ) | ( x i , y i ) D a , ( y i , c ) = h ( x i ) , c > T } , i.e., when the confidence score is above T , we believe the Teacher model is sufficiently confident to ensure a reliable label, while other instances are discarded.",
"TR Strategy: An alternative scheme is to instead adopt D r = { ( x i , ( y i , y i , c )) | ( x i , y i ) D a , ( y i , c ) = h ( x i ) } , where ( y i , y i , c ) = (cid:40) y i c > T y i otherwise",
"Here, labels remain unchanged when Teacher predictions match the originally requested labels.",
"In case of an inconsistency, we adopt the Teacher model's label if it is sufficiently confident, and otherwise retain the requested label.",
"Upon completing the two kinds of data augmentation, we possess synthesized data that is substantially less noisy, denoted as D r , which can be incorporated into the original training data D to yield the final augmented training set D a = D D r .",
"With this, we proceed to train a new model f ( ; ) for the final cross-lingual sentence pair classification.",
"As a special training regimen, we adopt adversarial training, which seeks to minimize the maximal loss incurred by label-preserving adversarial perturbations (Szegedy et al., 2014; Goodfellow et al., 2015), thereby promising to make the model more robust.",
"Nonetheless, the gains observed from it in practice have been somewhat limited in both monolingual and cross-lingual settings.",
"We conjecture that this is because it has previously merely been invoked as an additional form of monolingual regularization (Miyato et al., 2017).",
"In contrast, we hypothesize that adversarial training is particularly productive in a cross-lingual framework when used to exploit augmented data, as it encourages the model to be more robust towards the divergence among similar words and word orders in different languages and to better adapt to the new modestly noisy data.",
"This hypothesis is later confirmed in our experimental results.",
"Adversarial training is based on the notion of finding optimal parameters to make the model robust against any perturbation r within a norm ball on a continuous multilingual (sub-)word embedding space.",
"Hence, the loss function becomes: L adv ( x i , y i ) = L ( f ( x i + r adv ( x i , y i ); ) , y i ) (1) where r adv ( x i , y i ) = argmax r , || r || (cid:15) L ( f ( x i + r ; ) , y i ) Generally, a closed form for the optimal perturbation r adv ( x i , y i ) cannot be obtained for deep neural networks.",
"Goodfellow et al. (2015) proposed approximating this worst case perturbation by linearizing f ( x i ; ) around x i .",
"With a linear approximation and an L 2 norm constraint in Equation 2, the adversarial perturbation is r adv ( x i , y i ) (cid:15) g ( x i , y i ) || g ( x i , y i ) || 2 (2) where g ( x i , y i ) = x i L ( f ( x i ; ) , y i ) .",
"However, neural networks are typically not linear even over a relatively small region, so this approximation cannot guarantee to achieve the best optimal point within the bound.",
"Madry et al. (2017) demonstrated that projected gradient descent (PGD) allows us to find a better perturbation r adv ( x i , y i ) .",
"In particular, for the norm ball constraint || r || (cid:15) , given a point r 0 , || r || (cid:15) aims to find a perturbation r that is closest to r 0 as follows: || r || (cid:15) ( r 0 ) = argmin || r || (cid:15) || r r 0 || (3) To find more optimal points, K -step PGD is needed during training, which requires K forward backward passes through the network.",
"With a linear approximation and an L 2 norm constraint, PGD takes the following step in each iteration: r t +1 = || r || (cid:15) (cid:18) r t + g ( x i , y i , r t ) || g ( x i , y i , r t ) || 2 ) (cid:19) (4) where g ( x i , y i , r t ) = r t L ( f ( x i + r t ; ) , y i ) Here, is the step size and t is the step index.",
"Tasks and Datasets.",
"For evaluation, we used XNLI (Conneau et al., 2018), the most prominent cross-lingual Natural Language Inference corpus, which extends the MultiNLI dataset (Williams et al., 2017) to 15 languages.",
"In our experiments, we considered 20k training data, i.e., 5% of the original training size to study lower-resource settings requiring augmentation.",
"Following previous work, we consider English as the source language in our experiments.",
"Model Details.",
"To show that our reorder augmentation strategy does not require auxiliary data from a low-resource target language, we only give it access to parallel data for another closely related high-resource language.",
"Specifically, we use the EnglishGerman bilingual parallel corpus from JW300 (Agic and Vulic, 2019).",
"Like English, German commonly adopts an SVO word order, but in some instances also mandates SOV and is generally less rigid than English.",
"This allows us to demonstrate the utility of reorder augmentation even in the absence of data from a language similar to the target language.",
"We relied on FastAlign 1 to induce 200k training pairs for Seq2Seq fine-tuning on reordering.",
"As the pre-trained Seq2Seq model, we used Google's T5-base (Raffel et al., 2020), a unified text-to-text Transformer, to generate new training examples.",
"During generation, we set the beam size as 1 and use sampling instead of greedy decoding.",
"For the Teacher model in semantic augmentation, we relied on RoBERTa-Large (Liu et al., 2019), a robustly optimized BERT model, to fine-tune NLI on English.",
"As the multilingual model, we employ XLM-RoBERTa-base (XLM-R) (Conneau et al., 2019), trained on over 100 different languages.",
"For PGD, the step size , norm constraint size (cid:15) , and number of steps K are 1.0, 3.0, 3, respectively.",
"All hyperparameter tuning is conducted based on the 1 https://github.com/clab/fast align Table 1: Hyper-parameters for pretrained models.",
"accuracy on the English validation set.",
"The Teacher strategy for XNLI then is used for the rectification of semantically augmented texts, as inference requires particularly clean data.",
"The threshold T for this is 0.8.",
"An overview of the basic network parameter values is given in Table 1.",
"We rely on early stopping as a termination criterion.",
"For all NLI classification results, we randomly repeat each experiment 5 times and report the averaged accuracy.",
"Cross-lingual Inference Classification.",
"Table 2 compares our approach against several strong baselines on XNLI.",
"The first part considers in-language supervised learning, where we relied on genuine training data from the target language rather than a cross-lingual setting.",
"These results are merely provided for comparison.",
"The second part considers zero-shot cross-lingual transfer, i.e., the setting we are targeting in this paper: We first used English training data to train the XLM-R model and then applied it to non-English languages without any training data in the target language.",
"We also trained the model with PGD adversarial training to assess how well PGD works without any data augmentation.",
"Next, we evaluate XLM-R when trained on original and augmented examples from several augmentation methods, with and without adversarial training, respectively.",
"The first of these is Easy Augmentation (EA) by Wei and Zou (2019), a state-of-the-art method for data augmentation in NLP.",
"It mixes 4 strategies, namely synonym replacement, random insertion, random swapping, and random deletion, applying each of these to 20% of words in a sentence.",
"Additionally, we consider our proposed RA and SA strategies, as well as combinations of EA or RA with SA.",
"Compared with vanilla XLM-R without adversarial training, XLM-R with PGD works better across a range of non-English languages, which shows the effectiveness of adversarial training for more robustness in cross-lingual settings.",
"We observe that XLM-R, when trained with EA or RA, outperform the setting without augmentation for English and some non-English languages, though it does not achieve sufficiently stronger results in terms of the average accuracy across different languages.",
"This suggests that XLM-R struggles to benefit from the augmented instances from RA for better generalizability.",
"In contrast, when trained with SA, XLM-R performs better than without SA examples for most languages, confirming that our semantic augmentation is beneficial.",
"Remarkably, XLM-R with SA examples even succeeds at outperforming in-language training with an average absolute improvement of about 1.1% in accuracy, suggesting that cross-lingual models trained with automatically generated English examples can be more informative with regard to inference than target language examples.",
"2 Next, we also observe that the accuracy of XLM-R with additional examples from EA, RA, SA is boosted with PGD.",
"This suggests that adversarial training is particularly useful to boost generalizability and robustness when operating on artificial augmented examples.",
"Beyond this, our full zero-shot approach further outperforms all baselines across 14 languages, including in-language training.",
"This demonstrates the value of improving generalizability and robustness by adding diverse forms of augmentation in an adversarial training framework that can cope with noisy examples.",
"Comparisons on Different Rectifying Strategies.",
"One key part of our method is the label rectification mechanism.",
"We compare different rectification strategies in Table 3.",
"The results show that the Teacher and TR methods introduced in Section 2.1.2 yield fairly similar results.",
"This confirms the robustness of our approach with regard to the choice of strategy.",
"The same also holds for an additional option, Agreement , which retains only those examples on which the prediction from the Teacher agrees with the originally requested label.",
"Finally, for comparison, we evaluated yet another strategy, Requested , which always adopts the originally requested labels as chosen for generation.",
"We find that this strategy introduces overly many unreliable labels, so the model is unable to work well.",
"This confirms that rectifying labels with a Teacher model is a crucial ingredient.",
"Comparisons on Adversarial Perturbations.",
"For assessing the value of PGD for adversarial per-2 Note that the in-language training data in XNLI was created using machine translation.",
"turbation, Table 4 compares PGD with the standard Fast Gradient Method (FGM) for adversarial perturbation (Goodfellow et al., 2015) as introduced in Section 2.2.",
"We ran experiments on XNLI with 10k and 20k training data, each augmented with 80k induced semantic examples.",
"We observe that FGM obtains a lower average accuracy than PGD with the same amount of training data, confirming the superiority of PGD in providing better adversarial perturbations than FGM to improve both generalization and robustness.",
"Effectiveness on Different Training Sizes.",
"Data augmentation is an important approach to deal with scarce labels.",
"The results in Table 4 further show that when fine-tuning T5 using 10k XNLI training instances with 80k semantic and 10k reorder augmented examples, we obtain substantially better results than when using 20k training instances without augmentation.",
"We can also observe the improvement of XLM-R with RA, SA, and adversarial training over vanilla XLM-R on each language as plotted in Figure 2.",
"The relative gains with 10k training data are larger than with 20k training data across a range of languages, which shows that our method is consistently most beneficial when training data is scarce.",
"Influence of Amount of Augmentation.",
"To assess the role of the amount of data augmentation, we conducted experiments on XNLI with 20k training examples, and evaluated the effect of adding either 20k or 80k augmented examples from EA, RA, SA.",
"The results are given in Table 5.",
"When trained without PGD, one can often benefit from using up to 80k augmented examples.",
"Due to the inherent reordering differences between English and German, there are limits regarding the amount of such data one ought to incorporate.",
"We find that 20k instances from RA can suffice.",
"We observe Table 4: Accuracy (in %) on XNLI experiments with different amounts of training and augmentation data, and different adversarial training methods.",
"that EA with PGD requires up to 80k augmented instances, i.e., 3 times the size of the original training data, to outperform XLM-R with PGD, whereas only 20k augmented examples suffice for RA with PGD to beat XLM-R with PGD.",
"Case Studies.",
"To better illustrate the principles of our data augmentation technique, we provide several examples.",
"Table 6 shows two examples of the three data augmentation processes on XNLI.",
"For the first example, the original label is contradiction , so entailment and neutral serve as requested labels to generate new training text.",
"Next, our Teacher model attempts to rectify these labels.",
"Although our generative model treats Vrenna and I fought him in a fight, but he had just gotten us as neutral to S 1 ( Vrenna and I both fought him and he nearly took us ), the Teacher model changes the label to entailment .",
"For the second example, both the generative and Teacher model are unable to conclude that The rice ripens in the summer is contradictory with the premise.",
"From the two EA outputs, we can observe him is randomly deleted in Example (1) and the and rice is swapped in Example (2), which loses some information, whereas RA Seq2Seq generated examples maintain all crucial information despite the reordering.",
"Data Augmentation.",
"Data augmentation is a promising technique, especially when dealing with scarce data, imbalanced data, or semi-supervised learning problems.",
"Back-translation (Sennrich et al., 2015) has been considered as a technique to obtain alternative examples preserving the original semantics, by translating an existing example in language LA into another language LB and then translating it back into LA to obtain an augmented example.",
"Yu et al. (2018) and Xie et al. (2020) applied it to question answering and semi-supervised monolingual training scenarios.",
"However, this requires high-quality translation engines that often do not exist in the settings in which one wishes to apply cross-lingual systems.",
"Wei and Zou (2019) instead combined synonym replacement, random insertion, random swapping, and random deletion in a method named EDA.",
"Since insertion and deletion may affect the semantics of the utterance, some studies opt to control Table 6: Examples of XNLI data augmentation.",
"the selection of words to be replaced with indicators such as TF-IDF scores (Xie et al., 2020).",
"Fadaee et al. (2017) use contextualized word embeddings to replace the target word.",
"Kobayashi (2018) proposed a bi-directional language-model-based augmentation method, and Wu et al. (2019) further improved its results by switching to BERT.",
"Another major category is text generation based augmentation.",
"Anaby-Tavor et al. (2020) proposed a language model based data augmentation method, shown to improve classifier performance on a variety of English datasets.",
"It relies on GPT-2 (Radford et al., 2018) to generate a single new sequence in each instance.",
"Our work, in contrast, presents a novel augmentation scheme designed to cope with the special challenges of sentence pair classification, where a Seq2Seq Transformer enables augmentation based on a paired input sentence.",
"Our method also introduces a Teacher model to rectify labels.",
"Apart from this, we expand the idea of language model based augmentation to cross-lingual settings and leverage noisy instances with adversarial training.",
"Adversarial Training.",
"Many approaches for improving the robustness of a machine learning system against adversarial perturbations (Szegedy et al., 2014) have been advanced.",
"Goodfellow et al. (2015) proposed a fast gradient method based on linear perturbation of non-linear models.",
"Later, Madry et al. (2017) presented PGD-based adversarial training through multiple projected gradient ascent steps to adversarially maximize the loss.",
"In NLP, Belinkov and Bisk (2017) exploited structure-invariant word manipulation and robust training on noisy texts for improved robustness.",
"Iyyer et al. (2018) proposed syntactically controlled paraphrase networks with back-translated data and used them to generate adversarial examples.",
"Adversarial training also plays a role in improving a neural model's generalization.",
"For instance, Cheng et al. (2019) used adversarial source examples to improve a translation model.",
"Dong et al. (2020) exploit FGM-based adversarial training in self-learning for improved cross-lingual text classification.",
"In our setting, we count on adversarial training in the word embedding space and show that PGD-based adversarial training remains effective when the adversarial perturbation is applied to noisy augmented examples.",
"While multilingual pretrained model have enabled better cross-lingual learning, we still often encounter data scarcity issues due to the high cost of collecting data, which weakens the generalization ability of the multilingual model.",
"To address this, this paper proposes a novel data augmentation strategy with label rectification to build synthetic examples, outperforming even models trained with larger amounts of ground-truth data.",
"We show that we can best learn from such noisy instances with adversarial training, which enables the classifier to transfer more information from the source language to other languages and to become more robust.",
"Remarkably, with this, our models trained without any target language training data at all are able to outperform models trained fully on in-language training data.",
"Moreover, the amount of augmented data from our Seq2Seq-based reorder augmentation used in training is much less than that required by the state-of-the-art EDA method in order to achieve comparable performance.",
"Finally, in our series of follow-up experiments comparing different training regimens and variants, one notable finding is that our overall augmented approach can even outperform non-augmented supervision with twice as many ground truth labels.",
"Overall, this suggests our combination of data augmentation with adversarial training as a valuable way of learning substantially more accurate and more robust models without any target-language training data.",
"Research on cross-lingual NLP is often motivated by a desire to provide state-of-the-art advances to linguistic communities that have been underserved.",
"Such advances may enable better access to information as well as to products and services.",
"However, there is a risk that such technological advances may not always be desired by the relevant communities and may indeed also cause harm to them (Bird, 2020).",
"Moreover, cross-lingual systems in particular may exhibit biases with regard to the source language used for training and the general cultural assumptions reflected in such data.",
"In light of this, special care needs to be taken to analyze potential outcomes and risks before deploying cross-lingual systems in real-world applications."
] | [
"abstain",
"result",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"result",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Multilayer architectures are currently the gold standard for large-scale neural machine translation.",
"Existing works have explored some methods for understanding the hidden representations, however, they have not sought to improve the translation quality rationally according to their understanding.",
"Towards understanding for performance improvement, we first artificially construct a sequence of nested relative tasks and measure the feature generalization ability of the learned hidden representation over these tasks.",
"Based on our understanding, we then propose to regularize the layer-wise representations with all tree-induced tasks.",
"To overcome the computational bottleneck resulting from the large number of regularization terms, we design efficient approximation methods by selecting a few coarse-to-fine tasks for regularization.",
"Extensive experiments on two widely-used datasets demonstrate the proposed methods only lead to small extra overheads in training but no additional overheads in testing, and achieve consistent improvements (up to +1.3 BLEU) compared to the state-of-the-art translation model.",
"Neural machine translation (NMT) has witnessed great successes in recent years (Bahdanau et al., 2014; Wu et al., 2016).",
"Current state-of-the-art (SOTA) NMT models are mainly constructed by a stacked neural architecture consisting of multiple hidden layers from bottom-up, where a classifier is built upon the topmost layer to solve the target task of translation (Gehring et al., 2017; Vaswani et al., 2017).",
"Most works tend to focus on the translation performance of the classifier defined on the topmost layer, however, they do not deeply understand the learned representations of hidden layers.",
"Shi et al. (2016) and Belinkov et al. (2017) attempt Conghui Zhu is the corresponding author.",
"to understand the hidden representations through the lens of a few linguistic tasks, while Ding et al. (2017) and Strobelt et al. (2018) propose appealing visualization approaches to understand NMT models including the representation of hidden layers.",
"However, employing the analyses to motivate new methods for better translation, the ultimate goal of understanding NMT, is not achieved in these works.",
"In our paper, we aim at understanding the hidden representation of NMT from an alternative viewpoint, and particularly we propose simple yet effective methods to improve the translation performance based on our understanding.",
"We start from a fundamental question: what are the characteristics of the hidden representation for better translation modeling?",
"Inspired by the lessons from transfer learning (Yosinski et al., 2014), we propose to empirically verify the argument: good hidden representation for a target task should be able to generalize well across any similar tasks.",
"Unlike Shi et al. (2016) and Belinkov et al. (2017) who employ one or two linguistic tasks involving human annotated data to evaluate the feature generalization ability of the hidden representation, which might make understanding bias to a specific task, we instead construct a nested sequence of many relative tasks with entailment structure induced by a hierarchical clustering tree over the output label space (target vocabulary).",
"Each task is defined as predicting the cluster of the next token according to a given source sentence and its translation prefix.",
"Similar to Yu et al. (2018), Zamir et al. (2018) and Belinkov et al. (2017), we measure the feature generalization ability of the hidden representation regarding each task.",
"Our observations are ( 2): The hidden representations learned by NMT indeed has decent feature generalization ability for the tree-induced relative tasks compared to the randomly initialized NMT model and a strong baseline with lexical features.",
"The hidden representations from the higher layers generalize better across tasks than those from the lower layers.",
"And more similar tasks have closer performances.",
"Based on the above findings, we decide to regularize and improve the hidden representations of NMT for better predictive performances regarding those relative tasks, in hope of achieving improved performance in terms of the target translation task.",
"One natural solution is to feed all relative tasks to every hidden layer of the NMT decoder under the framework of multi-task learning.",
"This may make the full coverage of the potential regularization effect.",
"Unfortunately, this vanilla method is inefficient in training because there are more than one hundred task-layer combinations.",
"1 Based on the second finding, to approximate the vanilla method, we instead feed a single relative task to each hidden layer as a regularization auxiliary in a coarse-to-fine manner ( 3.1).",
"Furthermore, we design another regularization criterion to encourage predictive decision consistency between a pair of adjacent hidden layers, which leads to better approximated regularization effect ( 3.2).",
"Our method is simple to implement and efficient for training and testing.",
"Figure 1 illustrates the representation regularization framework.",
"To summarize, our contributions are as follows: We propose an approach to understand hidden representation of multilayer NMT by 1 There are about 22 tasks that we have constructed and 6 layers in SOTA NMT models (Vaswani et al., 2017).",
"measuring their feature generalization ability across relative tasks constructed by a hierarchical clustering tree.",
"We propose two simple yet effective methods to regularize the hidden representation.",
"These two methods serve as trade-offs between regularization coverage and efficiency with respect to the tree-induced tasks.",
"We conduct experiments on two widely used datasets and obtain consistent improvements (up to +1.3 BLEU) over the current SOTA Transformer (Vaswani et al., 2017) model.",
"In this section, we first introduce some background knowledge and notations of the multilayer NMT model.",
"Then, we present a simple approach to better understand hidden representation through transfer learning.",
"By analyzing the feature generalization ability, we draw some constructive conclusions which are used for designing regularization methods in Section 3. 2.1 Background and Notations Suppose x = (cid:104) x 1 , , x | x | (cid:105) is a source sentence, i.e. a sequence of source tokens, and a target sentence y = (cid:104) y 1 , , y | y | (cid:105) is a translation of x, where each y t in y belongs to Y , the target vocabulary.",
"A translation model minimizes the following chain-rule factorized negative log-likelihood loss: (cid:96) mle = log P ( y | x ; ) = (cid:88) t log P ( y t | x , y <t ; ) , (1) where denotes the overall parameter of the translation model.",
"According to",
"Eq.(1), an alternative view of the translation problem can be cast to token-level stepwise classification (Daume et al., 2009): predict the target token y t given a context consisting of x and y <t = (cid:104) y 1 , , y t 1 (cid:105) corresponding to each factor P ( y t | x , y <t ; ) .",
"The SOTA multilayer NMT models parameterize P ( y t | x , y <t ; ) via powerful multilayer encoder and stacked layers of feature transformations (cid:104) h 1 , , h L (cid:105) at the decoder side: P ( y t | x , y <t ; ) = P ( y t | x , y <t , h L ; ) , (2) where h l ( x , y <t ) = l (cid:0) x , y <t ; h l 1 ; (cid:1) is the l th hidden layer recursively defined by l on h l 1 .",
"We also use h l ( x , y <t ) to represent the output hidden representation of layer l for a specific context.",
"Note that, l bears several types of instantiation and is an active area of research (Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017).",
"Inspired by feature transfer learning (Yosinski et al., 2014), we attempt to understand hidden representations of NMT by evaluating their generalization abilities across any tasks that are related to translation.",
"There are some researchers who study hidden representations of NMT by using linguistic tasks such as morphology, named entity, part-or-speech or syntax (Shi et al., 2016; Belinkov et al., 2017, 2018).",
"They typically rely on human annotated resources to train a model for each linguistic task, so their methods can not be used for languages which lack human annotations.",
"Moreover, their considered tasks are too few to have a good coverage over task space for measuring transferability (Yu et al., 2018; Zamir et al., 2018), and their understanding results may bias to a specific task.",
"As a result, to evaluate the feature generalization ability of hidden representation, we artificially construct plenty of relative tasks which do not employ any human annotation.",
"This makes our evaluation approach more general.",
"Definition of the relative tasks Suppose Y k denotes any partition (or clustering) regarding the output label space (target vocabulary) Y .",
"That is, Y k is a set of subsets Y ki Y where i = 1 ... |Y k | , such that i, j, Y ki Y kj = and i Y ki = Y .",
"We define the following relative task: given a context (cid:104) x , y <t (cid:105) , predict the subset or the cluster to which the t th token y t belongs in Y k , denoted as Y k ( y t ) .",
"To simplify notation, we regard Y k both as a relative task and as a partition.",
"It is clear that the above type of tasks are similar to the task of translation according to the description in Section 2.1.",
"Furthermore, different k represents different relative task and thus we actually obtain a great many relative tasks in total.",
"However, it is impossible to evaluate the hidden representation on all those tasks; moreover, due to relationship between tokens (Hu et al., 2016) in Y , not all partitions are reasonable.",
"As a consequence, motivated by the analysis of VC Dimension (Vapnik, 1995), we construct a sequence of nested partitions with an entailment structure: 2 Y 1 (cid:22) (cid:22) YK .",
"The benefit is that a spectrum of task hardness can be constructed due to the increased partition or task cardinalities.",
"As a matter of fact, we instantiate the above nested partitions through brown clustering (Brown et al., 1992; Stratos et al., 2014) over Y to get a hierarchical clustering tree and then consider each tree depth along the tree as a partition representing a relative task Y k (as shown in Figure 1).",
"In the following experiments, we run brown clustering algorithm over a Ch En dataset ( 4) and construct a tree of English with depth 21.",
"Without loss of generality, we regard the task Y 22 at a virtual 22 depth of the tree as equivalent to the translation task Y .",
"Actually, Y and Y 22 have the same cardinality but are different in definition.",
"3 Evaluating generalization We use multi-class logistic regression to fit the layer-wise hidden representation learned by a well-trained 6-layer Transformer (Vaswani et al., 2017) over each training instance (cid:104) x , y <t (cid:105) .",
"Specifically, given a context (cid:104) x , y <t (cid:105) , for each task Y k and a hidden representation h l ( x , y <t ) of this context, which is fixed as constant, we predict the cluster Y k ( y t ) according to the following probability: P (cid:0) Y k ( y t ) | h l ( x , y <t ); l Y k (cid:1) , (3) where l Y k is the parameter of the logistic regression model for task Y k at l th layer.",
"The difference between",
"Eq.(3) and",
"Eq.(2) is that the former is the linear model parameterized by l Y k while the latter is the NMT model parameterized by .",
"Since there are L = 6 layers in Transformer's decoder and K = 22 relative tasks, we have more than one hundred such linear models defined with Eq.",
"(3) in total.",
"Therefore, it is costly to train them independently.",
"Since the loss for each linear model is convex, joint training leads to exactly the same optimum as independent training and thus we employ mini-batch stochastic gradient descent to minimize the joint loss as follows: (cid:88) k (cid:88) l (cid:88) t log P (cid:0) Y k ( y t ) | h l ( x , y <t ); l Y k (cid:1) .",
"2 Here what we mean entailment relation ( (cid:22) ) between two partitions Y k and Y k +1 is: Y k +1 i , !",
"Y kj , s.t. Y k +1 i Y kj .",
"3 Please refer to Appendix A for detailed preprocessing of the tree to get nested partitions.",
"After training, we fix each l Y k and then measure the feature generalization ability of each h l by validating on the task Y k regarding a heldout dataset, following Yu et al. (2018).",
"For validation, we report accuracy on a heldout dataset through the strategy of maximum a posteriori (MAP).",
"4 Analysis To figure out how good the learned hidden representations are, we consider two baselines to extract features regarding each context (cid:104) x , y <t (cid:105) to train logistic regression models for comparison.",
"For the first baseline, the features of the context are the hidden representations from the last layer of a randomly initialized Transformer; for the second, the features are derived by lexical feature templates, which include the source-side bag-of-words (BOW) features and target-side BOW features indexed by relative positions of y t 's previous with up to m (Markov length) tokens.",
"5 As shown in Figure 2, the lexical baseline delivers comparable accuracies for fine-grained tasks with respect to well learned Transformer's first layer, thanks to its discriminant ability with abundant lexical features.",
"For example, its accuracy reaches about 26% for the task with cardinality |Y 21 | .",
"The random baseline performs worse for tasks with cardinality |Y 8 | , which indicates that random representations in NMT have limited generalization abilities to fine-grained tasks as expected.",
"The well-trained low-layer hidden representations yield much higher accuracies than the random baseline and are even better than the lexical baseline.",
"This shows that the hidden representations from a well-trained NMT have good generalization abilities across relative tasks.",
"In addition, as the layer goes up, the performance of hidden representations increase significantly over differ-4 The accuracy is measured by whether arg max z Y k P ( z | h l ( x , y <t ); l Y k )) = Y k ( y t ) .",
"ent relative tasks, which clearly demonstrates that more complex neural architecture leads to stronger expressibility.",
"This provides a quantitative evidence to support the statement in Bengio et al. (2009), Goodfellow et al. (2016).",
"In this section, we propose two simple methods, which respect the above findings, to enhance the hidden representations in NMT such that they generalize well across those relative tasks.",
"A natural method to improve feature generalization of hidden representation is to jointly train the target task with all relative tasks for all hidden layers, which we call full-coverage method.",
"As mentioned in Section 2.2, this method will lead to training more than one hundred tasks ( K L ) in total, where K denotes the depth of the hierarchical clustering tree (aka. the number of tasks) and L the number of hidden layers.",
"Unfortunately, since each task involves a softmax operation which may be the computation bottleneck for the task Y k with large cardinality, this method is inefficient for training.",
"As a solution to approximate the potential regularization effect of the full-coverage method, we confine each hidden layer to engage in a single relative task.",
"Motivated by the observation that representations from higher layers have better expressibility than lower layers, as claimed in 2.2, we instead employ a coarse-to-fine strategy to select one task for each layer: finer-grained tasks for higher layers while coarser-grained task for lower layers.",
"Specifically, suppose 1 s ( l ) K is the selected index regarding task Y s ( l ) for the l th layer, then it subjects to s ( l ) < s ( l + 1) for each l .",
"In addition, to encourage the diversity among the selected L tasks, we require s ( l + 1) s ( l ) to be large enough for all l .",
"Formally, the loss of the hierarchical regularization (HR) method is: (cid:96) hr = (cid:88) l (cid:88) t log P ( Y s ( l ) ( y t ) | x , y <t , h l ; , l Y s ( l ) ) , (5) where P ( Y s ( l ) ( y t ) | x j , y j <t , h l ; , l Y s ( l ) ) is similar to Eq.",
"3 except that it treats the parameters in NMT as parameters besides l Y s ( l ) .",
"Compared to Eq.",
"4, it includes fewer terms for summation.",
"The HR method is very simple and computationally efficient, however, using one task to regularize a layer may not be a good approximation of the full-coverage method, since HR method might lead to inconsistent decisions for two different layers, which is formalized through the following entailment structure as introduced in Section 2.2:",
"where s ( l ) is the selected task for the l th layer by HR, 1 l 1 < l 2 L and P ( z | x , y <t , h l ; , l Y s ( l ) ) is similar to Eq.",
"(3) for the task Y s ( l ) and l th layer except that it does not treat the NMT parameters as constant.",
"However, it always occurs on the training data that Y s ( l 1 ) ( y t ) Y s ( l 2 ) ( y t ) .",
"To alleviate this inconsistency issue for better approximating the full-coverage method, we leverage the above structural property by adding another regularization term.",
"Firstly, we project the distribution P ( | x , y <t , h l ; , l Y s ( l ) ) into the domain of Y s ( l 1) .",
"Then we calculate KL divergence between the projected distribution and P ( | x , y <t , h l 1 ; , l 1 Y s ( l 1) ) .",
"Figure 3 illustrates the idea.",
"Since it is inefficient to consider all pairs of l 1 and l 2 , so we instead consider the consistency between all adjacent layers.",
"Formally, we obtain Method # Param.",
"(cid:96) shr = (cid:96) hr + 1 L 1 (cid:88) l KL (cid:18) P ( | x , y <t , h l ; , l Y s ( l ) ) || PROJ (cid:2) P ( | x , y <t , h l +1 ; , l +1 Y s ( l +1) ) (cid:3)(cid:19) , (7)",
"where PROJ is the projection defined in Figure 3, and other notations are defined as before.",
"We call the above regularization as structural hierarchical regularization (SHR) since it takes advantage of the structure of the tree.",
"In our experiments, we add HR",
"(Eq.(5)) and SHR",
"(Eq.(7)) losses respectively into the negative log-likelihood regarding Eq.",
"(1) for training all parameters and l Y s ( l ) .",
"One of our advantage is that we only use for testing and thus our testing is as efficient as that for the baseline NMT model.",
"We conduct experiments on two widely-used corpora.",
"We choose from the LDC corpora about 1.8M sentence pairs for Zh En translation with word-level vocabulary of 30k for both languages.",
"We use the WMT14 En De task which consists 4.5M sentence pairs and the vocabulary is built by joint BPE with 32k merging operations.",
"Besides the baseline, we also conduct experiments on 3 regularization variants: Baseline : the Transformer base model proposed in Vaswani et al. (2017).",
"FHR : fine-grained HR based Transformer, which adopts the original label space as task for all selected layers for regularization.",
"This variant is used to demonstrate that low layers which are weak in expressibility can mess up hard tasks which are unsuitable to learn.",
"HR and SHR : as proposed in Section 3. Method MT02 MT03 MT04 MT05 MT06 MT08 Avg.",
"Choice of relative tasks Based on the heuristics in Section 3.1, we first choose the task with the largest cardinality from the hierarchical clustering tree without the virtual depth, because this task is most related to translation (close cardinal-ities).",
"Then we balance task diversity through a 5 times cardinality difference between tasks from the previous chosen task.",
"As a result, we can obtain 4 tasks with s ( l ) = 5 , 8 , 11 , 20 for the Zh En task and s ( l ) = 5 , 7 , 10 , 21 for the En De task, where l = 2 , 3 , 4 , 5 of the 6-layer decoder.",
"6 4.2 Efficiency Comparison Table 1 summarizes the total number of parameters for the baseline and 3 regularization variants.",
"As in Eq.",
"(5), HR introduces extra parameters compared with the baseline.",
"Besides, calculating the second term in Eq.",
"(7) requires modest overheads.",
"Therefore, training our SHR is slower than training the baseline.",
"Although the proposed HR and SHR introduce extra parameters during training, they do not involve them during testing and thus testing is as efficient as the baseline.",
"Table 2 shows the evaluation results of the baseline and 3 regularization variants on the Zh En dataset.",
"Since there are no recent work reporting Transformer's performance on this dataset, we choose a recurrent SOTA model to show that our baseline is already better than it, which is a common knowledge that Transformer can outperform recurrent NMT models.",
"Our HR method surpasses the baseline 0.6 BLEU point, while the SHR method can improve upon HR by about a further 0.8 point, namely about 1.4 points over the baseline.",
"Interestingly, the FHR method only performs on par with baseline, which indicates that forcing low layers to learn fine-grained tasks will not lead to beneficial intermediate representations since they struggle to learn a well-structured rep-6 Please refer to Appendix C for detailed information.",
"resentation space.",
"This matches the finding in Section 2: low layers may not be expressible enough to perform well on tasks with large cardinalities.",
"In the following, we conduct several quantitative experiments to demonstrate the advantages of our proposed two regularization methods over the baseline.",
"Note that, since we need to guarantee that the decoded sequence has the same length with the reference for one-by-one token comparison, the following experiments are all conducted with teacher forcing and greedy decoding.",
"4.4.1 Better Feature Generalization Ability In the same manner as Section 2, we learn softmax weights for all relative tasks by fixing model weights learned by HR and SHR methods.",
"Figure",
"4(a),",
"(b) show the feature generalization ability (absolute accuracy difference) of HR and SHR over baseline.",
"Since layer 1 is not selected as the regularized layer, no significant gap is observed.",
"However, since layer 1 is close to the loss directly imposed on layer 2, improvements about 5% and 8% are obtained.",
"Since in the baseline, layer 5, 6 are already close or with the ultimate fine-grained loss, HR method shows very small gain.",
"But our SHR method can still improve about 4% absolute points.",
"Except for layer 1, it is also evident to see larger gaps (more than 20%) at lower layers than higher layers due to the fact that lower layers, which are distant from the topmost loss in the baseline, require more supervision signals to shape their latent representation space.",
"We measure decision consistency for a specific layer and decision consistency between a pair of layers using two metrics.",
"The first metric is measured by conditional accuracy, which is the possibilities of the classifier parameterized by l Y k correctly predicting Y k ( y t ) if the classifier parameterized by l Y k (cid:48) correctly predicts Y k (cid:48) ( y t ) for any",
"k (cid:48) < k .",
"The second metric is measured by the counts of consistent decision pairs between any pair of regularized layers as defined in Eq.",
"(6).",
"Figure",
"4(c),",
"(d) shows the absolute conditional accuracy difference of our HR and SHR over baseline.",
"In accordance with the observations in previous subsection, except for layer 1, other layers show significant gains (HR more than 7%, SHR more than 10%) over baseline.",
"Decision consistency for each layer proves the well-shaped layer-wise representation and potentially paves the way for better inter-layer decision consistency.",
"Figure 5 illustrates the consistency counts between any regularized layer pairs, including those without KL-based regularization.",
"Deeper color represents more consistency counts.",
"It is evident that the baseline has a very poor consistency between any layers.",
"Our HR method is almost 2 times better, and the SHR obtains further improvement.",
"A better decision consistency can couple the decision between relative tasks, so that by reaching a high accuracy on easier tasks can benefit the harder ones.",
"Another interesting observation is that non-adjacent layers without KL loss also obtain significant improvements on decision consistency, because the KL term is actually transitive between layers where the predictive distributions are in accordance with the tree structure.",
"In this subsection, we clarify that the coarse-to-fine regularized representations can also benefit low-frequency words.",
"We divide the vocabulary into ten equally-sized bins, and summarize token accuracy for each bin over the development set.",
"As shown in Figure 6, the x-axis represents the frequency spectra, that is, we sort the bins by word frequency from rank 1 (the most frequent words) to 10 (the rare words).",
"We can see that both HR and SHR methods demonstrate a gradually increased gap over the baseline as the word frequency decreases, which means our methods become better for less frequent word bins.",
"However the gap shrinks at the 10 th bin.",
"This may be the fact that for those words that appears with less than 50 counts, both methods are helpless.",
"For baseline, it is hard to train well-shaped hidden representations for low-frequent words; in addition, due to the distance between the loss and the low layers, it is also hard to train weights due to the unstable gradient signal.",
"By adding our regularization terms, every level of the multilayer decoder will receive supervision signals directly and lower layers will receive coarser grained thus higher frequency signals to shape their representations.",
"Table 3 shows the evaluation results of the baseline and the 3 regularization variants on the En De dataset.",
"Notice that we use the base model while Chen et al. (2018) and Ott et al. (2018) use big models.",
"The FHR method still does not show significant improvement over the baseline (less than 0.2 BLEU point), which verifies the hypothesis that we make by analyzing the Zh En results.",
"Our HR method is already stronger than Chen et al. (2018) which uses a multilayer RNN as decoder.",
"Compared to the current state-of-the-art in Ott et al. (2018) who utilize huge batch size and over 100 GPUs on the Transformer big model, our SHR method can be on par with them.",
"This comparison indicates that better regularized hidden representations can be potentially powerful than increasing model capacity when using the same optimization method.",
"in the learned hidden representations.",
"Shi et al. (2016) are the first to investigate source syntax encoded in source hidden representations.",
"Similarly, Belinkov et al. (2017) and Belinkov et al. (2018) give detailed analyses of both encoder and decoder's learned knowledge about part-of-speech and semantic tags at different layers.",
"Unlike those works that employ one or two linguistic tasks, we instead construct plenty of artificial tasks without any human annotations to analyze the hidden representations.",
"This makes our approach more general and may potentially lead to less biased conclusions.",
"Based on our understanding of the hidden representations, we further develop simple methods to improve NMT through representation regularization.",
"Many works regularize NMT with lexical knowledge such as BOW (Weng et al., 2017) and morphology (Niehues and Cho, 2017; Zaremoodi et al., 2018), or syntactic knowledge (Kiperwasser and Ballesteros, 2018; Eriguchi et al., 2017).",
"One significant difference is that we take into account the structure among plenty of artificial tasks and design a well motivated regularization term to encourage the structural consistency of tasks, which further improves NMT performance.",
"In addition, our coarse-to-fine way to select tasks for regularization is also inspired by recent works using a coarse-to-fine mechanism for learning better word embeddings in NMT (Zhang et al., 2018) and predicting intermediate solutions for semantic parsing (Dong and Lapata, 2018).",
"In this work, we present a simple approach for better understanding NMT learned layer-wise representations with transfer learning over plenty of artificially constructed relative tasks.",
"This approach is general as it requires no human annotated data, only demanding target monolingual corpus.",
"Based on our understanding, we propose two efficient yet effective methods for representation regularization which further pushes forward the SOTA NMT performances.",
"In the future, we want to dig deeply into the subspace regularities of the learned representations for more fine-grained understanding.",
"The authors would like to first thank all the anonymous reviewers for their critical suggestion and valuable experimental advice.",
"The authors would also like to thank Yong Jiang for discussion; Shangchen Zhou for better figure design; Haozhe Xie, Chaoqun Duan, Xin Li, Ziyi Dou and Mengzhou Xia for proof reading.",
"Tiejun Zhao is supported by National Key RD Program of China Project 2017YFB1002102."
] | [
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"other",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Existing work in multilingual pretraining has demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.",
"However, much of this work only relies on the shared vocabulary and bilingual contexts to encourage the correlation across languages, which is loose and implicit for aligning the contextual representations between languages.",
"In this paper, we plug a cross-attention module into the Transformer encoder to explicitly build the interdependence between languages.",
"It can effectively avoid the degeneration of predicting masked words only conditioned on the context in its own language.",
"More importantly, when fine-tuning on downstream tasks, the cross-attention module can be plugged in or out on-demand, thus naturally benefiting a wider range of cross-lingual tasks, from language understanding to generation.",
"As a result, the proposed cross-lingual model delivers new state-of-the-art results on various cross-lingual understanding tasks of the XTREME benchmark, covering text classification, sequence labeling, question answering, and sentence retrieval.",
"For cross-lingual generation tasks, it also outperforms all existing cross-lingual models and state-of-the-art Transformer variants on WMT14 English-to-German and English-to-French translation datasets, with gains of up to 1 2 BLEU.",
"1 1 Introduction Cross-lingual pre-trained models like mBERT (De-vlin et al., 2019), XLM (Lample and Conneau, 2019) and XLM-R (Conneau et al., 2019) that target providing contextualized representations for the inputs across languages, have shown large poten-* Equal contribution.",
"chrome-extension://mbniclmhobmnbdlbpiphghaielnnpgdp/screenshot.html?id=screenshot_0.0007918474100581108 1/1",
"Behind the great success, two major factors play the role of aligning the contextual representations between languages: 1) build the shared vocabulary across languages through subword tokenization, which supports the simple extension of masked language modeling (MLM) from English corpus to multilingual corpus; 2) capture the alignment in parallel data via concatenating two sentences as input, called translation language modeling (TLM).",
"However, both of these two mechanisms rely on the self-attention module (query=key/value) of the Transformer encoder to implicitly enhance the interdependence between languages, which may lead to few attention patterns across languages.",
"Taking Figure 1 as an example, even though inputting a pair of parallel sentences, both models only attend to the English context to build the representation of English tokens, while ignoring the se-b) Translation Language Modeling (TLM)",
"mantically related Chinese tokens.",
"That is, the self-attention module captures little communication across languages, which is crucial for learning universal cross-lingual representations.",
"Based on the above observation, we propose to plug a cross-attention module (query!=key/value) into the Transformer encoder and design a cross-attention MLM task to explicitly capture the interdependence between languages.",
"As illustrated in Figure 2",
"(c), the cross-attention module takes the representation of x as query and y as key/value (purple lines) to build the representations of x in the next layer, thus explicitly aligning the representations across languages (purple attention ma-trices).",
"It can effectively avoid the degeneration of predicting masked words only conditioned on the context in its own language.",
"Moreover, what distinguishes our work from pre-training an encoder-decoder model (Liu et al., 2020b) is that we also keep the good nature (i.e., bidirectional contextual modeling) of the original encoder by unplugging the cross-attention from the model to predicting the masked words (e.g., x 2 and y 3 ).",
"Furthermore, when fine-tuning on various downstream tasks, we can choose either plug-in or plug-out the cross-attention module on-demand, thus making it suitable for both cross-lingual language understanding (NLU) and generation tasks (NLG).",
"For cross-lingual NLU tasks, if plugging the cross-attention module out, we can adopt the same fine-tuning methods as an encoder-only model like XLM.",
"However, we find that plugging the cross-attention module in fine-tuning can better utilize the bilingual context to boost the performance.",
"For cross-lingual NLG like machine translation (MT), the cross attention is already jointly pre-trained with the whole network.",
"Therefore, the parameters of the decoder do not need to be re-adjusted substantially in the following tuning process, thus fundamentally solving the main drawback of utilizing pre-trained encoders like XLM for initializing encoder-decoder models.",
"We call our approach VECO for Variable and Flexible Cross-lingual Pre-training.",
"We validate VECO on a variety of representative cross-lingual understanding and generation benchmarks.",
"Regrading cross-lingual understanding tasks, we conduct experiments on the XTREME benchmark consisting of 9 cross-lingual tasks, including text classification, sequence labeling, question answering, and sentence retrieval.",
"VECO ranks first at the XTREME leaderboard 2 at the submission deadline.",
"Regrading cross-lingual generation tasks, we validate VECO on the widely used WMT14 English-German and English-French machine translation benchmarks.",
"VECO obtains 44.5 and 31.7 BLEU scores, consistently outperforming existing cross-lingual pre-training approaches and state-of-the-art Transformer variants by around 1 2 BLEU.",
"2 https://sites.research.google/xtreme 2 Pre-training of VECO 2.1 Overview of VECOVECO extends from a multi-layer Transformer encoder and plugs a cross-attention module in each layer.",
"Given a pair of input ( x , y ) and its corrupted version ( x , y ) via randomly masking part of its tokens, the model builds two types of contextualized vector representation for each token: One suit of contextual representations H , denoted as green blocks and yellow blocks in Figure 2",
"(c), are only build on self-attention module (i.e., unpluging the cross-attention module) in each layer.",
"Another suit of contextual representations S , denoted as mixed color blocks in Figure 2",
"(c), are build on both the self-attention and cross-attention modules 3 .",
"The model is trained to predict the masked tokens via two corresponding representations, conditioning on both its own context and paired context, respectively.",
"Take predicting the masked words in sequence x as an example, the training objective is the cross-entropy of the gold distribution and predicted distribution P ( x | x ) and P ( x | y , x ) computed via the above two suits of contextual representations.",
"Thus, the training objective of cross-attention masked language modeling (CA-MLM) can be formulated as L ( x , y ) = log P ( x | x ; s ) log P ( x | y , x ; s , c ) log P ( y | y ; s ) log P ( y | x , y ; s , c ) (1) where s and c are the parameters of self-attention and cross-attention modules.",
"The backbone network of VECO is composed of a stack of N Transformer layers.",
"Each layer has three modules: a required self-attention module, a plug-and-play cross-attention module, and a required feed-forward linear module.",
"Both self-attention and cross-attention modules are based on the multihead attention (Vaswani et al., 2017).",
"An attention function can be described as mapping a query ( Q ) and a set of key-value ( K V ) pairs to an output.",
"For the self-attention module, all the queries, keys and values are the same representations from the previous layer.",
"Specifically, for the l -th Transformer layer, the output of a self-attention head A sl is computed via: Q = H l 1 W Ql (2) K = H l 1 W Kl (3) V = H l 1 W Vl (4) A sl = softmax( QKT d k ) V (5) where H l 1 are the previous layer's outputs, W Ql , W Kl , W Vl are the parameter matrices of self-attention modules.",
"For the cross-attention module, the queries come from the previous layer, and the keys and values come from the last layer's representations of paired input.",
"Specifically, for the l -th layer, the output of a cross-attention head A cl is computed via: Q = S l 1 U Ql (6) K = HLU Kl (7) V = HLU Vl (8) A cl = softmax( QKT d k ) V (9) where S l 1 are the previous layer's outputs, U Ql , U Kl , U Vl are the parameter matrices of cross-attention modules.",
"Finally, the output HL of the last layer is used to recover the masked tokens of x , conditioning on its own context.",
"P ( x | x ) = softmax( f ( H Lx )) (10) P ( y | y ) = softmax( f ( H Ly )) (11) where f is the feed-forward network that maps the output vectors into the dictionary.",
"H Lx and H Ly are computed via Eq 2 5 when H 0 x and H 0 y are the word embeddings of x and y , respectively.",
"Meanwhile, SL , conditioning on the context of the paired sequence x and y , is used to predict the masked tokens of y .",
"where S Lx and S Ly are computed via Eq 6 9 with the corresponding word embeddings and HL .",
"12 Figure 3: The overview of VECO .",
"During pre-training, a plug-and-play cross-attention module is jointly pre-trained along with the self-attention module.",
"When fine-tuning on natural language understanding (NLU) tasks, the cross-attention module can be either plug-in or plug-out on demand.",
"When fine-tuning on natural language generation (NLG) tasks, VECO can initialize an encoder-decoder module (the mainstream backbone model of generation tasks) since all those necessary modules in the encoder and decoder are already pre-trained.",
"Note that when optimizing the objectives based on Eq 12 and Eq 13, we apply a stop-gradients operation (Chen and He, 2020) to HL (i.e., HL is treated as a constant in this term).",
"This operation can largely speed up the training by avoiding the backpropagation on a 2 L -layer network.",
"Moreover, it even stabilizes the training of deep post-layernorm Transformer, which requires non-trivial efforts regarding carefully designing learning rate schedulers and cutting-edge optimizers (Liu et al., 2020a; Bachlechner et al., 2020).",
"As Figure 3 illustrated, when fine-tuning on various downstream tasks, one advantage of VECO is its flexibility for initializing both the encoder-only Transformer for understanding tasks and encoder-decoder Transformer for generation tasks.",
"Beyond it, we also explore a fine-tuning approach combined with the characteristics of VECO .",
"Due to the plug-and-play cross-attention module, we explore two fine-tuning approaches:",
"Plug-Out fine-tuning is to unplug the cross-attention module from the pre-trained model.",
"In other words, the architecture of the fine-tuned model is almost the same as mBERT or XLM.",
"Specifically, the contextual representations from the last layer H Lx is used to predict the label of input x .",
"the bilingual or automatically translated training data y is available in the downstream task.",
"Specifically, we concatenated the two representations [ HL x : SL x ] to predict the label of x , [ H Ly : S Ly ] to predict the label of y .",
"4 .",
"For pre-trained encoders like XLM, it is not a trivial problem to incorporate them into the sequence-to-sequence architecture the mainstream backbone model of generation tasks (Zhu et al., 2020).",
"One of the drawbacks or challenges could be that the encoder-to-decoder attention is not pre-trained.",
"Therefore, the parameters of the decoder need to be re-adjusted along with the encoder in the following fine-tuning process (Ren et al., 2019).",
"However, under the framework of VECO , the cross-attention is jointly pre-trained along with the whole network, making it easy to provide full initialization for sequence-to-sequence models.",
"Specifically, the self-attention module is used to initialize both the corresponding modules in the encoder and decoder for contextual modeling, while the cross-attention module is used to initialize the encoder-to-decoder attention.",
"It's okay whether you continue to tie the self-attention parameters during fine-tuning.",
"Directly pre-training a sequence-to-sequence model like mBART (Liu et al., 2020b) could be another solution for NLG tasks, but we found mBART is not so effective in cross-lingual NLU tasks.",
"We refer the reader to the Section 7 for detailed experiments and analysis.",
"4 Plug-In fine-tuning is not suitable for the zero-shot setting (also called cross-lingual transfer ) due to the lack of bilingual or translated pair ( x , y ) Model Architecture #Parameters Enc Layers Dec Layers #Languages #Vocab Training Data mBERT (Devlin et al., 2019) Encoder-only 110M 12 104 110k Wikipedia XLM (Lample and Conneau, 2019) Encoder-only 570M 24 100 200k Wikipedia XLM-R (Conneau et al., 2019) Encoder-only 550M 24 100 250k CommonCrawl mRASP (Lin et al., 2020) Encoder-decoder 375M 6 6 32 64k Translation MMTE (Siddhant et al., 2020) Encoder-decoder 375M 6 6 103 64k Translation mBART (Liu et al., 2020b) Encoder-decoder 680M 12 12 25 250k CommonCrawl VECO Flexible 662M 24* 50 250k CommonCrawl + Translation Table 1: Comparison of large cross-lingual models.",
"Model Configuration We pre-train a 24-layer model with 1024 embedding/hidden size and 4096 feed-forward size.",
"We do not use language embeddings to allow our model to better deal with downstream tasks of unseen languages.",
"We adopt the same 250K vocabulary that is also used by XLM-R (Conneau et al., 2019).",
"Table 1 shows the other details of baselines and VECO .",
"Pre-Training Data We collect monolingual and bilingual corpus covering 50 languages.",
"For monolingual training datasets, we reconstruct CommonCrawl Corpus used in XLM-R (Conneau et al., 2019).",
"We extract 1.36TB data in 50 languages, which contains 6.5G sentences and 0.4G documents.",
"We up/down-sample the monolingual text like XLM from each language with a smoothing parameter = 0 .",
"5 .",
"For bilingual data, we collect from the OPUS website 5 like previous works (Lample and Conneau, 2019; Chi et al., 2020b).",
"There are 6.4G parallel sentences, covering 879 language pairs across 50 languages.",
"See more statistics of training data in Appendix A. Optimization Settings For each iteration, we alternately sample a batch of adjacent segments from the monolingual corpus and a batch of parallel sentences from bilingual datasets to conduct a pair of masked input ( x , y ) .",
"We adopt the translation language modeling (TLM) when the inputs are parallel bilingual sentences.",
"Thus the overall training objective is the sum of TLM and the proposed CA-MLM objectives.",
"During training, the model parameters except for cross-attention are initialized by XLM-R.",
"We first freeze the parameters of XLM-R and only update the cross-attention parameters for faster convergence.",
"Then, we jointly train the whole model.",
"We pre-train our model with mixed-precision training using 64 Nvidia Telsa V100 32GB GPUs.",
"Appendix A shows additional details.",
"Downstream Tasks We conduct cross-lingual NLU evaluations on XTREME (Hu et al., 2020), a representative massively multilingual benchmark that consists of 9 understanding tasks over 40 languages.",
"XTREME tasks can be classified into four different categories: (1) sentence-pair classification: XNLI (Conneau et al., 2018), PAWS-X (Yang et al., 2019); (2) structured prediction: POS (Nivre et al., 2018), Wikiann NER (Pan et al., 2017); (3) question answering: XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), TyDiQA (Clark et al., 2020); (4) sentence retrieval: BUCC 2018 (Zweigenbaum et al., 2017), Tatoeba (Artetxe and Schwenk, 2019).",
"Tasks in the first three categories are provided: 1) golden training corpus in English, 2) translated training corpus in other languages, and 3) dev/test set in all languages.",
"For sentence retrieval tasks, no training datasets are provided.",
"We refer the reader to Hu et al. (2020) for additional details about the datasets.",
"Fine-tuning Setting Following previous works (Conneau et al., 2019; Hu et al., 2020), we consider two typical fine-tuning settings: (1) Cross-lingual Transfer which fine-tunes the pre-trained model using English golden data only and directly performs inference on the test data of different target languages; (2) Translate-Train-All fine-tunes a multilingual model on the concatenation of all data (golden training corpus in English and translated training corpus in other languages).",
"Note that for two sequence-labeling tasks (POS, NER), the position of token labels in the translated text generally differs from that in the source text.",
"Following FILTER (Fang et al., 2020), we use the model trained only on the English training dataset as a teacher, to label the translated text.",
"To have a fair comparison with the strong baseline XLM-R (Conneau et al., 2019) Datasets XNLI PAWS-X POS NER XQuAD MLQA TyDiQA BUCC Tatoeba # Languages 15 7 33 40 11 7 9 5 33 Metrics Acc Acc F1 F1 F1/EM F1/EM F1/EM F1 Acc Avg.",
"The detailed test results of nine tasks on the XTREME benchmark are shown in Table",
"2. It demonstrates that the proposed VECO outperforms previous cross-lingual models on all datasets.",
"Compared to XLM-R, it averagely scores 5.0 and 6.6 points higher under the cross-lingual transfer and translation-train-all settings, respectively.",
"In the cross-lingual transfer setting, VECO delivers a large improvement compared to XLM-R, especially on zero-shot sentence retrieval tasks (BUCC, Tatoeba).",
"This phenomenon reflects that our model can better build the interdependence between languages.",
"Thus it can better mine parallel sentences in a multilingual corpus.",
"Under the translation-train-all setting, it can be observed that VECO with Plug-In fine-tuning (VECO in ) is better than Plug-Out fine-tuning (VECO out ).",
"We conclude the reasons as two-fold.",
"On the input side, the Plug-Out fine-tuning individually takes multilingual instances as input, while the Plug-In fine-tuning considers the bilingual instances 6 at each run.",
"On the model side, the Plug-In fine-tuning can encourage correspondence across language via the cross-attention module.",
"Note that the Plug-In fine-tuning method also outperforms FILTER (Fang et al., 2020), an enhanced cross-lingual fine-tuning method that also takes the 6 English instance with its translated one.",
"bilingual instance as the input of XLM-R.",
"It further demonstrates the effectiveness of VECO and its specialized fine-tuning method.",
"We conclude the reasons for the above performance improvement as two-fold: 1) the introduction of bilingual data during pre-training, which is a direct way to enhance the cross-lingual ability of the model; 2) Stronger ability to enhance the interdependence and fusion among languages via the proposed CA-MLM pre-training tasks.",
"To analyze which plays a leading role, we conduct a set of more fair experiments in Section 7.",
"Datasets We choose the machine translation (MT) task, a typical cross-lingual generation scenario.",
"In order to illustrate the generality of our approach and have a fair comparison with the most recent state-of-the-art Transformer work (Liu et al., 2020a), we choose two most widely used datasets: WMT14 English German (En-De) and English French (En-Fr) translation.",
"WMT14 EnDe is a medium-resource dataset that provides 4.5M pairs for training and validation.",
"We adopt standard newstest2014 as the test set.",
"WMT14 En-Fr is a high-resource dataset that contains 36M pairs of parallel sentences.",
"We use new-stest2012+newstest2013 for validation and new-stest2016 for test.",
"We measure case-insensitive tokenized BLEU with multi-bleu.perl and de-Model WMT14 En-Fr WMT14 En-De BLEU SacreBLEU BLEU SacreBLEU Randomly Initialize Baseline 42.9 40.4 28.7 27.8 Liu et al. (2020a) 43.8 41.8 30.1 29.5 Randomly Initialize + More Bilingual Data * Baseline* -30.6 29.5 Cross-lingual Model Initialize mBART 43.2 41.0 30.0 29.1 mRASP 44.3 41.7 30.3 XLM-R 43.8 41.2 30.9 29.9 VECO 44.5 42.0 31.7 30.6 10 15 20 25 30 35 Epochs 25 26 27 28 29 30 s a c r e BLEUVECO Init.",
"tokenized SacreBLEU 7 to avoid the influence of different tokenization and normalization between models (Post, 2018).",
"Fine-tuning Setting We fine-tune our model using fairseq 8 toolkit and adopt comparable training settings with baselines.",
"We run WMT 14 EnDe and En-Fr MT experiments on 16 and 32 V100 GPUs, respectively.",
"The batch size is 64k for EnDe and 256k for En-Fr.",
"The total training updates are set to 100k.",
"The learning rate is 1e-4/2e-4, with linear warm-up over the first 16k steps and linear decay.",
"We average the last 10 checkpoints and use beam search with a beam size of",
"5. Baselines We consider two types of Transformer baselines: randomly initialized and cross-lingual models initialized.",
"For random initialization, we reproduce a Transformer baseline that adopts the same architecture and fine-tuning hyperparameters as VECO but with random initialization.",
"Besides, we compare to the state-of-the-art Deep Transformer (Liu et al., 2020a).",
"For cross-lingual encoder-decoder models, we include mBART (Liu et al., 2020b) and mRASP (Lin et al., 2020), which show impressive results on MT. Note that since we tied the self-attention weights of each encoder layer with each decoder layer, the whole parameters of mBART and VECO are comparable.",
"We also conduct the WMT experiments for XLM-R, following the totally same fine-tuning settings as VECO , but leaving the encoder-to-decoder attention un-initialized.",
"Table 3 (left) shows the results on the machine translation.",
"We can observe that VECO can largely outperform the randomly initialized same-sized Transformer baseline by 2.3 BLEU points.",
"Moreover, it even beats the (randomly initialized) state-of-the-art Deep-Transformer (Liu et al., 2020a), which is three times deep as VECO .",
"Among the cross-lingual models, VECO can consistently outperform the best models, averaged on two datasets, by 0.8 BLEU points.",
"Table 3 (right) displays the BLEU scores of same-sized models during training.",
"We find that VECO initialized model can get a surprising more than 28 SacreBLEU score just after 10 epochs, which is better than the final score of the randomly initialized model at 35 epochs.",
"It reveals that VECO can provide a fairly good initialization for the machine translation model, which can converge quickly and further boost the results.",
"One might suspect that the main reason for the performance improvement is leveraging parallel corpus during pre-training.",
"To figure it out, we conduct a more comparable experiment.",
"We first train an out-of-domain Transformer model using the whole En-De parallel data ( 68M) used in VECO pre-training, and then continue to train the model on the in-domain WMT14 En-De training dataset.",
"Results are shown in Table 3 (left) marked with *.",
"Under this set of a totally fair comparison, VECO still maintains a lead of 1.1 BLEU score.",
"This directly confirms that the improvement in MT is not only due to the use of bilingual data.",
"More importantly, CA-MLM ensures better use of bilingual and large-scale unlabeled multilingual corpus.",
"Online translation applications usually have a restriction of inference time.",
"The most direct way is to reduce the decoder layers since previous MT works (Liu et al., 2020a) have shown that deeper encoders are more worthwhile than deeper decoders.",
"Based on this, we also explore the potential of the VECO to initialize deep encoder and shallow decoder Transformers, which is a blank in the cross-lingual pre-training works.",
"Table 4 contrasts two ways of initializing a Transformer with n decoder layers ( n < 24) via selecting: (1) the first n layers; (2) the last n layers from a 24-layer pre-trained VECO model.",
"We consider n = { 3 , 6 } to conduct experiments.",
"We find that selecting the last n layers exhibits better performance than selecting the first n layers.",
"It reveals that the last several layers play a more important role in making predictions over the whole vocabulary.",
"Moreover, we can find that there is 0.2 0.3 BLEU gain when increasing the decoder layers from 3 to",
"6. However, we observe that only marginal improvement can be gained when further increasing the decoder layers from 6 to 24, which is also in line with the findings in Liu et al. (2020a).",
"Regardless of the initialization method, the VECO initialized model can gain consistent 1 2 BLEU improvement over the randomly initialized model.",
"We perform an ablation study to investigate where the improvement in cross-lingual NLU and NLG tasks mainly comes from.",
"Specifically, there are three main aspects we have studied:",
"1. How much performance improvement comes from the parallel translation corpus used in pre-training?",
"2. How effective of the CA-MLM pre-training Data Models Tasks XNLI IWSLT Mono.",
"task, especially compared to the MLM and TLM pre-training tasks?",
"3. How about pre-training a sequence-to-sequence model like mBART for NLU and NLG tasks?",
"To figure out these questions, we train XLM, mBART and VECO model from scratch using the same datasets and parameter settings (see Appendix A for more details).",
"All of them is pre-trained via MLM and TLM tasks.",
"Note that the MLM task generally refers to predict the masked words of source language, while the TLM task generally refers to predict the words of the target language.",
"Specifi-cally for mBART that is under the framework of encoder-decoder, the input of encoder is masked sequence x , and the target of decoder is the masked words of source input x (for MLM task), or the parallel sentence y (for TLM task).",
"Table 5 shows the results of two representative datasets of cross-lingual NLU and NLG.",
"We can observe that, when using monolingual corpus only, VECO can outperform XLM by 0.8 points on the XNLI dataset and 0.3 BLEU scores on the IWSLT14 De-En translation dataset.",
"It suggests that the CA-MLM can still benefit from adjacent sentences in monolingual corpus 9 , to be equipped with a stronger ability of contextual modeling.",
"Moreover, when pre-training both on the monolingual and bilingual corpus, VECO can even achieve a larger improvement compared to XLM, with 3.2 and 2.1 points improvement on two datasets, respectively.",
"It reveals that CA-MLM objective of VECO can better utilize the bilingual corpus, compared to only optimized by TLM and MLM of XLM.",
"Moreover, we find that pre-training a sequence-to-sequence model like mBART (Liu et al., 2020b) 9 As noted in Section 4, we take two adjacent sentences in the monolingual corpus as ( x , y ) .",
"performs worst on NLU tasks like XNLI 10 , almost 6 points worse than VECO and near 2 points worse than XLM.",
"One possible explanation could be that the unidirectional language modeling in the decoder might be sub-optimal for NLU tasks.",
"And even on the machine translation task, mBART still performs worse than VECO when pre-training on the same bilingual datasets.",
"We conclude that it is because that VECO can do better in the contextual modeling of source input x via a explicit masked language modeling objective in Eq 10 applied to x 2 in Figure 2",
"(c).",
"mBERT (Devlin et al., 2019) is a key step towards building a unified contextual language representation over multiple languages.",
"It simply shares all languages' vocabulary and trains a bidirectional Transformer encoder, achieving promising results in various cross-lingual NLU tasks.",
"There have been several extensions that follow the same encoder-only backbone as mBERT.",
"The main difference is the introduction of more training corpus (e.g., bilingual data) and pre-training tasks.",
"XLM (Lample and Conneau, 2019) utilizes both monolingual and bilingual corpus to perform the masked language modeling.",
"XLM-R (Conneau et al., 2019) extends to be built on RoBERTa (Liu et al., 2019) using larger monolingual training data.",
"Other works (Huang et al., 2019; Yang et al., 2020; Chi et al., 2020b) propose new pre-training tasks to utilize the bilingual data better.",
"However, there are two main drawbacks of these works.",
"First, they mainly rely on the self-attention module in the Transformer encoder to implicitly build the interdependence between languages, leading to few attention patterns across languages due to the lazy network.",
"Second, even though they show impressive performance improvement on cross-lingual understanding tasks like XNLI, only marginal improvement has been gained on cross-lingual generation tasks like machine translation, especially on high-resource languages.",
"A feasible solution for cross-language generation is to pre-train a denoising auto-encoder like mBART (Liu et al., 2020b).",
"It extends BART (Lewis et al., 2019) to the multilingual setting, demonstrating significant gains in low/medium-resource machine translation, but 10 We follow BART (Lewis et al., 2019) by utilizing the final representation from the decoder for classification tasks.",
"with a decrease in high resource languages.",
"Unlike mBART, Chi et al. (2020a) first trains an encoder via MLM and then frozen the encoder to train the decoder only via two generative tasks.",
"A similar approach is also proposed in Liang et al. (2020) and Lin et al. (2020), with the main difference in the joint training of encoder-decoder with code-switch tricks.",
"However, all these cross-lingual models emphasize training a dedicated model for NLG.",
"Thus they may hurt the NLU capabilities of the model.",
"The ablation study in Section 7 also validates that it is sub-optimal to train an encoder-encoder network for NLU tasks.",
"This paper endeavors to build a unified cross-lingual model for NLU and NLG tasks via a plug-and-play cross-attention module.",
"More importantly, the cross-attention module plays a role in the explicit alignment of encoded representations of different languages, thus largely contributing to building a unified cross-lingual model.",
"We present VECO , a variable and flexible cross-lingual pre-training model, targets at explicitly capturing the interdependence between languages via a plug-and-play cross-attention module.",
"Based on the flexible characteristics, VECO can initialize both NLU preferred encoder-only and NLG specialized encoder-decoder Transformer.",
"Moreover, we also introduce a Plug-In fine-tuning approach to encourage the fusion between languages, combining the feature of VECO and cross-language downstream tasks.",
"Taken together, VECO achieves consistent improvements on various language understanding and generation tasks, broadening the way of thinking about pre-trained backbone architecture and fine-tuning methods under the cross-lingual scenario."
] | [
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"method",
"abstain"
] |
[
"Complex Word Identification (CWI) is the task of identifying which words or phrases in a sentence are difficult to understand by a target audience.",
"The latest CWI Shared Task released data for two settings: monolingual (i.e. train and test in the same language) and cross-lingual (i.e. test in a language not seen during training).",
"The best monolingual models relied on language-dependent features, which do not generalise in the cross-lingual setting, while the best cross-lingual model used neural networks with multi-task learning.",
"In this paper, we present monolingual and cross-lingual CWI models that perform as well as (or better than) most models submitted to the latest CWI Shared Task.",
"We show that carefully selected features and simple learning models can achieve state-of-the-art performance, and result in strong baselines for future development in this area.",
"Finally, we discuss how inconsistencies in the annotation of the data can explain some of the results obtained.",
"Complex Word Identification (CWI) consists of deciding which words (or phrases) in a text could be difficult to understand by a specific type of reader.",
"In this work, we follow the CWI Shared Tasks (Paetzold and Specia, 2016; Yimam et al., 2018) and assume that a target word or multi-word expression (MWE 1 ) in a sentence is given, and our goal is to determine if it is complex or not (an example is shown in Table 1).",
"Under this setting, CWI is normally treated using supervised learning and feature engineering to build monolingual models (Paetzold and Specia, 2016; Yimam et al., 2018).",
"Unfortunately, this approach is infeasible for languages with scarce resources of annotated 1 We consider n-grams with n 2 as MWEs, while Yimam et al. (2018) used n 3 .",
"data.",
"In this paper, we are interested in both monolingual and cross-lingual CWI; in the latter, we build models to make predictions for languages not seen during training.",
"While monolingual CWI has been studied extensively (see a survey in Paetzold and Specia (2017)), the cross-lingual setup of the task was introduced only recently by Yimam et al. (2017b), who collected human annotations from native and non-native speakers of Spanish and German, and integrated them with similar data previously produced for three English domains (Yimam et al., 2017a): News, WikiNews and Wikipedia.",
"For the Second CWI Shared Task (Yimam et al., 2018), participants built monolingual models using the datasets previously described, and also tested their cross-lingual capabilities on newly collected French data.",
"In the monolingual track, the best systems for English (Gooding and Kochmar, 2018) differed significantly in terms of feature set size and the model's complexity, to the best systems for German and Spanish (Kajiwara and Komachi, 2018).",
"The latter used Random Forests with eight features, whilst the former used AdaBoost with 5000 estimators or ensemble voting combining AdaBoost and Random Forest classifiers, with about 20 features.",
"In the cross-lingual track, only two teams achieved better scores than the baseline: Kajiwara and Komachi (2018) who used length and frequency based features with Random Forests, and Bingel and Bjerva (2018) who implemented an ensemble of Random Forests and feed-forward neural networks in a multi-task learning architecture.",
"Our approach to CWI differs from previous work in that we begin by building competitive monolingual models, but using the same set of features and learning algorithm across languages.",
"This reduces the possibility of getting high scores due to modelling annotation artifacts present in the dataset of one language.",
"Our monolingual models achieve better scores for Spanish and German than the best systems in the Second CWI Shared Task.",
"After that, we focus on language-independent features, and keep those that achieve good performance in cross-lingual experiments across all possible combinations of languages.",
"This results in a small set of five language-independent features, which achieve a score as high as the top models in the French test set.",
"Finally, we analyse the annotation of the datasets and find some inconsistencies that could explain some of our results.",
"We tackle the binary classification task in the Second CWI Shared Task (Yimam et al., 2018), in which a model decides if a target word/MWE in a sentence is complex or not.",
"Following common practice, we extract features from the target word/MWE and its context, and then use a supervised learning algorithm to train a classifier.",
"For training and testing our models, we use the annotated datasets provided for the Second CWI Shared Task (see Table 2 for some statistics).",
"Our feature set consists of 25 features that can be extracted for all languages considered (English,",
"German, Spanish and French).",
"They can be divided into three broad categories: features based on the target word/MWE, sub-word level features, and sentence-level features to capture information from the target's context.",
"As we intended that our features be applicable across languages, we drew on features found to be useful in previous work on CWI (Yimam et al., 2017b, 2018).",
"We made use of the python libraries spaCy 2 (Honnibal and Mon-tani, 2017) and NLTK 3 (Loper and Bird, 2002).",
"Details on the resources used for extracting each feature can be found in Appendix A. At the target word/MWE level , we experimented with features such as Named Entity (NE) type, part-of-speech, hypernym counts, number of tokens in the target, language-normalised number of characters in each word, and simple unigram probabilities.",
"These features are linguistically motivated.",
"The perceived complexity of a MWE may be higher than that of a single word, as each component word can be complex, or simple component words can be synthesised into a complex whole.",
"Similarly, infrequent words are less familiar, so we would expect low-probability target words to be found more complex.",
"Along these lines, proper nouns could be more complex, as there is a vast number of NEs, and the chance that a person has encountered any one of them is low.",
"We would expect this trend to reverse for the NE type of organisations, in combination with the Enlgish-News dataset, as organisations mentioned in news articles are frequently global, and so the chance that a person has encountered a proper noun that is an organisation is often higher than for proper nouns in general.",
"In total, 14 features were used at the target word/MWE level.",
"Our sub-word level features include prefixes, suffixes, the number of syllables, and the number of complex punctuation marks (i.e. punctuation within the target word/MWE, such as hyphens, that could denote added complexity).",
"We would expect certain affixes to be useful features, as language users use sub-word particles like these to identify unknown words: by breaking up a word like granted into grantand -ed, readers can fall back on their knowledge of these component pieces to clarify the whole.",
"A total of 9 sub-word features were used in the monolingual models.",
"motivations were also considered.",
"Long sentences could be harder to understand, which makes it more difficult to figure out the meaning of unknown words contained within them.",
"Also, long sentences are more likely to include more unknown words or ambiguous references.",
"Therefore, we considered sentence length (i.e., number of tokens in the sentence) as a feature.",
"In addition, we extracted N-grams (unigrams, bigrams and trigrams) from the whole sentence, since certain sentence constructions can help a reader understand the target word/MWE.",
"For example, A of the B suggests a relation between A and B. We used 2 sentence-level features in total.",
"Following Yimam et al. (2018), we used Macro-F1 score to evaluate performance and for comparison with previous work on the datasets.",
"We used Logistic Regression for all our experiments, as it allowed for easy exploration of feature combinations, and in initial experiments we found that it performed better than Random Forests.",
"We evaluated both using the full feature set described before, as well as a two-feature baseline using the number of tokens of the target and its language-normalised number of characters.",
"Results of our monolingual experiments are shown in Table",
"3. Dataset Dev Test BL MA BL MA SotA EN News 83.6 85.5 69.7 86.0 87.4 EN WikiNews 80.4 82.8 65.8 81.6 84.0 EN Wikipedia 74.2 76.6 70.1 76.1 81.2 ES 78.0 77.1 69.6 77.6 77.0 DE 79.5 74.6 72.4 74.8 75.5 Mean 79.1 79.3 69.5 79.2 N/A Table 3: Macro-F1 for the baseline (BL), our monolingual approach (MA), and the state of the art (SotA) on the Dev and Test splits of each dataset.",
"In the test set, our baseline results (BL in Table 3) are strong, especially in German.",
"Our full 25-features model improves on the baseline in all cases, with the biggest increase of over 16 percentage points seen for the EN-News dataset.",
"Our system beats the best performing system from the Shared Task in Spanish (77.0) and German (74.5), both obtained by Kajiwara and Komachi (2018).",
"However, the state of the art for German remains the Shared Task baseline (75.5) (Yimam et al., 2018).",
"The best results for all three English datasets were obtained by Gooding and Kochmar (2018); ours is within two percentage points of their News dataset score.",
"Furthermore, the mean score for our system (79.2) is close to the mean of the best performing models (81.0), which are different systems, while using simpler features and learning algorithm.",
"The best-performing model in English, for example, used Adaboost with 5000 estimators (Gooding and Kochmar, 2018).",
"Linguistically, the cross-lingual approach can be motivated by the relation between certain languages (such as French and Spanish both being Romance languages).",
"In addition, there may be features identifying complex words that are shared even across language families.",
"To be able to test a model on a language that was unseen during training, the features the model works with must be cross-lingual (or language-independent) themselves.",
"For example, the words themselves are unlikely to transfer across languages (apart from those that happen to be spelled identically), but the popularity of the words would transfer.",
"This rules out some of the features we used for the monolingual approach (see Sec. 3.1), as they were language-dependent.",
"One such feature is N-grams for the target word/MWE, which depend on the language, and so will only occur with extreme sparsity outside of their source language.",
"For example, if applying a system trained on English to unseen French, the English phrases `a la mode or film noir might reoccur in the French, since they originate from that language, but these are rare exceptions.",
"What is more, a French loan-phrase may have different complexity characteristics to the same N-grams occurring in their native language.",
"Therefore, we did not use these features in the cross-lingual system.",
"To find out which features were best suited for the cross-lingual approach, we performed an iterative ablation analysis (see Appendix B for de-tails).",
"Using this process, we arrived at our final cross-lingual feature set: number of syllables in the target, number of tokens in the target, number of complex punctuation marks (such as hyphens), sentence length, and unigram probabilities.",
"Furthermore, we analyse the effect of different language combinations on the performance of the cross-lingual model in order to investigate how the relationship between the languages trained and tested on would influence model performance.",
"Recall that we only have training data for English, Spanish and German, but not French.",
"We train models using all possible combinations (each language independently, each pairing, and all three) and evaluate on each of the four languages that have test data (i.e. the former three and French), excluding training combinations that include the test language.",
"Results are shown in Table",
"4. EN ES DE Eval Source Test Dev (cid:88) (cid:88) EN WikiNews 61.8 63.7 (cid:88) EN WikiNews 62.3 63.6 (cid:88) EN WikiNews 61.6 63.8 (cid:88) (cid:88) EN Wikipedia 62.8 64.4 (cid:88) EN Wikipedia 62.6 64.4 (cid:88) EN Wikipedia 63.1 65.2 (cid:88) (cid:88) EN News 67.1 65.6 (cid:88) EN News 67.0 65.6 (cid:88) EN News 67.2 65.9 (cid:88) (cid:88) ES N/A 70.8 71.3 (cid:88) ES N/A 72.6 74.1 (cid:88) ES N/A 69.1 70.0 (cid:88) (cid:88) DE N/A 73.4 78.3 (cid:88) DE N/A 72.6 77.4 (cid:88) DE N/A 73.0 76.0 (cid:88) (cid:88) (cid:88) FR N/A 73.1 N/A (cid:88) (cid:88) FR N/A 75.7 N/A (cid:88) (cid:88) FR N/A 73.4 N/A (cid:88) (cid:88) FR N/A 70.5 N/A (cid:88) FR N/A 75.8 N/A (cid:88) FR N/A 73.4 N/A (cid:88) FR N/A 69.2 N/A Table 4: Comparison of Test and Dev results for all permutations of training languages.",
"When testing on French, we achieved the highest performance by training on German only (75.8), followed closely by training on a combination of German and Spanish (75.7) and only Spanish (75.5).",
"The worst performance was achieved by training only on English (69.2), and the performance also noticeably decreased for all training combinations that included English.",
"When testing on German, language choice had a weaker effect.",
"The highest score came from combining English and Spanish (73.4), but using only one of those languages gave comparable results (72.6 for Spanish, 73.0 for English).",
"For Spanish, the best results were achieved when training only on German (72.6).",
"Adding English to the training languages decreased the Spanish German French Monolingual SotA 77.0 75.5 N/A Cross-lingual SotA N/A N/A 76.0 Our cross-lingual 72.6 73.4 75.8 Table 5: Comparison between the monolingual and cross-lingual state of the art (SotA), and our cross-lingual system.",
"performance (70.8), which was even lower when training only on English (69.1).",
"It is noteworthy that adding English to the training languages noticeably decreases performance for both Spanish and French, but not for German.",
"One possible reason for Spanish and French not benefiting from English when German does is that both English and German are Germanic languages, whereas Spanish and French are Romance languages.",
"Another possible explanation for the decrease of performance caused by training with English is that there are inconsistencies in the way MWEs in the datasets were labelled across languages, which we explore in Sec. 5.",
"We finally compare our cross-lingual models against the state of the art: the best monolingual system for Spanish and German, and the best cross-lingual system for French, where no monolingual systems exist.",
"As Table 5 shows, our cross-lingual models come close to the best monolingual models for Spanish and especially for German.",
"This is remarkable given how simple our model and features are, and that the approaches we compare against train complex models for each language.",
"Furthermore, this points towards the possibility of extending CWI to more languages which lack training data.",
"Finally, Table 6 compares the coefficients for models trained on Romance and Germanic languages.",
"Notably, use of complex punctuation (such as the hyphenation in laser-activated or drug-related) and the number of tokens are inversely correlated w.r.t. the word or MWE being complex.",
"A higher number of words in the target was correlated with complexity for English and German, and inversely correlated for Spanish.",
"While examining our models' incorrect predictions, we observed inconsistencies in labelling in the datasets between target MWEs and their sub-words/sub-expressions (SWs).",
"The First CWI Shared Task (Paetzold and Specia, 2016) used the annotations of a group (i.e. ten annotators on the training data) to predict the annotation of an individual (i.e. one annotator on the test data).",
"The resulting inconsistencies in labelling may have contributed to the low F-scores of systems in the task (Zampieri et al., 2017).",
"Although the Second CWI Shared Task improved on the first by having multiple annotators for all splits of the data, it contains some labelling inconsistencies arising from annotators now being able to label phrases, and words within them, separately.",
"More concretely, we found that across all datasets, 72% of MWEs contain at least one SW with the opposite label (see Table 7).",
"While this makes sense in some cases, every SW in 25% of MWE instances has the opposite label.",
"For example, numerous falsifications and ballot stuffing is not annotated as complex, despite its SWs numerous, numerous falsifications, falsifications, ballot, ballot stuffing and stuffing all being complex.",
"Conversely, crise des marchés du crédit is complex, despite crise, marchés and crédit being labelled non-complex.",
"It is difficult to see how classifiers that extract features for MWEs from their individual SWs could predict the labels of both correctly.",
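The 72% and 25% figures above can be computed mechanically; a sketch follows, where the (target, label, subword_labels) triples are a hypothetical pre-extracted representation rather than the shared task's raw schema.

def mwe_label_inconsistency(instances):
    # instances: iterable of (target, label, subword_labels), where
    # subword_labels are the labels of the MWE's sub-words/sub-expressions.
    n_mwe = n_any_opposite = n_all_opposite = 0
    for target, label, sw_labels in instances:
        if len(target.split()) < 2 or not sw_labels:
            continue  # only MWEs with labelled SWs are of interest
        n_mwe += 1
        opposite = [sw != label for sw in sw_labels]
        n_any_opposite += any(opposite)
        n_all_opposite += all(opposite)
    n = max(n_mwe, 1)
    return n_any_opposite / n, n_all_opposite / n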
"Furthermore, every target MWE in the Spanish, German and French datasets is labelled complex.",
"This may bias a classifier trained on the Spanish or German datasets towards learning that MWEs and long individual words (if length is a feature) are complex.",
"In particular, this observation may help explain why adding English as a training language decreased the performance of our cross-lingual system when testing on French and Spanish (where all MWEs are complex).",
"An analysis in Bingel and Bjerva (2018) further found that their cross-lingual French model was effective at predicting long complex words/MWEs but had difficulty predicting long non-complex words.",
"It is also worth noting that considering a word or MWE as complex is subjective and may differ from person to person, even within the same target audience.",
"Bingel et al. (2018) investigated predicting complex words based on the gaze patterns of children with reading difficulties.",
"They found a high degree of specificity in misreadings between children, that is, which words they found complex when reading aloud.",
"This variety of complexity judgements even within one target group points towards the high degree of subjectivity in the task, which may also partly explain the inconsistencies in the dataset.",
"The monolingual and cross-lingual models presented achieve comparable results against more complex, language-specific state-of-the-art models, and thus can serve as strong baselines for future research in CWI.",
"In addition, our analysis of the dataset could help in the design of better guidelines when crowdsourcing annotations for the task.",
"Dataset creators may wish to only allow single words to be chosen as complex to avoid labelling inconsistencies.",
"If MWEs are permitted, we suggest instructing annotators to choose the smallest part of a phrase they find complex (French annotators for the Second CWI Shared Task sometimes grouped individual complex words into a complex MWE (Yimam et al., 2018)).",
"This work was initiated in a class project for the NLP module at the University of Sheffield.",
"The authors would like to acknowledge the contributions of Thomas Dakin, Sanjana Khot and Harry Wells who contributed their project code to this work.",
"Andreas Vlachos is supported by the EPSRC grant eNeMILP (EP/R021643/1)."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"This paper studies the bias problem of multihop question answering models, of answering correctly without correct reasoning.",
"One way to robustify these models is by supervising to not only answer right, but also with right reasoning chains.",
"An existing direction is to annotate reasoning chains to train models, requiring expensive additional annotations.",
"In contrast, we propose a new approach to learn evidentiality, deciding whether the answer prediction is supported by correct evidences, without such annotations.",
"Instead, we compare counterfactual changes in answer confidence with and without evidence sentences, to generate pseudo-evidentiality annotations.",
"We validate our proposed model on an original set and challenge set in HotpotQA, showing that our method is accurate and robust in multi-hop reasoning.",
"Multi-hop Question Answering (QA) is a task of answering complex questions by connecting information from several texts.",
"Since the information is spread over multiple facts, this task requires capturing multiple relevant facts (which we refer to as evidences) and inferring an answer based on all these evidences.",
"However, previous works (Min et al., 2019; Chen and Durrett, 2019; Trivedi et al., 2020) observe disconnected reasoning in some correct answers.",
"It happens when models can exploit specific types of artifacts ( e . g ., entity type), to leverage them as reasoning shortcuts to guess the correct answer.",
"For example, assume that a given question is: which country got independence when World War II ended? and a passage is: Korea got independence in 1945.",
"Although the information (that World War II ended in 1945) is insufficient, QA models predict Korea, simply because its answer type is country (i.e., using a shortcut).",
"[Figure 1: Overview of our proposed supervision using Answerability and Evidentiality, contrasting evidence-positive/negative and answer-positive/negative passages.]",
"To address the problem of reasoning shortcuts, we propose to supervise evidentiality deciding whether a model answer is supported by correct evidences (see Figure 1).",
"This is related to the problem that most of the early reader models for QA failed to predict whether questions are not answerable.",
"Lack of answerability training led models to provide a wrong answer with high confidence when they had to answer unanswerable questions.",
"Similarly, we aim to train models to recognize whether their answer is unsupported by evidences as well.",
"In our work, along with the answerability, we train the QA model to identify the existence of evidences by using passages of two types: (1) Evidence-positive and (2) Evidence-negative set.",
"While the former has both answer and evidence, the latter does not have evidence supporting the answer, such that we can detect models taking shortcuts.",
"Our first research question is: how do we acquire evidence-positive and negative examples for training without annotations?",
"For evidence-positive set, the closest existing approach (Niu et al., 2020) is to consider attention scores, which can be considered as pseudo-annotation for evidence-positive set.",
"In other words, a sentence S with high attention scores, often used as an interpretation of whether S is causal for the model prediction, can be selected to build the evidence-positive set.",
"However, follow-up works (Serrano and Smith, 2019; Jain and Wallace, 2019) argued that attention is limited as an explanation, because causality cannot be measured, without observing model behaviors in a counterfactual case of the same passage without S .",
"In addition, sentence causality should be aggregated to measure group causality of multiple evidences for multi-hop reasoning.",
"To annotate group causality as pseudo-evidentiality, we propose the Interpreter module, which removes and aggregates evidences into a group, to compare predictions in observational and counterfactual cases.",
"As a second research question, we ask how to learn from evidence-positive and evidence-negative set.",
"To this end, we identify two objectives: (O1) QA model should not be overconfident in evidence-negative set, while (O2) confident in evidence-positive.",
"A naive approach to pursue the former is to lower the model confidence on evidence-negative set via regularization.",
"However, such regularization can cause violating (O2) due to correlation between confidence distributions for evidence-positive and negative set.",
"Our solution is to selectively regularize, by purposely training a biased model violating (O1), and decorrelating the target model from the biased model.",
"For experiments, we demonstrate the impact of our approach on HotpotQA dataset.",
"Our empirical results show that our model can improve QA performance through pseudo-evidentiality, outperforming other baselines.",
"In addition, our proposed approach can orthogonally combine with another SOTA model for additional performance gains.",
"Since multi-hop reasoning tasks such as HotpotQA were released, many approaches for the task have been proposed.",
"These approaches can be categorized by strategies used, such as graph-based networks (Qiu et al., 2019; Fang et al., 2020), external knowledge retrieval (Asai et al., 2019), and supporting fact selection (Nie et al., 2019; Groeneveld et al., 2020).",
"Our focus is to identify and alleviate reasoning shortcuts in multi-hop QA, without evidence annotations.",
"Models taking shortcuts were widely observed from various tasks, such as object detection (Singh et al., 2020), NLI (Tu et al., 2020), and also for our target task of multi-hop QA (Min et al., 2019; Chen and Durrett, 2019; Trivedi et al., 2020), where models learn simple heuristic rules, answering correctly but without proper reasoning.",
"To mitigate the effect of shortcuts, adversarial examples (Jiang and Bansal, 2019) can be generated, or alternatively, models can be robustified (Trivedi et al., 2020) with additional supervision for paragraph-level sufficiency, identifying whether a pair of two paragraphs is sufficient for right reasoning or not, which reduces shortcuts on a single paragraph.",
"While the binary classification for paragraph-sufficiency is relatively easy (96.7 F1 in Trivedi et al. (2020)), our target of capturing a finer-grained sentence-evidentiality is more challenging.",
"Existing QA models (Nie et al., 2019; Groeneveld et al., 2020) treat this as a supervised task, based on sentence-level human annotation.",
"In contrast, ours requires no annotation and focuses on avoiding reasoning shortcuts using evidentiality, which was not the purpose of evidence selection in the existing model.",
"In this section, to prevent reasoning shortcuts, we introduce a new approach for data acquiring and learning.",
"We describe this task (Section 3.1) and address two research questions, of generating labels for supervision (Section 3.2) and learning (Section 3.3), respectively.",
"Our task definition follows the distractor setting (as opposed to the full-wiki setting) of the HotpotQA dataset (Yang et al., 2018), which consists of 112k questions requiring the understanding of corresponding passages to answer correctly.",
"Each question has a candidate set of 10 paragraphs (of which two are positive paragraphs $P^+$ and eight are negative $P^-$), where the supporting facts for reasoning are scattered in the two positive paragraphs.",
"Then, given a question Q , the objective of this task is to aggregate relevant facts from the candidate set and estimate a consecutive answer span A .",
"For task evaluation, the estimated answer span is compared with the ground truth answer span in terms of F1 score at word-level.",
"For answerability training in single-hop QA, datasets such as SQuAD 2.0 (Rajpurkar et al., 2018) provide labels of answerability, so that models can be trained not to be overconfident on unanswerable text.",
"Similarly, we build triples of question Q , answer A , and passage D , to be labeled for answerability.",
"HotpotQA dataset pairs Q with 10 paragraphs, where evidences can be scattered to two paragraphs.",
"Based on such characteristic, concatenating two positive paragraphs is guaranteed to be answerable/evidential and concatenating two negative paragraphs (with neither evidence nor answer) is guaranteed to be unanswerable.",
"We define a set of answerable triplets (Q, A, D) as the answer-positive set $A^+$, and an unanswerable set as the answer-negative set $A^-$.",
"From the labels, we train a transformer-based model to classify the answerability (the detail will be discussed in the next section).",
"However, answerability cannot supervise whether the given passage has all of these relevant evidences for reasoning.",
"This causes a lack of generalization ability, especially on examples with an answer but no evidence.",
"While learning the answerability, we aim to capture the existence of reasoning chains in the given passage.",
"To supervise the existence of evidences, we construct examples: evidence-positive and evidence-negative set, as shown in Figure 1.",
"Specifically, let E be the ground truth of evidences to infer A , and S be a sentence containing an answer A , corresponding to Q .",
"Given Q and A, the expected evidentiality labels $V_E$, indicating whether the evidences for answering are sufficient in the passage, are as follows: $V_E(Q, A, D) = True$ if $E ⊆ D$ and $A ∈ D$, and $V_E(Q, A, D) = False$ if $E ⊈ D$ and $A ∈ D$ (Eq. 1). We define the set of passages satisfying $V_E = True$ as the evidence-positive set $E^+$, and the set satisfying $V_E = False$ as the evidence-negative set $E^-$.",
"Since we do not use human-annotations, we aim to generate pseudo-evidentiality annotation.",
"First, for the evidence-negative set, we modify the answer sentence S and unanswerable passages, and generate examples of the three following types: 1) Answer Sentence Only: we remove all sentences in the answerable passage except S, such that the input passage D becomes S, which contains a correct answer but no other evidences.",
"That is, $V_E(Q, A, S) = False$.",
"2) Answer Sentence + Irrelevant Facts: we use irrelevant facts with the answer as context, by concatenating S and an unanswerable passage $D^-$.",
"That is, $V_E(Q, A, (S; D^-)) = False$, where $D^- ∈ P^-$.",
"3) Partial Evidence + Irrelevant Facts: we use partially-relevant and irrelevant facts as context, by concatenating $D_1 ∈ P^+$ and $D_2 ∈ P^-$.",
"That is, $V_E(Q, A, (D_1; D_2)) = False$.",
"These evidence-negative examples do not have all relevant evidences, thus if a model predicts the correct answer on such examples, it means that the model learned reasoning shortcuts.",
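A simplified sketch of generating these three evidence-negative types is given below; passages are assumed to be lists of sentence strings, and the random sampling of paragraphs is our own simplification.

import random

def evidence_negative_variants(answer_sentence, positive_paragraphs, negative_paragraphs):
    # answer_sentence: the sentence S containing the answer;
    # positive_paragraphs: paragraphs from P+; negative_paragraphs: from P-.
    d_neg = random.choice(negative_paragraphs)
    d_pos = random.choice(positive_paragraphs)
    return [
        [answer_sentence],          # 1) answer sentence only
        [answer_sentence] + d_neg,  # 2) answer sentence + irrelevant facts
        d_pos + d_neg,              # 3) partial evidence + irrelevant facts
    ]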
"Second, building an evidence-positive set is more challenging, because it is difficult to capture multiple relevant facts, with neither annotations E nor supervision.",
"Our distinction is obtaining the above annotation from the model itself, by interpreting the internal mechanism of the model.",
"On a trained model, we aim to find influential sentences in predicting correct answer A , among sentences in an answerable passage.",
"Then, we consider them as a pseudo evidence-positive set.",
"Since such pseudo labels rely on the trained model, which is not perfect, 100% recall of $V_E(Q, A, D) = True$ in Eq. (1) is not guaranteed, though we observe 87% empirical recall (Table 1).",
"Section 1 discusses how interpretation, such as attention scores (Niu et al., 2020), can be pseudo-evidentiality.",
"For QA tasks, an existing approach (Perez et al., 2019) uses answer confidence for finding pseudo-evidences, as we discuss below: (A) Accumulative interpreter: to consider multiple sentences as evidences, the existing approach (Perez et al., 2019) iteratively inserts the sentence $S_i$ with the highest probability gain into the set at the t-th iteration, as follows: $P_{S_i} = P(A|Q, S_i ∪ E_{t-1}) - P(A|Q, E_{t-1})$, $Ê_t = argmax_{S_i} P_{S_i}$, $E_t = Ê_t ∪ E_{t-1}$ (Eq. 2), where $E_0$ starts with the sentence S containing the answer A, which is the minimal context for our task.",
"This method can consider multiple sentences as evidence by inserting iteratively into a set, but cannot consider the effect of erasing sentences from reasoning chain.",
"(B) Our proposed Interpreter : to enhance the interpretability, we consider both erasing and inserting each sentence, in contrast to accumulative interpreter considering only the latter.",
"Intuitively, erasing evidence would change the prediction significantly if such evidence is causally salient, which we compute as follows: $P_{S_i} = P(A|Q, D) - P(A|Q, D ∖ S_i)$ (Eq. 3), where $D ∖ S_i$ is the passage with sentence $S_i$ removed.",
"We hypothesize that breaking the reasoning chain, by erasing $S_i$, should significantly decrease $P(A|·)$.",
"In other words, $S_i$ with higher $P_{S_i}$ is more salient.",
"Combining the two saliency scores in Eqs. (2) and (3), our final saliency is as follows: $P_{S_i} = P(A|Q, S_i ∪ E_{t-1}) - P(A|Q, D ∖ (S_i ∪ E_{t-1}))$ (Eq. 4), where the constant terms $P(A|Q, E_{t-1})$ and $P(A|Q, D)$ are struck out, since constant values can be omitted in the argmax.",
"At each iteration, the sentence that maximizes $P_{S_i}$ is selected, as done in Eq. (2).",
"This promotes selection that increases the confidence $P(A|·)$ on important sentences, and decreases confidence on unimportant sentences.",
"We stop the iterations when $P_{S_i} < 0$ or $t = T$; the final sentences in $E_{t=T}$ form the pseudo evidence-positive set $E^+$.",
"To reduce the search space, we empirically set $T = 5$.¹",
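A minimal sketch of the Interpreter's greedy loop under Eq. (4) follows; model.prob is an assumed helper returning P(A|Q, sentences) from the trained QA model, not part of any released code.

def interpreter(model, question, answer, passage_sents, answer_sent, T=5):
    E = [answer_sent]  # E_0 starts with the answer sentence
    for _ in range(T - 1):
        best_gain, best_sent = 0.0, None
        for s in passage_sents:
            if s in E:
                continue
            rest = [x for x in passage_sents if x not in E + [s]]
            # insertion gain minus the counterfactual with (S_i and E) erased;
            # the constant terms of Eq. (4) are dropped, as in the argmax.
            gain = model.prob(answer, question, E + [s]) - model.prob(answer, question, rest)
            if gain > best_gain:
                best_gain, best_sent = gain, s
        if best_sent is None:  # stop once no sentence has positive saliency
            break
        E.append(best_sent)
    return E  # pseudo evidence-positive set E+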
"Briefly, we obtain the labels of answerability and evidentiality as follows. Answer-positive set $A^+$ and negative set $A^-$: the former has both answer and evidences, and the latter has neither.",
"Evidence-positive set $E^+$ and negative set $E^-$: the former is expected to have all the evidences, and the latter has an answer with no evidence.",
"¹ Based on the observation that 99% of questions in HotpotQA require fewer than 6 evidence sentences for reasoning.",
"In this section, our goal is to learn the above labels of answerability and evidentiality.",
"As optimizing the QA model is not our focus, we adopt the existing model of Min et al. (2019).",
"As the architecture of the QA model, we use a powerful transformer-based model, RoBERTa (Liu et al., 2019), where the input is [CLS] question [SEP] passage [EOS].",
"The output of the model is as follows: $h = RoBERTa(Input) ∈ R^{n×d}$, $O^s = f_1(h)$, $O^e = f_2(h)$, $P^s = softmax(O^s)$, $P^e = softmax(O^e)$ (Eq. 5), where $f_1$ and $f_2$ are fully connected layers with trainable parameters in $R^d$, $P^s$ and $P^e$ are the probabilities of start and end positions, $d$ is the output dimension of the encoder, and $n$ is the size of the input sequence.",
"For answerability, they build a classifier on the hidden state $h_{[0,:]}$ of the [CLS] token, which represents both Q and D.",
"As the HotpotQA dataset covers both yes-or-no and span-extraction questions, we follow the convention of Asai et al. (2019) and support both via a multi-class classification problem predicting four probabilities: $P^{cls} = softmax(W_1 h_{[0,:]}) = [p_{span}, p_{yes}, p_{no}, p_{none}]$ (Eq. 6), where $p_{span}$, $p_{yes}$, $p_{no}$, and $p_{none}$ denote the probabilities of the answer type being span, yes, no, and no answer, respectively, and $W_1 ∈ R^{4×d}$ is a trainable parameter.",
"For training the answer span and its class, the loss function of example $i$ is the sum of cross-entropy losses ($D_{CE}$), as follows: $D_{CE}(P_i, A_i) = -(log(P^s_{s_i}) + log(P^e_{e_i}))$, $D_{CE}(P^{cls}_i, C_i) = -log(P^{cls}_{c_i})$, $L_A(i) = D_{CE}(P_i, A_i) + D_{CE}(P^{cls}_i, C_i)$ (Eq. 7), where $s_i$ and $e_i$ are the starting and ending positions of answer A, respectively, and $c_i$ is the index of the actual class $C_i$ in example $i$.",
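Eq. (7) is a standard sum of negative log-likelihoods; a PyTorch-style sketch (with our own variable names) is:

import torch

def answer_loss(p_start, p_end, p_cls, s_i, e_i, c_i):
    # p_start, p_end: (n,) start/end position probabilities (Eq. 5);
    # p_cls: (4,) answer-type probabilities (Eq. 6).
    span_loss = -(torch.log(p_start[s_i]) + torch.log(p_end[e_i]))
    cls_loss = -torch.log(p_cls[c_i])
    return span_loss + cls_loss  # L_A(i)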
"As overviewed in Section 1, the Base model is reported to take a shortcut, or a direct path between answer A and question Q, neglecting implicit intermediate paths (evidences).",
"Specifically, we present the two objectives for unbiased models: (O1): The QA model should not be overconfident on passages with no evidences (i.e., on $E^-$).",
"(O2): The QA model should be confident on passages with both answer and evidences (i.e., on $E^+$). For (O1), as a naive approach, one may consider a regularization term to avoid overconfidence on the evidence-negative set $E^-$.",
"An overconfident answer distribution would diverge from the uniform distribution, such that the Kullback-Leibler (KL) divergence $KL(p || q)$, where $p$ and $q$ are the answer probabilities and the uniform distribution, respectively, is high when overconfident: $R' = ∑_{i ∈ E^-} D_{KL}(P(A_i|Q_i, D_i) || P_{uniform})$ (Eq. 8), where $P_{uniform}$ indicates the uniform distribution.",
"This regularization term $R'$ forces the answer probabilities on $E^-$ to be closer to the uniform one.",
"However, one reported risk (Utama et al., 2020; Grand and Belinkov, 2019) is that suppressing data with biases has a side-effect of lowering confidence on unbiased data (especially on in-distribution).",
"Similarly, in our case, regularizing to keep the confidence low for $E^-$ can cause lowering it for $E^+$, due to their correlation.",
"In other words, pursuing (O1) violates (O2), which we observe later in Figure 3.",
"Our next goal is thus to decorrelate the two distributions on $E^+$ and $E^-$ to satisfy both (O1) and (O2).",
"Figure 2(b) shows how we feed the hidden states $h$ into two predictors.",
"Predictor $f$ is for learning the target distribution, and predictor $g$ is purposely trained to be overconfident on the evidence-negative set $E^-$, where this biased answer distribution is denoted as $\bar{P}$.",
"We regularize the target distribution $P$ to diverge from the biased distribution $\bar{P}$.",
"Formally, the biased answer distributions $\bar{P}$ ($\bar{P}^s$ and $\bar{P}^e$) are as follows: $\bar{O}^s = g_1(h)$, $\bar{O}^e = g_2(h)$, $\bar{P}^s = softmax(\bar{O}^s)$, $\bar{P}^e = softmax(\bar{O}^e)$ (Eq. 9), where $g_1$ and $g_2$ are fully connected layers with trainable parameters in $R^d$.",
"Then, we optimize $\bar{P}$ to predict answer A on the evidence-negative set $E^-$, which makes layer $g$ biased (taking shortcuts), and regularize $f$ by maximizing the KL divergence between $P$ and the fixed $\bar{P}$.",
"The regularization term for example $i ∈ E^-$ is as follows: $R(i) = D_{CE}(\bar{P}_i, A_i) - λ D_{KL}(P_i || \bar{P}_i)$ (Eq. 10), where $λ$ is a hyper-parameter.",
"This loss $R$ is optimized only on the evidence-negative set $E^-$.",
"Lastly, to pursue (O2), we train on $E^+$, as done on $A^+$.",
"However, in initial steps of training, our Interpreter is not reliable, since the QA model is not trained enough yet.",
"We thus train without $E^+$ for the first K epochs, then extract $E^+$ at the K-th epoch and continue to train on all sets, as shown in Figure 2(a).",
"In the final loss function, we apply different losses to the sets E and A: $L_{total} = ∑_{i ∈ A^+ ∪ A^-} L_A(i) + ∑_{i ∈ E^-} R(i) + ∑_{i ∈ E^+} u(t - K) L_A(i)$ (Eq. 11), where the function $u$ is a delayed step function (1 when epoch $t$ is greater than K, 0 otherwise).",
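Putting Eqs. (10) and (11) together, a training step could look roughly like the sketch below; xent, kl, f_prob, and g_prob are assumed helpers for the cross-entropy loss, the KL divergence, and the answer distributions of predictors f and g (this is our reading of the objective, not the authors' code).

def total_loss(batch, lam=0.01, epoch=0, K=3):
    loss = 0.0
    for ex in batch:
        if ex.set_name in ("A+", "A-"):
            loss += xent(f_prob(ex), ex.answer)  # L_A on answerability sets
        elif ex.set_name == "E-":
            # Eq. (10): train the biased head g, and push f away from it.
            loss += xent(g_prob(ex), ex.answer) - lam * kl(f_prob(ex), g_prob(ex).detach())
        elif ex.set_name == "E+" and epoch >= K:  # delayed step u(t - K)
            loss += xent(f_prob(ex), ex.answer)
    return loss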
"Our multi-hop QA task requires finding answerable passages, with both answer and evidence, from the candidate passages.",
"While we can access the ground-truth of answerability in training set, we need to identify the answerability of ( Q , D ) at inference time.",
"For this, we consider two directions: (1) Paragraph Pair Selection, which is specific to HotpotQA, and (2) Supervised Evidence Selector trained on pseudo-labels.",
"For (1), we consider the data characteristic mentioned in Section 3.1: we know one pair of paragraphs is answerable/evidential (when both paragraphs are positive, i.e., from $P^+$).",
"Thus, the goal is to identify the answerable pair of paragraphs, from all possible pairs $P_{ij} = {(p_i, p_j) : p_i ∈ P, p_j ∈ P}$ (denoted as paired-paragraphs).",
"We can let the model select the pair with the highest estimated answerability, $1 - p_{none}$ in Eq. (6), and predict answers on the paired passage, which is likely to be evidential.",
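A sketch of this pair-selection heuristic, assuming a model.cls_probs helper that returns the four probabilities of Eq. (6) for a question-passage pair:

from itertools import combinations

def select_paragraph_pair(model, question, paragraphs):
    def answerability(pair):
        p_span, p_yes, p_no, p_none = model.cls_probs(question, pair[0] + pair[1])
        return 1.0 - p_none  # estimated answerability
    return max(combinations(paragraphs, 2), key=answerability)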
"For (2), some pipelined approaches (Nie et al., 2019; Groeneveld et al., 2020) design an evidence selector, extracting top k sentences from all candidate paragraphs.",
"While they supervise the model using the ground truth of evidences, we assume there is no such annotation, and thus train on the pseudo-labels $E^+$.",
"We denote this setting as selected-evidences .",
"For the evidence selector, we follow an extraction method in (Beltagy et al., 2020), where a special token [S] is added at the ending position of each sentence, and $h_{[S_i]}$ from BERT indicates the i-th sentence embedding.",
"Then, a binary classifier $f_{evi}(h_{[S_i]})$ is trained on the pseudo-labels, where $f_{evi}$ is a fully connected layer.",
"During training, the classifier identifies whether each sentence is evidence-positive (1) or negative (0).",
"At inference time, we first select the top 5 sentences² from the paragraph candidates, and then insert the selected evidences into the QA model for testing.",
"² Table 1 shows the precision and recall of the top-5 sentences.",
"While we discuss how to get the answerable passage above, we can use the passage setting for evaluation.",
"To show the robustness of our model, we construct a challenge test set by excluding easy examples (i.e., examples on which it is easy to take shortcuts).",
"To detect such easy examples, we build a set of single paragraphs $P_i$, none of which is evidential in HotpotQA, as the dataset avoids having all evidences in a single paragraph, to discourage single-hop reasoning.",
"If the QA model predicts the correct answer on the (unevidential) single paragraph, we remove such examples from HotpotQA, and define the remaining set as the challenge set.",
"In this section, we formulate our research questions to guide our experiments and describe evaluation results corresponding to each question.",
"To validate the effectiveness of our method, we address the following research questions:",
"RQ1 : How effective is our proposed method for a multi-hop QA task?",
"RQ2 : Does our Interpreter effectively extract pseudo-evidentiality annotations for training?",
"RQ3 : Does our method avoid reasoning shortcuts in unseen data?",
"Implementation: Our implementation settings for the QA model follow RoBERTa (Base version with 12 layers) (Liu et al., 2019).",
"We use the Adam optimizer with a learning rate of 0.00005 and a batch-size of 8 on RTX titan.",
"We extract the evidence-positive set after 3 epochs ($K = 3$ in Eq. (11)) and retrain for 3 epochs.",
"As a hyper-parameter, we search $λ$ among {1, 0.1, 0.01}, and found the best value ($λ = 0.01$) based on a 5% hold-out set sampled from the training set.",
"Metrics: We report the standard F1 score for HotpotQA, to evaluate the overall QA accuracy in finding the correct answers.",
"For evidence selection, we also report F1 score, Precision, and Recall to evaluate the sentence-level evidence retrieval accuracy.",
"Original Set: We evaluate our proposed approach on the multi-hop reasoning dataset HotpotQA³ (Yang et al., 2018).",
"HotpotQA contains 112K examples of multi-hop questions and answers.",
"For evaluation, we use the HotpotQA dev set (distractor setting) with 7405 examples.",
"Challenge Set: To validate the robustness, we construct a challenge set where the QA model on single-paragraphs gets zero F1, while such a model achieves 67 F1 on the original set.",
"That is, we exclude instances with F1 > 0, where the QA model predicts an answer without right reasoning.",
"The exclusion makes sure the baseline obtains zero F1 on the challenge set.",
"The number of surviving examples in our challenge set is 1653 (21.5% of dev set).",
"Baselines, Our Models, and Competitors: As a baseline, we follow the previous QA model (Min et al., 2019) trained on single-paragraphs.",
"We test our model on the single-paragraphs, paired-paragraphs, and selected-evidences settings discussed in Section 3.4.",
"As a strong competitor among released models for HotpotQA, we implement a state-of-the-art model (Asai et al., 2019)⁴, using external knowledge and a graph-based retriever.",
"Main Results This section includes the results of our model for multi-hop reasoning.",
"As shown in Table 2, our full model outperforms baselines on both original and challenge set.",
"We can further observe that: i) when tested on single-paragraphs, where models are forced to take shortcuts, our model (O-I) is worse than the baseline (B-I), which indicates that B-I learned the shortcuts.",
"In contrast, O-II outperforms B-II on paired-paragraphs where at least one passage candidate has all the evidences.",
"ii) When tested on evidences selected by our method (O-III), we can improve F1 scores on both original set and challenge set.",
"This noise filtering effect of evidence selection, by eliminating irrelevant sentences, was consistently observed in a supervised setting (Nie et al., 2019; Groeneveld et al., 2020; Beltagy et al., 2020), which we could reproduce without annotation.",
"iii) Combining our method with SOTA (C-I) (Asai et al., 2019) leads to accuracy gains in both sets.",
"C-I has the distinction of using external knowledge of reasoning paths to outperform models without such advantages, but our method can contribute complementary gains.",
"Ablation Study: As shown in Table 3, we conduct an ablation study of O-III from Table 2.",
"In (A), we remove $E^+$ from the Interpreter at training time.",
"For the QA model without $E^+$, the performance decreased significantly, suggesting the importance of the evidence-positive set.",
"In (B), we remove the evidentiality labels of both $E^+$ and $E^-$, and observed that the performance drop is larger compared to other variants.",
"Through (A) and (B), we show that training our evidentiality labels can increase QA performance.",
"In (C), we replace $R$ with $R'$, removing layer $g$ that trains biased features.",
"With the replaced regularization, the performance also decreased, suggesting that training with $R$ is effective for the multi-hop QA task.",
"In this section, we evaluate the effectiveness of our Interpreter , which generates evidences on training set, without supervision.",
"We compare the pseudo evidences with human annotations at the sentence level.",
"For evaluation, we measure sentence-level F1 score, Precision and Recall, following the evidence selection evaluation in (Yang et al., 2018).",
"As a baseline, we implement the retrieval-based model AIR (Yadav et al., 2020), which is an unsupervised method like ours.",
"As shown in Table 4, our Interpreter on our QA model outperforms the retrieval-based method, in terms of F1 and Recall, while the baseline (AIR) achieves the highest precision (63.06%).",
"We argue that recall, aiming at identifying all evidences, is more critical for multi-hop reasoning, given our goal of avoiding disconnected reasoning, as long as precision remains higher than the precision of the answerable set $A^+$ (36.94%) in Table 1.",
"As variants of our method, we test our Interpreter on various models.",
"First, when comparing (a) and (c), our full model (c) outperforms the baseline (a) over all metrics.",
"The baseline (a), trained on single-paragraphs, got biased; thus the evidences generated by the biased model are less accurate.",
"Second, the variant (b), trained with $R'$, outperforms (c), our full model.",
"In Eq. (8), the loss term $R'$ does not train layer $g$ for biased features, unlike $R$ in Eq. (10).",
"This shows that learning $g$ results in performance degradation for evidence selection, despite the performance gain in QA.",
"In this section, to show that our model avoids reasoning shortcuts for unseen data, we analyze the confidence distribution of models on the evidence-positive and negative set.",
"In the dev set, we treat the ground truth of evidences as $E^+$, and a single sentence containing the answer as $E^-$ (each has 7K Q-D pairs).",
"On these sets, Figure 3 shows the confidence $P(A|Q, D)$ of the three models (a), (b), and (c) mentioned in Section 4.2.",
"We sort the confidence scores in ascending order, where y-axis indicates the confidence and x-axis refers to the sorted index.",
"Thus, the colored area indicates the dominance of confidence distribution.",
"Ideally, for a debiased model, the area on evidence-positive set should be large, while that on evidence-negative should be small.",
"Desirably, in Figure 3(a), the area under the curve for $E^-$ should decrease for pursuing (O1), moving along the blue arrow, while that of $E^+$ should increase for (O2), as the red arrow shows.",
"In Figure 3(b), our model with $R'$ follows the blue arrow, with a smaller area under the curve for $E^-$, while keeping that of $E^+$ comparable to Figure 3(a).",
"For comparison, Figure 3(d) shows all curves on $E^+$.",
"In Figure 3(c), our full model follows both directions of the blue and red arrows, which indicates that ours satisfies both (O1) and (O2).",
"In this paper, we propose a new approach to train multi-hop QA models not to take reasoning shortcuts, i.e., guessing right answers without sufficient evidences.",
"We do not require annotations and instead generate pseudo-evidentiality labels, by regularizing the QA model against being overconfident when evidences are insufficient.",
"Our experimental results show that our method outperforms baselines on HotpotQA and has the effectiveness to distinguish between evidence-positive and negative set.",
"This research was supported by IITP grant funded by the Korea government (MSIT) (No.2017-0-01779, XAI) and ITRC support program funded by the Korea government (MSIT) (IITP-2021-2020-0-01789)."
] | [
"method",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"other"
] |
[
"Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context.",
"To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries.",
"We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types from the source document.",
"This model is then used to select the best candidate as the final output summary.",
"Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations.",
"We analyze the typical hallucination phenomenon by different types of neural summarization systems, in hope to provide insights for future work on the direction.",
"Abstractive Summarization is the task of producing a concise and fluent summary that is salient and faithful to the source document(s).",
"Data-driven, neural methods (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017), and the more recent, pretrained transformer language models (Vaswani et al., 2017; Devlin et al., 2019; Liu and Lapata, 2019), have shown improvements in the fluency and salience of generated summaries.",
"However, less progress has been made on improving the faithfulness of the generated summaries, that is, producing a summary that is entailed by the information presented in the source document.",
"(Footnote: Most of the work was done while the authors were at Google.)",
"[Table 1 example] Source: He was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011, with effect from 1 January 2012.",
"Mr. Ban describes his priorities as mobilising world leaders to deal with climate change, economic upheaval, pandemics and increasing pressures involving food, energy and water...",
"Unfaithful Summary : The United Nations SecretaryGeneral Ban Ki-moon was elected for a second term in 2007 .",
"Our Summary : The United Nations Secretary-General Ban Ki-moon was elected for a second term in 21 June 2011 .",
"Despite the increased level of performance under automatic metrics such as ROUGE (Lin, 2004) or BERTSCORE (Zhang et al., 2020), current state-of-the-art models (Liu and Lapata, 2019; Lewis et al., 2020) produce summaries that suffer from intrinsic and extrinsic hallucinations, i.e., the fabrication of untruthful text spans containing information either present or absent from the source (Maynez et al., 2020).",
"Table 1 shows an example of such summary, generated by BART (Lewis et al., 2020), an auto-regressive, transformer-based sequence-to-sequence model.",
"The article describes an event where the former UN-Secretary-General Ban Ki-Moon was re-elected for a second term.",
"The model hallucinates \"2007\", which never appears in the source document, leading to inconsistency with the correct date of the event presented.",
"In this work, we focus on the problem of correcting such hallucinations as a post-processing step¹.",
"A post-processing correction step allows us to rely on the fluency of the text generated by SOTA systems, which gain from huge pretrained models and large fine-tuning datasets, and correct it using small amounts of automatically generated training data.",
"¹ Our code and data is available at http://cogcomp.",
"Under the setting where a large fraction of ground truth summarization data is hallucinated, as we show in Table 2, we study the method of contrast candidate generation and selection .",
"In the generation step, we replace named entities in a potentially hallucinated summary with ones with compatible semantic types that are present in the source, and create variants of candidate summaries.",
"In the selection step, we rank the generated candidates with a discriminative model trained to distinguish between faithful summaries and synthetic negative candidates generated given the source.",
"We experiment on a range of RNNand transformer-based abstractive summarization models.",
"Our preliminary results on the XSum corpus (Narayan et al., 2018a), which contains substantial presence of hallucinated ground truth examples, show the effectiveness of our method in correcting unfaithful summaries with extrinsic hallucinations .",
"Our main contributions are as follows.",
"First, our work is the first to study the effectiveness of contrast candidate generation and selection as a model-agnostic method for correcting hallucinations, under the setting where a large fraction of ground truth summarization data suffers from hallucinations.",
"Second, we validate our method on various neural summarization systems trained on XSum, and provide detailed analysis on the typical types of hallucinations from each system.",
"Our proposed method is built on the observation that a large fraction of extrinsic hallucinations happen on named entities and quantities.",
"Table 2 shows the human analysis by Maynez et al. (2020) on the hallucinations of 500 randomly sampled gold summaries from the XSum corpus .",
"We break down each category and annotate the proportion of hallucinations that happen on entity and number/quantity spans.",
"As Maynez et al. (2020) further show that the hallucinations in training data translate to similar issues for the generated outputs across different summarization models, we want to study a model-agnostic, post-processing method that can correct such entity and quantity hallucinations.",
"We frame the problem as a correction task and make it conceptually a less complex problem than summarization.",
"Modeling correction as a standalone task would require less training data, which becomes crucial when a large proportion of ground-truth summarization data suffers from hallucinations, and would inherit the fluency of data-intensive SOTA models.",
"From a model-generated summary, we first identify any potentially hallucinated entities or quantities by checking whether entities with similar surface forms have appeared in the source document.",
"We use a neural Named Entity Recognition (NER) system from the Stanza NLP toolkit (Qi et al., 2020) trained on the OntoNotes corpus (Weischedel et al., 2013) to extract named entities of different semantic types from the source document and summary.",
"Each named entity present in the summary is replaced with a different entity present in the document with the same NER label.",
"This gives us different variants of the original summary with the same level of fluency , but not necessarily faithful.",
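A minimal sketch of this generation step, assuming entities are given as (text, type) pairs from an off-the-shelf NER tagger:

def contrast_candidates(summary, summary_ents, doc_ents):
    # summary_ents / doc_ents: lists of (surface_text, ner_type) pairs.
    candidates = []
    for text, etype in summary_ents:
        for d_text, d_type in doc_ents:
            if d_type == etype and d_text != text:
                candidates.append(summary.replace(text, d_text, 1))
    return candidates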
"For the candidate selection step, we want to identify the best candidate among the variants generated in the previous step as the final output summary.",
"As the contrast candidates vary in no more than a few tokens from the original summary, it requires a model with more delicate local decision boundaries (Gardner et al., 2020) to select the correct candidate.",
"For example, we observe that MNLI models (Williams et al., 2018) fail to produce satisfactory results.",
"To create training data for that purpose, we sample examples from the XSum training set where all entities in the ground truth summary appear in the source document.",
"We then follow the same procedure in the generation step, and produce unfaithful variants from the ground truth summary by replacing entities with others that have the same semantic type but different surface form in the source text.",
"With the ground truth and synthetic negative summaries, we train a text classifier with a discriminative objective to score and rank the variants of the summaries.",
"We use BART (Lewis et al., 2020) plus a linear layer as our classification model.",
"We adopt a similar learning objective to contrastive learning (Khosla et al., 2020).",
"For each pair of positive and negative summary candidate, we use cross entropy loss LXE to handle the correctness of the label predictions.",
"We add a margin ranking loss term LRANK to encourage the model to assign higher probability to the positive than the negative candidate.",
"The margin $ε$ is a tunable hyperparameter in training: $L = L_{XE}(y^+, 1) + L_{XE}(y^-, 0) + L_{RANK}(y^+, y^-)$, where $L_{RANK}(y^+, y^-) = max(0, y^- - y^+ + ε)$.",
"During test time, we use the trained model to score the generated contrast candidate summaries, as well as the original version generated by the summarization model.",
"We take the candidate with the highest score as the final summary.",
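The selection objective can be sketched in PyTorch as follows; scoring candidates with a sigmoid and the margin value 0.1 are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def selection_loss(logit_pos, logit_neg, margin=0.1):
    # logit_pos / logit_neg: classifier scores for a faithful candidate
    # and a synthetic negative candidate, respectively.
    y_pos, y_neg = torch.sigmoid(logit_pos), torch.sigmoid(logit_neg)
    l_xe = (F.binary_cross_entropy(y_pos, torch.ones_like(y_pos))
            + F.binary_cross_entropy(y_neg, torch.zeros_like(y_neg)))
    l_rank = torch.clamp(y_neg - y_pos + margin, min=0.0)  # margin ranking
    return l_xe + l_rank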
"Our experiments focus on the aforementioned XSum corpus, where the target summary is highly abstractive and likely hallucinated .",
"We first consider the summaries generated by a BART model trained on the XSum corpus.",
"By applying our method, we are able to change 13.3% of all model-generated summaries.",
"For 38.4% of all summaries, the original summary does not have a hallucinated entity, or there is no entity with a compatible type in the source text.",
"Our model decides to keep the original summary in the remaining 48.3%.",
"We first verify that our method does not hurt the fluency and salience of the generated summaries, for which we assume ROUGE (Lin, 2004) and BERTSCORE (Zhang et al., 2020) are suitable metrics.",
"We report the results in Table 3.",
"We observe that though both the baseline and our method do well on both ROUGE and BERTSCORE, our method trails slightly behind in both metrics.",
"This is due to the existence of extrinsic hallucinations in the ground-truth summaries: the baseline model manages to generate a part of the hallucinations and gets incorrectly rewarded for them.",
"To test whether our correction method improves the faithfulness of the summaries, we evaluate the summaries with FEQA (Durmus et al., 2020), a QA-based metric for summary faithfulness.",
"Given a summary, FEQA automatically generates questions on noun phrase and named entity spans in the summary, and uses a pretrained QA model to verify if the answer derived from the source document exact-matches the span in the summary.",
"We run FEQA and compute the macro-averaged percentage of questions answered correctly for each of the 1510 summaries that our system made corrections to, and report the results in Table 3.",
"The results suggest that the corrected summaries present statistically significant improvements over the original ones ($p < 0.001$, with a two-tailed, paired t-test).",
"Table 4 shows the human evaluation results on a randomly sampled subset of 95 changed summaries.",
"Two expert annotators assign each summary into three faithfulness categories and adjudicate the decisions.",
"[Table 5 example (Good Corrections). Type: Correcting NE Hallucination; System: BERT S2S; Original summary and our change: Tranmere Rovers have signed midfielder [Alfreton]PER → [Mooney]PER on loan until the end of the season.]",
"Additional annotations from a third expert are then used to calculate the inter-annotator agreement.",
"As the results show, our model is able to improve the faithfulness of the summaries, but at the cost of incurring intrinsic hallucinations on mistakes, which we discuss in more detail in Section 4.2.",
"Table 6 shows our selection model's performance when measuring P, R, and F1 w.r.t. all the hallucinated instances.",
"We use the test set from Maynez et al. (2020), who have annotated hallucination categories of generated summaries from four neural summarization models: PTGEN (See et al., 2017), TCONV S2S (Narayan et al., 2018a), BERT S2S, and TRAN S2S (Rothe et al., 2020).",
"Our system achieves a consistently high level of precision across models.",
"The system achieves high relative recall with respect to the % of entity and quantity hallucinations among all hallucinations.",
"As our method only targets entities and quantities, the overall recall varies by the typical type of hallucinations each summarization system makes.",
"We also observe that while our method achieves high recall on models with lower ROUGE and BERTSCORE, the recall drops on pretrained models such as BERT S2S.",
"This is potentially due to the decreased percentage of entity/quantity hallucinations in generated summaries from the models with pretraining.",
"As our method detects and corrects extrinsically hallucinated entities, naturally any entity replaced wrongly would introduce intrinsic hallucinations in the changed summary, as indicated by the results in Table 4.",
"To speculate why the mistakes happen, we analyzed the typical mistakes made by the model and list a few representative examples in Table 5.",
"For example, our method could not find the correct replacement for a hallucinated entity when no such one exists in the source text.",
"We observe that the models with pretraining, such as BERT S2S (Rothe et al., 2020) and BART, suffer from this issue the most, as they tend to be affected by artifacts/priors from the pretraining process.",
"From the observation that models often hallucinate entities with no correct replacement in the source, we suspect that solving entity faithfulness alone does not guarantee the faithfulness of the summary.",
"In the last example from Table 5, the BERT S2S system correctly identifies that three fugitives are involved in the event described by the source text, even though the number \"three\" has never been explicitly mentioned in the source context in any surface forms.",
"Furthermore, statistics provided by Maynez et al. (2020) show that abstractive summarization models often produce factual statements, i.e. statements verifiable in the real world independent of the source text.",
"Such findings imply that identifying hallucinations often requires more complex objectives such as commonsense reasoning and knowledge retrieval.",
"The solution we propose here that focuses only on entites and quantities would likely be insufficient to solve the entire problem.",
"There have been growing interests in quantitatively measuring the faithfulness of text generation models.",
"Most widely-adopted evaluation metrics for text generation, such as ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020), correlate poorly with the human perceived faithfulness of the generated text (Kryscinski et al., 2019; Durmus et al., 2020).",
"Recent studies explore categorical, content-based analysis for measuring the faithfulness of summaries (Goyal and Durrett, 2020; Deutsch and Roth, 2020).",
"Narayan et al. (2018b); Deutsch et al. (2020); Durmus et al. (2020) propose to use question answering to test the consistency of summary content to the information presented in the source text.",
"There have been efforts to study preor postprocessing methods to improving faithfulness of generated summaries.",
"Falke et al. (2019) attempt to use textual entailment models to re-rank the summary candidates generated from beam search or different neural systems.",
"As Maynez et al. (2020) highlight the existence of hallucinations in training data, truncating potentially unfaithful gold summaries during training is an effective strategy (Kang and Hashimoto, 2020; Filippova, 2020).",
"Kryscinski et al. (2020) take a similar approach to this work to identify the hallucinations in summaries.",
"A concurrent study to this work (Cao et al., 2020) uses similar strategies as in this paper on a dataset with a very small fraction of hallucinations present.",
"Our study instead focuses on the more challenging setting (Goyal and Durrett, 2021) where a large part of training data suffers from extrinsic and intrinsic hallucinations, and provides cross-system analysis on the both hallucinations categories.",
"We study contrast candidate generation and selection as a method to apply post-hoc fixes to extrinsically hallucinated entities and quantities in summaries, under the setting where the summarization dataset suffers from intrinsic and extrinsic hallucinations.",
"We conduct our experiments on the XSum dataset, and show that our method is able to correct extrinsic hallucinations, but incurs a small fraction of intrinsic hallucinations on mistakes.",
"We also provide detailed analysis and discussions on the capabilities and limitations of our method.",
"We hope our findings in the paper will provide insights to future work in this direction.",
"We thank Sunita Verma and Sugato Basu for valuable input and feedback on drafts of the paper.",
"This work was supported in part by a Focused Award from Google, a gift from Tencent, and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA).",
"The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government."
] | [
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"objective",
"method",
"method",
"method",
"result",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"objective",
"abstain",
"result",
"method",
"method",
"other",
"other",
"other"
] |
[
"Zero-shot learning aims to recognize unseen objects using their semantic representations.",
"Most existing works use visual attributes labeled by humans, not suitable for large-scale applications.",
"In this paper, we revisit the use of documents as semantic representations.",
"We argue that documents like Wikipedia pages contain rich visual information, which however can easily be buried by the vast amount of non-visual sentences.",
"To address this issue, we propose a semi-automatic mechanism for visual sentence extraction that leverages the document section headers and the clustering structure of visual sentences.",
"The extracted visual sentences, after a novel weighting scheme to distinguish similar classes, essentially form semantic representations like visual attributes but need much less human effort.",
"On the ImageNet dataset with over 10,000 unseen classes, our representations lead to a 64% relative improvement against the commonly used ones.",
"Algorithms for visual recognition usually require hundreds of labeled images to learn how to classify an object (He et al., 2016).",
"In reality, however, the frequency of observing an object follows a long-tailed distribution (Zhu et al., 2014): many objects do not appear frequently enough for us to collect sufficient images.",
"Zero-shot learning (ZSL) (Lam-pert et al., 2009), which aims to build classifiers for unseen object classes using their semantic representations , has thus emerged as a promising paradigm for recognizing a large number of classes.",
"Being the only information of unseen objects, how well the semantic representations describe the visual appearances plays a crucial role in ZSL.",
"One popular choice is visual attributes (Lampert et al., 2009; Patterson and Hays, 2012; Wah et al., 2011) carefully annotated by humans.",
"For example, the bird Red bellied Woodpecker has the capped head pattern and pointed wing shape.",
"While Figure 1: An illustration of our ZSL approach, which recognizes the input image by comparing it to the visual sentences of documents.",
"strictly tied to visual appearances, visual attributes are laborious to collect, limiting their applicability to small-scale problems with hundreds of classes.",
"For large-scale problems like ImageNet (Deng et al., 2009) that has more than 20 , 000 classes, existing ZSL algorithms (Frome et al., 2013; Norouzi et al., 2013) mostly resort to word vectors of classes names (Mikolov et al., 2013; Pennington et al., 2014) that are automatically extracted from large corpora like Common Crawl.",
"While almost labor free, word vectors are purely text-driven and barely aligned with visual information.",
"As a result, the state-of-the-art ZSL accuracy on ImageNet falls far behind being practical (Changpinyo et al., 2020).",
"Is it possible to develop semantic representations that are as powerful as visual attributes without significant human effort?",
"A feasibility study by representing a class with its Wikipedia page shows some positive signs Wikipedia pages do capture rich attribute information.",
"For example, the page Red-bellied Woodpecker contains phrases red cap going from the bill to the nape and black and white barred patterns on their back, wings and tail that exactly match the visual attributes mentioned above.",
"In other words, if we can identify visual sentences from a document to represent a class, we are likely to attain much higher ZSL accuracy 1 .",
"To this end, we present a simple yet effective semi-automatic approach for visual sentence extraction , which leverages two informative semantic cues.",
"First, we leverage the section structures of Wikipedia pages: the section header indicates what kind of sentences (visual or not) appear in the section.",
"Concretely, we search Wikipedia pages of common objects following the sysnsets in ImageNet (e.g., fish, room), and manually identify sections that contain visual information (e.g., characteristics, appearance).",
"We then apply these visual headers to the Wikipedia pages of the remaining ImageNet classes.",
"Second, we observe that visual sentences share some common contextual patterns: for example, they contain commonly used words or phrases of visual attributes (e.g., red color, furry surface).",
"To leverage these patterns, we perform K-means sentence clustering using the BERT features (Devlin et al., 2018) and manually select clusters that contain visual information.",
"We keep sentences in these clusters and combine them with those selected by section headers to represent a document.",
"See Figure 1 for an illustration.",
"To further increase the discriminative ability of the visual sentences between similar object classes (e.g., breeds of dogs), we introduce a novel scheme to assign weights to sentences, emphasizing those that are more representative for each class.",
"We validate our approach on three datasets: ImageNet Fall 2011 dataset (Deng et al., 2009), which contains 14 , 840 unseen classes with Wikipedia pages; Animals with Attributes 2 (AwA2) (Xian et al., 2018a), which has 50 animal classes; Attribute Pascal and Yahoo (aPY) (Farhadi et al., 2009), which has 32 classes.",
"Our results are promising: compared to word vectors on ImageNet, we improve by 64% using visual sentences.",
"On AwA2 and aPY, compared to visual attributes annotated by humans, we improve by 8% and 5% , respectively.",
"Moreover, our new semantic representations can be easily incorporated into any ZSL algorithms.",
"Our code and data will be available at https: //github.com/heendung/vs-zsl .",
"Semantic representations.",
"Visual attributes are the most popular semantic representations (Lam-pert et al., 2009; Patterson and Hays, 2012; Wah et al., 2011; Zhao et al., 2019).",
"However, due to the need of human annotation, the largest dataset has only 717 classes.",
"Reed et al. (2016b,a) collect visual sentences for each image, which is not scalable.",
"For large-scale recognition, word vectors (Mikolov et al., 2013) have been widely used.",
"Lu (2015); Kampffmeyer et al. (2019); Wang et al. (2018) explore the use of WordNet hierarchy (Miller, 1995), which may not be available in other applications.",
"Similar to ours, Akata et al. (2015b); Elhoseiny et al. (2013); Qiao et al. (2016); Zhu et al. (2018) represent classes by documents, by counting word frequencies but not extracting visual sentences.",
"Al-Halah and Stiefelhagen (2017) extract single word attributes, which are not discriminative enough (e.g., red cap becomes red, cap).",
"None of them works on ZSL with over 1,000 classes.",
"Hessel et al. (2018); Le Cacheux et al. (2020) collect images and tags of a class and derives its semantic representation from tags, which is not feasible for unseen classes on ZSL.",
"Zero-shot learning algorithms.",
"The most popular way is to learn an embedding space in which visual features and semantic representations are aligned and nearest neighbor classifiers can be applied (Changpinyo et al., 2017; Romera-Paredes and Torr, 2015; Akata et al., 2015a; Kodirov et al., 2017; Schonfeld et al., 2019; Zhu et al., 2019; Xie et al., 2019; Socher et al., 2013).",
"These algorithms consistently improve accuracy on datasets with attributes.",
"Their accuracy on ImageNet, however, is saturated, mainly due to the poor quality of semantic representations (Changpinyo et al., 2020).",
"ZSL algorithms learn to align visual features and semantic representations using a set of seen classes S .",
"The alignment is then applied to the test images of unseen classes U .",
"We denote by D = { ( x n , y n S ) } N n =1 the training data (i.e., image feature and label pairs) with the labels coming from S .",
"Suppose that we have access to a semantic representation a c (e.g., word vectors) for each class c S U , one popular algorithm DeViSE (Frome et al., 2013) proposes the learning objective (cid:88) n (cid:88) c (cid:54) = y n max { 0 , f (cid:62) ( x n ) M g ( a y n ) + f (cid:62) ( x n ) M g ( a c ) } , (1) where 0 is a margin.",
"That is, DeViSE tries to learn transformations f and g and a matrix M to maximize the visual and semantic alignment of Section headers Characteristics, Description, Appearance, Habitat, Diet, Construction and Mechanics, Materials for utensil, Design for appliance, Furnishings for room, Fabrication, Feature for geological formation, Design, Equipment for sport History, Health, Terminology, Mythology, Conservation, Culture, References, External links, Further reading Table 1: Visual (top) & Non-Visual (bottom) sections.",
"the same classes while minimizing that between classes.",
"We can then classify a test image x by arg max c U f (cid:62) ( x ) M g ( a c ) .",
"Here, we consider that every class c S U is provided with a document H c = { h ( c ) 1 , , h ( c ) | H c | } rather than a c , where | H c | is the amount of sentences in document H c and h ( c ) j is the j th sentence, encoded by BERT (Devlin et al., 2018).",
"We mainly study DeViSE, but our approach can easily be applied to other ZSL algorithms.",
"We aim to filter out sentences in H c that are not describing visual information.",
"We first leverage the section headers in Wikipedia pages, which indicate what types of sentences (visual or not) are in the sections.",
"For example, the page Lion has sections Description and Colour variation that are likely for visual information, and Health and Cultural significance that are for non-visual information.",
"To efficiently identify these section headers, we use ImageNet synsets (Deng et al., 2009), which group objects into 16 broad categories.",
"We randomly sample 30 35 classes per group, resulting in a set of 500 classes.",
"We then retrieve the corresponding Wikipedia pages by their names and manually identify section headers related to visual sentences.",
"By sub-sampling classes in this way, we can quickly find section headers that are applicable to other classes within the same groups.",
"Table 1 shows some visual/non-visual sections gathered from the 500 classes.",
"For example, Characteris-tics frequently appears in pages of animals to describe their appearances.",
"In contrast, sections like History or Mythology do not contain visual information.",
"Investigating all the 500 Wikipedia pages carefully, we find 40 distinct visual sections.",
"We also include the first paragraph of a Wikipedia page, which often contains visual information.",
"Our second approach uses K-means for sentence clustering: visual sentences often share common",
"words and phrases of visual attributes, naturally forming clusters.",
"We represent each sentence using the BERT features (Devlin et al., 2018), and perform K-means (with K = 100 ) over all the sentences from Wikipedia pages of ImageNet classes.",
"We then manually check the 100 clusters and identify 40 visual clusters.",
"Table 2 shows a visual (top) and a non-visual (bottom) cluster.",
"We highlight sentences related to two classes: kit-fox (red) and tiger (blue).",
"The visual cluster describes the animals' general appearances, especially about visual attributes dark, black, tail, large, etc.",
"In contrast, the non-visual cluster describes mating and lifespan that are not related to visual aspects.",
"After we obtain a filtered document H c , which contains sentences of the visual sections and clusters, the next step is to represent H c by a vector a c so that nearly all the ZSL algorithms can leverage it.",
"A simple way is average , a c = 1 | H c | (cid:80) h H c h , where h is the BERT feature.",
"This, however, may not be discriminative enough to differentiate similar classes that share many common descriptions (e.g., dog classes share common phrase like a breed of dogs and having a coat or a tail).",
"We therefore propose to identify informative sentences that can enlarge the difference of a c between classes.",
"Concretely, we learn to assign each sentence a weight , such that the resulting weighted average a c = 1 | H c | (cid:80) h H c ( h ) h can be more distinctive.",
"We model ( ) R by a multi-layer perceptron (MLP) b ( h ) = exp( b ( h )) (cid:80) h (cid:48) H c exp( b ( h (cid:48) )) .",
"We learn b to meet two criteria.",
"On the one hand, for very similar classes c and c (cid:48) whose similarity cos( a c , a c (cid:48) ) is larger than a threshold , we want cos( a c , a c (cid:48) ) to be smaller than so they can be discriminable.",
"On the other hand, for other pair of less similar classes, we want their similarity to follow the average semantic representation a c 2 .",
"To this end, we initialize b such that the initial a c is close to a c .",
"We do so by first learning b to minimize the following objective (cid:88) c SU max { 0 , (cid:15) cos( a c , a c ) } .",
"We set (cid:15) = 0 .",
"9 , forcing a c and a c of the same class to have cos( a c , a c ) > 0 .",
"9 .",
"We then fine-tune b by minimizing the following objective SU (cid:88) c SU (cid:88) c (cid:54) = c (cid:48) max { 0 , cos( a c , a c (cid:48) ) } .",
"(5) We assign a high value (e.g., 0 . 95 ) to only penalize overly similar semantic representations.",
"Please see the appendix for details.",
"Comparison.",
"Our approach is different from DAN (Iyyer et al., 2015).",
"First, we learn an MLP to assign weights to sentences so that their embeddings can be combined appropriately to differentiate classes.",
"In contrast, DAN computes the averaged embedding and learns an MLP to map it to another (more discriminative) embedding space.",
"Second, DAN leans the MLP with a classification loss.",
"In contrast, we learn the MLP to reduce the embedding similarity between similar classes while maintaining the similarity for other pairs of classes.",
"We use the ImageNet Fall 2011 dataset (Deng et al., 2009) with 21 , 842 classes.",
"We use the 1K classes in ILSVRC 2012 (Russakovsky et al., 2015) for DeViSE training and validation (cf. Equation 1), leaving the remaining 20 , 842 classes as unseen classes for testing.",
"We follow (Changpinyo et al., 2016) to consider three tasks, 2-Hop, 3-Hop, and ALL, corresponding to 1,290 , 5,984 , and 14,840 unseen classes that have Wikipedia pages and word vectors and are within two, three, and arbitrary tree hop distances (w.r.t. the ImageNet hierarchy) to the 1K classes.",
"On average, each page contains 80 sentences.",
"For images, we use the 2 , 048 -dimensional ResNet visual features (He et al., 2016) provided 2 The purpose of introducing ( ) is to improve a c from the average representation a c to differentiate similar classes.",
"by Xian et al. (2018a).",
"For sentences, we use a 12 layer pre-trained BERT model (Devlin et al., 2018).",
"We denote by BERT p the pre-trained BERT and BERT f the one fine-tuned with DeViSE.",
"Please see the appendix for details.",
"Word vectors of class names are the standard semantic representations for ImageNet.",
"Here we compare to the state-of-the-art w2v-v2 provided by Changpinyo et al. (2020), corresponding to a skip-gram model (Mikolov et al., 2013) trained with ten passes of the Wikipedia dump corpus.",
"For ours, we compare using all sentences (NO) , visual sections (Vis sec ) or visual clusters (Vis clu ) , and both (Vis sec-clu ) .",
"On average, Vis sec-clu filters out 57 % of the sentences per class.",
"We denote weighted average (Section 3.4) by BERT pw and BERT fw .",
"The original DeViSE (Frome et al., 2013) has f and g as identity functions.",
"Here, we consider a stronger version, DeViSE (cid:63) , in which we model f and g each by a two-hidden layers multi-layer perceptron (MLP).",
"We also experiment with two state-of-the-art ZSL algorithms, EXEM (Chang-pinyo et al., 2020) and HVE (Liu et al., 2020).",
"We use the average per-class Top-1 classification accuracy as the metric (Xian et al., 2018a).",
"Table 3 summarizes the results on ImageNet.",
"In combining with each ZSL algorithm, our semantic representations Vis sec-clu that uses visual sections Model Type AwA2 aPY ZSL GZSL ZSL GZSL U S H U S H DeViSE Visual attributes 59.70 17.10 74.70 27.80 37.02 3.54 78.41 6.73 w2v-v2 39.56 2.18 69.29 4.22 27.67 1.68 85.53 3.22 BERT p + Vis sec-clu 64.32 19.79 72.46 31.09 38.79 3.94 71.60 7.51 Table 4: Results on AwA2 and aPY.",
"We compare different semantic representations.",
"GZSL is the generalized ZSL setting (Xian et al., 2018a).",
"In GZSL, U , S , H denote unseen class accuracy, seen class accuracy, and their harmonic mean, respectively.",
"We use per-class Top-1 accuracy (%).",
"and visual clusters for sentence extraction outperforms w2v-v2 .",
"Visual attributes are annotated by humans.",
"More discussions are as follows.",
"BERT vs. w2v-v2.",
"For both DeViSE (cid:63) and DeViSE, BERT p by averaging all the sentences in a Wikipedia page outperforms w2v-v2, suggesting that representing a class by its document is more powerful than its word vector.",
"DeViSE (cid:63) vs. DeViSE.",
"Adding MLPs to DeViSE largely improves its accuracy: from 0 .",
"78% (De-ViSE + w2v-v2) to 1 .",
"48% (DeViSE (cid:63) + w2v-v2) at ALL.",
"In the following, we then focus on DeViSE (cid:63) .",
"Visual sentence extraction.",
"Comparing different strategies for BERT p , we see both Vis clu and Vis sec largely improves NO , demonstrating the effectiveness of sentence selection.",
"Combining the two sets of sentences ( Vis sec-clu ) leads to a further boost.",
"Fine-tuning BERT.",
"BERT can be fine-tuned together with DeViSE (cid:63) .",
"The resulting BERT f has a notable gain over BERT p (e.g., 2 . 39% vs. 2 . 05% ).",
"Weighted average.",
"With the weighted average (BERT pw , BERT fw ), we obtain the best accuracy.",
"ZSL algorithms.",
"EXEM + w2v-v2 outperforms DeViSE (cid:63) + w2v-v2, but falls behind DeViSE (cid:63) + BERT pw (or BERT f , BERT fw ).",
"This suggests that algorithm design and semantic representations are both crucial.",
"Importantly, EXEM and HVE can be improved using our proposed semantic representations, demonstrating the applicability and generalizability of our approach.",
"Table 4 summarizes the results on AwA2 (Xian et al., 2018a) and aPY (Farhadi et al., 2009).",
"The former has 40 seen and 10 unseen classes; the latter has 20 seen and 12 unseen classes.",
"We apply DeViSE together with the 2 , 048 -dimensional ResNet features (He et al., 2016) provided by Xian et al. (2018a).",
"Our proposed semantic representations (i.e., BERT p + Vis sec-clu ) outperform w2-v2 and the manually annotated visual attributes on both the ZSL and generalized ZSL (GZSL) settings.",
"Please see the appendix for the detailed experimental setup.",
"These improved results on Ima-Model Type Filter 2-Hop 3-Hop ALL BERT p No 13.84 4.05 1.75 BERT pw direct No 14.85 4.25 1.79 Par 1st 13.48 4.10 1.78 DeViSE (cid:63) Cls name 14.82 3.31 1.40 BERT p Vis sec 15.56 4.41 1.82 Vis clu 15.72 4.49 2.01 Vis sec-clu 15.86 4.65 2.05 BERT pw Vis sec-clu 16.32 4.73 2.10 Table 5: The effectiveness of our visual sentence extraction.",
"geNet, AwA2, and aPY demonstrate our proposed method's applicability to multiple datasets.",
"BERT p-w-direct : it directly learns b (Equation 3) as part of the DeViSE objective.",
"Namely, we directly learn b to identify visual sentences, without our proposed selection mechanisms, such that the resulting a c optimizes Equation 1.",
"Par 1st : it uses the first paragraph of a document.",
"Cls name : it uses the sentences of a Wikipedia page that contain the class name.",
"As shown in Table 5, our proposed sentence selection mechanisms (i.e., Vis sec , Vis clu , and Vis sec-clu ) outperform all the three baselines.",
"ZSL relies heavily on the quality of semantic representations.",
"Most recent work, however, focuses solely on algorithm design, trying to squeeze out the last bit of information from the pre-define, likely poor semantic representations.",
"Changpinyo et al. (2020) has shown that existing algorithms are trapped in the plateau of inferior semantic representations.",
"Improving the representations is thus more crucial for ZSL.",
"We investigate this direction and show promising results by extracting distinctive visual sentences from documents for representations, which can be easily used by any ZSL algorithms.",
"This research is supported by the OSU GI Development funds.",
"We are thankful for the support of computational resources by Ohio Supercomputer Center and AWS Cloud Credits for Research."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"method",
"result",
"method",
"method",
"abstain",
"objective",
"abstain",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
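The DeViSE objective reconstructed as Equation (1) in the record above is compact enough to sketch in code. The following PyTorch module is a minimal illustration under assumptions (margin value, initialization scale, and batch layout); it is not the released implementation of DeViSE or of this paper.

```python
# Minimal sketch of the DeViSE-style hinge objective (Equation 1 above).
import torch
import torch.nn as nn

class DeViSELoss(nn.Module):
    def __init__(self, vis_dim: int, sem_dim: int, margin: float = 0.1):
        super().__init__()
        self.M = nn.Parameter(torch.randn(vis_dim, sem_dim) * 0.01)
        self.margin = margin  # the Delta of Equation (1); value is an assumption

    def forward(self, fx: torch.Tensor, ga: torch.Tensor, labels: torch.Tensor):
        # fx: (B, vis_dim) transformed image features f(x_n)
        # ga: (C, sem_dim) transformed semantic representations g(a_c)
        scores = fx @ self.M @ ga.T                 # (B, C) alignment scores
        pos = scores.gather(1, labels[:, None])     # score of the true class y_n
        hinge = (self.margin - pos + scores).clamp(min=0)
        hinge.scatter_(1, labels[:, None], 0.0)     # drop the c == y_n terms
        return hinge.sum(dim=1).mean()
```

At test time, as in the record, an image x is classified as arg max over c in U of f(x)^T M g(a_c); f and g would be identity functions (DeViSE) or small MLPs (DeViSE*).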
[
"The Natural Language Understanding (NLU) component in task oriented dialog systems processes a user's request and converts it into structured information that can be consumed by downstream components such as the Dialog State Tracker (DST).",
"This information is typically represented as a semantic frame that captures the intent and slot-labels provided by the user.",
"We first show that such a shallow representation is insufficient for complex dialog scenarios, because it does not capture the recursive nature inherent in many domains.",
"We propose a recursive, hierarchical frame-based representation and show how to learn it from data.",
"We formulate the frame generation task as a template-based tree decoding task, where the decoder recursively generates a template and then fills slot values into the template.",
"We extend local tree-based loss functions with terms that provide global supervision and show how to optimize them end-to-end.",
"We achieve a small improvement on the widely used ATIS dataset and a much larger improvement on a more complex dataset we describe here.",
"The output of an NLU component is called a semantic or dialog frame (Hakkani-T ur et al., 2016).",
"The frame consists of intents which capture information about the goal of the user and slot-labels which capture constraints that need to be satisfied in order to fulfill the users' request.",
"For example, in Figure 1, the intent is to book a flight ( atis flight ) and the slot labels are the from location, to location and the date .",
"The intent detection task can be modeled as a classification problem and slot labeling as a sequential labeling problem.",
"The ATIS (Airline Travel Information System) dataset (Hakkani-Tur et al., 2010) is widely used for evaluating the NLU component.",
"We focus on complex aspects of dialog that occur in real-world Intent: atis_flight Slot-labels: from pittsburgh i'dlike to travel to atlanta on september fourth O fromloc.city_name O O O O O toloc.city_name O depart_date.monthdepart_date.day Figure 1: Flat structures used to represent Intents and slot labels in ATIS.",
"scenarios but are not captured in ATIS or other alternatives such as, DSTC (Henderson et al., 2014) or SNIPS 1 .",
"As an example, consider a reasonable user utterance, can i get two medium veggie pizza and one small lemonade (Figure 2A).",
"The intent is OrderItems .",
"There are two items mentioned, each with three properties.",
"The properties are the name of the item ( veggie pizza, lemonade ), the quantity of the item ( two, one ) and size of the item ( medium, small ).",
"These properties need to be grouped together accurately to successfully fulfill the customer's request the customer would not be happy with one small veggie pizza.",
"This structure occurs to a limited extent in the ATIS dataset (Figure 2B), which has specific forms such as, from loc.city name and to loc.city name , which must be distinguished.",
"However, the scale is small enough that these can be separate labels and multi-class slot-labeling approaches that predict each specific form as a separate class (Figure 1) have had success.",
"In more open domains, this hierarchy-to-multi-class conversion increases the number of classes exponentially vs. an approach that appropriately uses available structure.",
"Further, hierarchical relationships, e.g. between fromloc and city name , are ignored, which limits the sharing of data and statistical strength across labels.",
"The contributions of this paper are as follows: We propose a recursive, hierarchical frame-based representation that captures complex relationships between slots labels, and show how to 1 https://github.com/snipsco/nlu-benchmark/tree/master/2017-06-custom-intent-engines atis_flight fromloc toloc depart_date pittsburgh atlanta month_name day_name september fourth city_name city_name OrderItems item item item item quantity size one small quantity size two medium item name name veggie pizza lemonade from pittsburghi'dlike to travel to atlantaon septemberfourth can iget twomediumveggie pizzaand one small lemonade A B Figure 2: Hierarchical relationships between slot labels and intents.",
"learn this representation from raw user text.",
"This enables sharing statistical strength across labels.",
"Such a representation (Figure 3) also allows us to include multiple intents in a single utterance (Gan-gadharaiah and Narayanaswamy, 2019; Kim et al., 2017; Xu and Sarikaya, 2013).",
"We formulate frame generation as a template-based tree-decoding task (Section 3).",
"The value or positional information at each terminal (repre-sented by a $ ) in the template generated by the tree decoder is predicted (or filled in) using a pointer to the tokens in the input sentence (Vinyals et al., 2015; Jia and Liang, 2016).",
"This allows the system to copy over slot values directly from the input utterance.",
"We extend (local) tree-based loss functions with global supervision (Section 3.5), optimize jointly for all loss functions end-to-end and show that this improves performance (Section 4).",
"Encoder-Decoder architectures, e.g. Seq2Seq models (Sutskever et al., 2014), are a popular class of approaches to the problem of mapping source sequences (here words) to target sequences (here slot labels) of variable length.",
"Seq2Seq models have been used to generate agent responses without the need for intermediate dialog components such as the DST or the Natural Language Generator (Gan-gadharaiah et al., 2018).",
"However, there has not been much work that uses deeper knowledge of semantic representations in task-oriented dialog.",
"A notable exception is recent work by Gupta et.al (2018), who used a hierarchical representation for dialog that can be easily parsed by off-the-shelf constituency-based parsers.",
"Neural constituency parsers (Socher et al., 2011; Shen et al., 2018) work directly off the input sentence, and as a result, different sentences with the same meaning end up having different syntactic structures.",
"We define a recursive, hierarchical, frame-based representation allows us to exploit some of the structure in natural language while allowing end-to-end training.",
"Our template-based generation is similar to sketch-based Seq2Tree decoding (Dong and Lapata, 2018) developed for SQL query generation, where the decoder predicts a rough sketch of the meaning, omitting low-level details such as arguments and variable names.",
"Here, we generate templates that generalize slot values by their labels.",
"We learn to map a user's utterance x = { x 1 , x 2 , ...x n } to a template-based tree representation (Figure 2), specifically the bracketed representation in Figure",
"3. We denote the symbols in the bracketed representation by y = { y 1 , y 2 ,",
"..y m } .",
"The translation from x to y is performed using four components that are jointly trained end-to-end, (1) an encoder, (2) a slot decoder, (3) a tree decoder (Figure 4) and (4) a pointer network.",
"Each of these components is briefly explained below.",
"We use BERT (Devlin et al., 2019) as the encoder to obtain token embeddings which are fine-tuned during the end-to-end learning.",
"This can be replaced with any other choice of embedding.",
"The slot decoder accepts embeddings from the encoder, is deep, and has a dense final layer which predicts the slot label for each token position a = a 1 , a 2 , ... a n .",
"The true slot label a = a 1 , a 2 , ...a n is the general form of the label.",
"For example, city name , month name and day name are the general forms obtained Encoder Slot Decoder Tree Decoder template-based Tree Decoder a t i s _ f li g h t NT f r o m l o c t o l o c $ c i t y _ n a m e p o s i t i o n = 2 c i t y _ n a m e d e p a r t _ d a t e m o n t h _ n a m e d a y _ n a m e $ m o n t h _ n a m e p o s i t i o n = 11 $ d a y _ n a m e p o s i t i o n = 12 NTNTNTNTNTROOT c i t y _ n a m e $ c i t y _ n a m e p o s i t i o n = 9 [ hid ] [CLS] from pittsburghiwould like to travel to atlanta on september fourth .",
"from fromloc.city name , toloc.city name , depart date.month name , depart date.day name .",
"The decoder learns to predict Begin-Inside-Outside (BIO) tags, since this allows the tree decoder to focus on producing a tree form and requires the slot decoder to perform boundary detection.",
"The slot decoder is trained to minimize a supervised loss, loss SL = 1 n n (cid:88) i =1 log SL ( a i | a <i , x ) (1) where, SL is the output of the softmax layer at output position i .",
"a <i represents slot labels predicted upto position i 1 .",
"The tree decoder works top down as shown in Figure",
"4. Long Short Term Memory (LSTM) (Hochre-iter and Schmidhuber, 1997) models are used to generate tokens and symbols.",
"In the example shown in Figure 4, the decoder generates atis flight NT .",
"Here, the NT symbol stands for a non-terminal.",
"When a non-terminal is predicted, the subsequent symbol or token is predicted by applying the decoder to the hidden vector representation of the non-terminal.",
"Table 1 walks through this process with an example.",
"Each of the predicted NT s enter a queue and are expanded when popped from the queue.",
"This process continues until no more NT s are left to expand.",
"The loss function is, loss T = 1 SS (cid:88) s =1 1 T s T s (cid:88) t =1 log TD ( z st | z s<t , z s , x ) (2) S refers to the size of the queue for a given training example.",
"T s refers to the number of nodes (or children) to be generated for a non terminal in the queue, z s .",
"z st represents the t th child of the non terminal z s .",
"z s<t refers to left siblings of z st .",
"Children of z s are generated conditioned on the hidden vector of z s and the left siblings of that child.",
"The tree decoder is initialized with the [CLS] representation of the BERT encoder.",
"The tree decoder generates templates which are then filled with slot values from the user's utterance.",
"In the example, atlanta and pittsburgh are replaced by $ city name , september is replaced by $ month name and fourth is replaced by $ day name during training.",
"The $ symbol indicates a terminal.",
"We predict positions for every terminal, pointing to a specific token in the user's utterance.",
"We perform element-wise multiplication between the terminal node's hidden representation ( h ) and the encoder representations ( e ) obtained from the encoder.",
"This is followed by a feed forward layer ( g ) and a dense layer to finally assign probabilities to each position ( p ) in the input utterance.",
"That is, p t = arg max i softmax ( g ( h ( z st ) (cid:12) e ( x i ))) (3) The pointer network loss, loss PT , is the categorical cross entropy loss between p t and the true positions.",
"The four components are trained jointly end-to-end to minimize a total loss, loss G = loss SL + loss T + loss PT (4) 3.5 Global Context We found that the tree decoder tends to repeat nodes, since representations may remain similar from parent to child.",
"We overcome this by providing global supervision.",
"This global supervision does not consider the order of nodes, but rather rewards predictions if a specific node is present or not in the final tree.",
"If the model fails to predict that a node is present, the model is penalized based on the number of times it appears in the reference (or ground truth) tree.",
"Say, z 1 , ...z K is the unique set of nodes present in the reference tree and N ( z k ) is the number of times node z k occurs in the reference.",
"The representation of the [CLS] token is used to predict the presence of these nodes with the loss function, loss G = K (cid:88) k =1 N ( z k ) (cid:80) j N ( z j ) log G ( z k | x ) (5) parent children Queue contents Partially generated frame head ROOTNT 1 [ NT 1 ] ROOT ( ) NT 1 atis flight NT 2 [ NT 2 ] ROOT ( atis flight ( ) ) NT 2 fromloc NT 3 toloc NT 4 [ NT 3 , NT 4 , NT 5 ] ROOT ( atis flight ( fromloc ( ) toloc ( ) depart date NT 5 depart date ( ) ) ) NT 3 city name NT 6 [ NT 4 , NT 5 , NT 6 ] ROOT ( atis flight ( fromloc ( city name ( ) ) toloc ( ) depart date ( ) ) ) NT 4 city name NT 7 [ NT 5 , NT 6 , NT 7 ] ROOT ( atis flight ( fromloc ( city name ( ) ) toloc ( city name ( ) ) depart date ( ) ) ) NT 5 month name NT 8 [ NT 6 , NT 7 , NT 8 , NT 9 ] ROOT ( atis flight ( fromloc ( city name ( ) ) toloc ( day name NT 9 city name ( ) ) depart date ( month name ( ) day name ( ) ) ) ) NT 6 $city name [ NT 7 , NT 8 , NT 9 ] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( ) ) depart date ( month name ( ) day name ( ) ) ) ) NT 7 $city name [ NT 8 , NT 9 ] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( $city name ) ) depart date ( month name ( ) day name ( ) ) ) ) NT 8 $month name [ NT 9 ] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( $city name ) ) depart date ( month name ( $month name ) day name ( ) ) ) ) NT 9 $day name [ ] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( $city name ) ) depart date ( month name ( $month name ) day name ( $day name ) ) ) ) Table 1: Actions taken to generate the frame representation of the sentence, from pittsburgh i'd like to travel to atlanta on september fourth .",
"with overall loss, loss weighted G = loss G + loss G (6) 4 Datasets and Results We start with ATIS, the only public dataset that has even a shallow hierarchy.",
"The ATIS dataset contains audio recordings of people requesting flight reservations, with 21 intent types and 120 slot labels.",
"There are 4,478 utterances in the training set, 893 in the test set and 500 utterances in the development set.",
"We transform the ATIS dataset to the bracketed tree format (Figure 3).",
"We also evaluate the proposed approach using a simulated ordering dataset (example in Figure 3).",
"The dataset contains 2 intents and 7 slot labels, 4767 training examples, 1362 test examples and 681 development examples.",
"We manually created templates for every intent (i.e, OrderItems, GetTotal).",
"An intent is randomly sampled, then a template along with a number of items and slot values for each of the properties of the items are randomly drawn to generate an utterance and a bracketed representation for the utterance 2 .",
"We evaluate both the generalized and the specific forms generated by the proposed model (Figure 5) in Table 2.",
"The exact match criteria requires that the predicted tree completely match the reference tree.",
"As this metric does not assign any credit to partial matches, we also compare all parent child relationships between the reference and the predicted trees and compute micro-f1 scores (Lipton et al., 2014).",
"As seen in Table 2, the best performance both on f-measure and accuracy is obtained with the weighted G loss function.",
"We also compare with a reasonable baseline that extends the traditional flat structured frame (Figure 1) in a way that captures hierarchies.",
"We learn to predict group information along with the slot labels (Baseline in Table 3) by appending indices to the labels that indicate which group the slot label belongs to.",
"Consider, i want to fly from milwaukee to orlando on either wednesday evening or thursday morning .",
"This example requires capturing two groups of information as shown in Figure 6.",
"Group0 contains all the necessary pieces of information for traveling on wednesday evening and Group1 contains information for traveling on thursday morning .",
"As shown, milwaukee and orlando are present in both the groups.",
"We can represent the two day name s (and period of day ) with Batis flight.depart date.day name 0 and Batis flight.depart date.day name 1 .",
"We can then use B-atis flight.fromloc.city name 01 and B-atis flight.toloc.city name 01 to indicate that they belong to both the groups.",
"Such an approach increases the number of unique slot labels, resulting in fewer training examples for each slot label, but allows multi-class classification methods from prior work to be used as is.",
"We then train and test the model using the approach that provided highest slot labeling scores which used BERT (Chen et al., 2019).",
"We also convert the generated output of the hierarchical method proposed in this paper to the flat format above.",
"Note, the f1 scores we obtain here are different from those reported in Table 2 as here we only consider the most specific label (eg. Batis flight.toloc.city name 01 ) as the true slot label for a token versus the f1 measure over all the parent child relationships in Table 2.",
"Since adding group information increases the number of unique slot labels, the results reported for the Baseline are different from what has been reported in (Chen et al., 2019).",
"We notice a large improvement with the proposed approach on the simulated dataset.",
"This implies that modeling hierarchical relationships between slot labels via a tree decoder is indeed helpful.",
"The small improvement we see on ATIS can be attributed to the fact that only a small fraction of the test data required grouping information ( 1.7%).",
"With this preliminary work, we showed cases where traditional flat semantic representations fail to capture slot label dependencies and we highlighted the need for deep hierarchical semantic representations for dialog frames.",
"The proposed recursive, hierarchical frame-based representation captures complex relationships between slots labels.",
"We also proposed an approach using a template-based tree decoder to generate these hierarchical representations from users' utterances.",
"We also introduced global supervision by extending the tree-based loss function, and showed that it is possible to learn all this end-to-end.",
"As future work, we are extending the proposed approach and test its efficacy on real human conversations.",
"More broadly, we continue to explore strategies that combine semantic parsing and neural networks for frame generation."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"objective",
"objective",
"objective",
"objective"
] |
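The pointer step of Equation (3) in the record above also lends itself to a short sketch. The module below is an illustration, not the authors' code: the single-layer feed-forward g, the scalar-scoring dense layer, and all dimensions are assumptions.

```python
# Sketch of the terminal pointer of Equation (3) above: score each input
# position by element-wise multiplying the terminal's hidden state with the
# encoder states, then apply a feed-forward layer g and a dense layer.
import torch
import torch.nn as nn

class TerminalPointer(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.out = nn.Linear(hidden_dim, 1)  # dense layer: one score per position

    def forward(self, h_terminal: torch.Tensor, enc_states: torch.Tensor):
        # h_terminal: (hidden_dim,) hidden state h(z_st) of a $-terminal node
        # enc_states: (n, hidden_dim) encoder representations e(x_i)
        scores = self.out(self.g(h_terminal * enc_states)).squeeze(-1)  # (n,)
        probs = torch.softmax(scores, dim=-1)
        return probs.argmax(), probs  # predicted position p_t and its distribution
```

Training would minimize the categorical cross-entropy between `probs` and the gold position, i.e., the loss_PT term that enters Equation (4).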