Dataset columns (one record per chunk):
doi: string (length 10)
chunk-id: int64 (0–936)
chunk: string (401–2.02k chars)
id: string (12–14 chars)
title: string (8–162 chars)
summary: string (228–1.92k chars)
source: string (31 chars)
authors: string (7–6.97k chars)
categories: string (5–107 chars)
comment: string (4–398 chars)
journal_ref: string (8–194 chars)
primary_category: string (5–17 chars)
published: string (8 chars)
updated: string (8 chars)
references: list
1708.00489
18
We first present this bound for any loss function which is Lipschitz for a fixed true label y and parameters w, and then show that the loss functions of CNNs with ReLU non-linearities satisfy this property. We also rely on the zero-training-error assumption. Although zero training error is not an entirely realistic assumption, our experiments suggest that the resulting upper bound is very effective. We state the following theorem:

Theorem 1. Given n i.i.d. samples drawn from p_Z as {x_i, y_i}_{i∈[n]} and a set of points s, if the loss function l(·, y, w) is λ^l-Lipschitz continuous for all y, w and bounded by L, the regression function is λ^η-Lipschitz, s is a δ_s cover of {x_i, y_i}_{i∈[n]}, and l(x_{s(j)}, y_{s(j)}; A_s) = 0 ∀j ∈ [m], then with probability at least 1 − γ,

| (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) − (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s) | ≤ δ_s (λ^l + λ^η L C) + √( L² log(1/γ) / (2n) ).
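As a concrete illustration of the covering radius δ_s that drives this bound, here is a minimal numpy sketch that computes it for a candidate subset under Euclidean distance; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def covering_radius(features: np.ndarray, s_idx) -> float:
    """delta_s: the largest distance from any sample to its nearest selected point."""
    # (n, |s|) matrix of distances from every sample to every selected point
    dists = np.linalg.norm(features[:, None, :] - features[s_idx][None, :, :], axis=-1)
    return float(dists.min(axis=1).max())
```

For example, covering_radius(feats, [3, 17, 42]) returns the radius that balls centered at the three selected rows of feats would need in order to cover every row.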
1708.00489#18
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
19
Since we assume zero training error on the core-set, the core-set loss is equal to the average error over the entire dataset:

| (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) − (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s) | = (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s).

We state the theorem in this form to be consistent with (3). We visualize this theorem in Figure 1 and defer its proof to the appendix. In this theorem, "a set s is a δ cover of a set s*" means that a set of balls with radius δ centered at each member of s can cover the entire s*. Informally, this theorem suggests that we can bound the core-set loss by the covering radius and a term which goes to zero at a rate depending solely on n. This is an interesting result since the bound does not depend on the number of labelled points. In other words, a provided label does not help the core-set loss unless it decreases the covering radius. In order to show that this bound applies to CNNs, we prove the Lipschitz continuity of the loss function of a CNN with respect to the input image for a fixed true label with the following lemma, where max-pool and rectified linear units are the non-linearities and the loss is defined as the l2
1708.00489#19
1708.00489
20
distance between the desired class probabilities and the soft-max outputs. CNNs are typically used with the cross-entropy loss for classification problems in the literature. Indeed, we also perform our experiments using the cross-entropy loss, although we use the l2 loss in our theoretical study. Although our theoretical study does not extend to the cross-entropy loss, our experiments suggest that the resulting algorithm is very effective for it.

Lemma 1. The loss function defined as the 2-norm between the class probabilities and the softmax output of a convolutional neural network with n_c convolutional (with max-pool and ReLU) and n_fc fully connected layers defined over C classes is a Lipschitz function of the input for fixed class probabilities and network parameters; the Lipschitz constant, stated in the appendix, depends on α, the maximum sum of input weights per neuron (see the appendix for the formal definition). Although α is in general unbounded, it can be made arbitrarily small without changing the loss function behavior (i.e., keeping the label of every data point unchanged). We defer the proof to the appendix and conclude that CNNs enjoy the bound we presented in Theorem 1.
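A minimal sketch of the α quantity named in the lemma, assuming it is the maximum over all neurons of the sum of absolute incoming weights; the paper's appendix gives the formal definition, so treat this as illustrative only.

```python
import numpy as np

def max_input_weight_sum(weight_matrices) -> float:
    """Illustrative alpha: max over neurons of the sum of absolute incoming weights
    (assumed reading of 'maximum sum of input weights per neuron')."""
    alpha = 0.0
    for W in weight_matrices:                 # each W of shape (fan_in, fan_out)
        per_neuron = np.abs(W).sum(axis=0)    # one incoming-weight sum per output neuron
        alpha = max(alpha, float(per_neuron.max()))
    return alpha
```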
1708.00489#20
1708.00489
21
In order to computationally perform active learning, we use this upper bound. In other words, the practical problem of interest becomes min_{s^1: |s^1| ≤ b} δ_{s^0 ∪ s^1}. This problem is equivalent to the k-Center problem (also called the min-max facility location problem) (Wolf, 2011). In the next section, we explain how we solve the k-Center problem in practice using a greedy approximation.

# 4.3 SOLVING THE K-CENTER PROBLEM

We have so far provided an upper bound for the loss function of the core-set selection problem and showed that minimizing it is equivalent to the k-Center problem (minimax facility location (Wolf, 2011)), which can intuitively be defined as follows: choose b center points such that the largest distance between a data point and its nearest center is minimized. Formally, we are trying to solve:
1708.00489#21
1708.00489
23
Unfortunately this problem is NP-Hard (Cook et al., 1998). However, it is possible to obtain a 2-OPT solution efficiently using the greedy approach shown in Algorithm 1. If OPT = min_{s^1} max_i min_{j∈s^1∪s^0} ∆(x_i, x_j), the greedy algorithm shown in Algorithm 1 is proven to find a solution s^1 such that max_i min_{j∈s^1∪s^0} ∆(x_i, x_j) ≤ 2 × OPT. Although the greedy algorithm gives a good initialization, in practice we can improve on the 2-OPT solution by iteratively querying upper bounds on the optimal value. In other words, we can design an algorithm which decides whether OPT ≤ δ. In order to do so, we define a mixed integer program (MIP) parametrized by δ such that its feasibility indicates min_{s^1} max_i min_{j∈s^1∪s^0} ∆(x_i, x_j) ≤ δ.
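Algorithm 1 itself is not reproduced in this excerpt; the sketch below shows the standard farthest-first greedy for the k-Center objective just described, assuming Euclidean distances on precomputed features and a non-empty initial pool, with names of our own choosing.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, s0, budget: int):
    """Farthest-first traversal: repeatedly add the point farthest from the
    current centers; a classical 2-approximation for the k-Center objective."""
    centers = list(s0)                                        # assumes a non-empty initial pool
    # distance from every point to its nearest existing center
    min_dist = np.linalg.norm(
        features[:, None, :] - features[centers][None, :, :], axis=-1
    ).min(axis=1)
    chosen = []
    for _ in range(budget):
        idx = int(np.argmax(min_dist))                        # farthest point from current centers
        chosen.append(idx)
        centers.append(idx)
        min_dist = np.minimum(min_dist, np.linalg.norm(features - features[idx], axis=1))
    return chosen
```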
1708.00489#23
1708.00489
24
A straightforward algorithm is to use this MIP as a subroutine and perform a binary search between the result of the greedy algorithm and its half, since the optimal solution is guaranteed to lie in that range. While constructing this MIP, we also address one of the weaknesses of the k-Center algorithm, namely its lack of robustness. To make the k-Center problem robust, we assume an upper limit Ξ on the number of outliers, so that our algorithm may choose not to cover at most Ξ unsupervised data points. This mixed integer program can be written as:
1708.00489#24
1708.00489
25
Feasible(b, s^0, δ, Ξ):
Σ_i u_i = |s^0| + b,    Σ_{i,j} ξ_{i,j} ≤ Ξ,
Σ_j ω_{i,j} = 1 ∀i,    ω_{i,j} ≤ u_j ∀i, j,
u_i = 1 ∀i ∈ s^0,    u_i ∈ {0,1} ∀i,
ω_{i,j} = ξ_{i,j} ∀i, j | ∆(x_i, x_j) > δ.    (6)

In this formulation, u_i is 1 if the ith data point is chosen as a center, ω_{i,j} is 1 if the ith point is covered by the jth point, and ξ_{i,j} is 1 if the ith point is an outlier covered by the jth point without the δ constraint, and 0 otherwise. All variables are binary: u_i, ω_{i,j}, ξ_{i,j} ∈ {0, 1}. We further visualize these variables in a diagram in Figure 2 and give the details of the method in Algorithm 2.
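The paper checks feasibility of (6) with Gurobi; as a hedged illustration only, the sketch below expresses the same formulation with the open-source PuLP/CBC stack, with our own function and variable names and a dense distance matrix.

```python
import pulp

def feasible(dist, s0, budget, delta, xi_cap):
    """Feasibility check for the robust k-Center MIP (6): can |s0|+budget centers
    cover all but at most xi_cap points within radius delta?"""
    n = len(dist)
    prob = pulp.LpProblem("robust_k_center", pulp.LpMinimize)
    u = [pulp.LpVariable(f"u_{i}", cat="Binary") for i in range(n)]
    w = {(i, j): pulp.LpVariable(f"w_{i}_{j}", cat="Binary") for i in range(n) for j in range(n)}
    xi = {(i, j): pulp.LpVariable(f"xi_{i}_{j}", cat="Binary") for i in range(n) for j in range(n)}
    prob += pulp.lpSum(u)                                   # any objective works; we only need feasibility
    prob += pulp.lpSum(u) == len(s0) + budget               # number of centers
    prob += pulp.lpSum(xi.values()) <= xi_cap               # outlier budget Xi
    for i in s0:
        prob += u[i] == 1                                   # already-labelled points stay selected
    for i in range(n):
        prob += pulp.lpSum(w[i, j] for j in range(n)) == 1  # each point assigned to one center
        for j in range(n):
            prob += w[i, j] <= u[j]                         # only selected points may cover
            if dist[i][j] > delta:
                prob += w[i, j] == xi[i, j]                 # far coverings count as outliers
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    ok = pulp.LpStatus[prob.status] == "Optimal"
    centers = [i for i in range(n) if ok and u[i].value() == 1]
    return ok, centers
```

With n² coverage variables this sketch is only practical for small pools; it is meant to illustrate the formulation, not to reproduce the paper's Gurobi setup.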
1708.00489#25
1708.00489
26
# Algorithm 2 Robust k-Center
Input: data x_i, existing pool s^0, budget b and outlier bound Ξ
Initialize s_g = k-Center-Greedy(x_i, s^0, b)
δ_{2-OPT} = max_j min_{i∈s_g} ∆(x_i, x_j)
lb = δ_{2-OPT}/2, ub = δ_{2-OPT}
repeat
    if Feasible(b, s^0, (lb+ub)/2, Ξ) then
        ub = max_{i,j | ∆(x_i,x_j) ≤ (lb+ub)/2} ∆(x_i, x_j)
    else
        lb = min_{i,j | ∆(x_i,x_j) ≥ (lb+ub)/2} ∆(x_i, x_j)
    end if
until ub = lb
return {i s.t. u_i = 1}

Figure 2: Visualization of the variables. In this solution, the 4th node is chosen as a center and nodes 0, 1, 3 are in a δ ball around it. The 2nd node is marked as an outlier.
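Putting the pieces together, here is a hedged Python rendering of the Algorithm 2 loop, reusing the k_center_greedy and feasible sketches above; restricting lb and ub to actual pairwise distances follows the update rules in the pseudocode and guarantees termination.

```python
import numpy as np

def robust_k_center(features, s0, budget, xi_cap):
    """Binary search over the covering radius, in the spirit of Algorithm 2."""
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    sg = list(s0) + k_center_greedy(features, s0, budget)
    delta_2opt = dist[:, sg].min(axis=1).max()
    candidates = np.unique(dist)                     # the only radii worth testing
    lb, ub = delta_2opt / 2.0, delta_2opt
    best = sg                                        # fall back to the greedy solution
    while lb < ub:
        mid = (lb + ub) / 2.0
        ok, centers = feasible(dist, s0, budget, mid, xi_cap)
        if ok:
            best = centers
            ub = candidates[candidates <= mid].max() # largest distance not above mid
        else:
            lb = candidates[candidates >= mid].min() # smallest distance not below mid
    return best
```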
1708.00489#26
1708.00489
27
# IMPLEMENTATION DETAILS

One of the critical design choices is the distance metric ∆(·, ·). We use the l2 distance between activations of the final fully-connected layer as the distance. For weakly-supervised learning, we used Ladder networks (Rasmus et al., 2015), and for all experiments we used VGG-16 (Simonyan & Zisserman, 2014) as the CNN architecture. We initialized all convolutional filters according to He et al. (2016). We optimized all models using RMSProp with a learning rate of 1e−3 using TensorFlow (Abadi et al., 2016). We train CNNs from scratch after each iteration. We used the Gurobi (Inc., 2016) framework for checking feasibility of the MIP defined in (6). As an upper bound on outliers, we used Ξ = 1e−4 × n, where n is the number of unlabelled points.
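As a small illustration of the distance metric, the sketch below computes pairwise l2 distances from an (n, d) matrix of already-extracted final fully-connected-layer activations; how the activations are obtained from the trained network is framework-specific and not shown here.

```python
import numpy as np

def pairwise_l2(activations: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between rows of an (n, d) activation matrix."""
    sq = (activations ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * activations @ activations.T
    return np.sqrt(np.maximum(d2, 0.0))  # clip tiny negative values from round-off
```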
1708.00489#27
1708.00489
28
# 5 EXPERIMENTAL RESULTS

We tested our algorithm on the problem of classification using three different datasets. We performed experiments on the CIFAR (Krizhevsky & Hinton, 2009) dataset for image classification and on the SVHN (Netzer et al., 2011) dataset for digit classification. The CIFAR (Krizhevsky & Hinton, 2009) dataset has two tasks: a coarse-grained one over 10 classes and a fine-grained one over 100 classes. We performed experiments on both.
1708.00489#28
1708.00489
29
We compare our method with the following baselines: i) Random: choosing the points to be labelled uniformly at random from the unlabelled pool. ii) Best Empirical Uncertainty: following the empirical setup in (Gal et al., 2017), we perform active learning using max-entropy, BALD and Variation Ratios, treating soft-max outputs as probabilities. We only report the best-performing one for each dataset since they perform similarly to each other. iii) Deep Bayesian Active Learning (DBAL) (Gal et al., 2017): we perform Monte Carlo dropout to obtain improved uncertainty measures and report only the best-performing acquisition function among max-entropy, BALD and Variation Ratios for each dataset. iv) Best Oracle Uncertainty: we also report a best-performing oracle algorithm which uses the label information for the entire dataset. We replace the uncertainty with l(x_i, y_i, A_{s^0}) for all unlabelled examples and sample the queries from the normalized form of this function, setting the probability of choosing the ith point to be queried to p_i = l(x_i, y_i, A_{s^0}) / Σ_j l(x_j, y_j, A_{s^0}) (a sampling sketch follows the baseline list below).
1708.00489#29
1708.00489
30
v) k-Median: choosing the points to be labelled as the cluster centers of the k-Median algorithm (k equal to the budget). vi) Batch Mode Discriminative-Representative Active Learning (BMDR) (Wang & Ye, 2015): an ERM-based approach which uses uncertainty and minimizes the MMD between i.i.d. samples from the dataset and the actively chosen points. vii) CEAL (Wang et al., 2016): a weakly-supervised active learning method proposed specifically for CNNs; we include it in the weakly-supervised analysis.
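A hedged sketch of the Best Oracle Uncertainty query step described above, assuming a vector of per-example oracle losses and sampling without replacement; the names and the without-replacement choice are ours, not the paper's.

```python
import numpy as np

def oracle_uncertainty_query(losses: np.ndarray, budget: int, seed: int = 0) -> np.ndarray:
    """Sample `budget` indices with probability proportional to per-example loss."""
    rng = np.random.default_rng(seed)
    p = losses / losses.sum()
    return rng.choice(len(losses), size=budget, replace=False, p=p)
```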
1708.00489#30
1708.00489
31
[Figure 3: Results on Active Learning for Weakly-Supervised Model (error bars are std-dev). Panels: CIFAR-10, CIFAR-100 and SVHN; y-axis: Classification Accuracy (%); x-axis: Number of Labelled Images (ratio); methods: Random, Empirical-Unc., Oracle-Unc., DBAL (Gal et al., 2017), BMDR (Wang & Ye, 2015), CEAL (Wang et al., 2016), K-Median, Our Method.]
1708.00489#31
1708.00489
32
[Figure 4: Results on Active Learning for Fully-Supervised Model (error bars are std-dev). Panels: CIFAR-10, CIFAR-100 and SVHN; y-axis: Classification Accuracy (%); x-axis: Number of Labelled Images (ratio); methods: Random, Empirical-Unc., Oracle-Unc., DBAL (Gal et al., 2017), BMDR (Wang & Ye, 2015), K-Median, Our Method.]
1708.00489#32
1708.00489
33
We conducted experiments on active learning for fully-supervised models as well as active learning for weakly-supervised models. In our experiments, we start with a small set of images sampled uniformly at random from the dataset as an initial pool. The weakly-supervised model has access to labeled examples as well as unlabelled examples; the fully-supervised model only has access to the labeled data points. We run all experiments with five random initializations of the initial pool of labeled points and use the average classification accuracy as the metric. We plot accuracy vs. the number of labeled points, with error bars showing standard deviations. We run the query algorithm iteratively; in other words, we solve the discrete optimization problem min_{s^{k+1}: |s^{k+1}| ≤ b} E_{x,y∼p_Z}[l(x, y; A_{s^0 ∪ ... ∪ s^{k+1}})] for each point on the accuracy vs. number of labelled examples graph. We present the results in Figures 3 and 4.
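To make the iterative protocol concrete, here is a hedged outline of the outer loop, reusing the robust_k_center sketch above; train_model, extract_features and label_oracle stand in for the experiment-specific pieces (training VGG-16 from scratch, reading final fully-connected activations, and obtaining labels) and are supplied by the caller rather than defined in the paper.

```python
def active_learning_run(pool_x, s0, budget, rounds, xi_cap,
                        train_model, extract_features, label_oracle):
    """Batch active learning loop: train, embed, select a core-set batch, query, repeat."""
    labeled = list(s0)
    for _ in range(rounds):
        model = train_model(pool_x, labeled)                      # e.g. CNN trained from scratch
        feats = extract_features(model, pool_x)                   # e.g. final-FC activations
        batch = robust_k_center(feats, labeled, budget, xi_cap)   # core-set selection (sketch above)
        new_points = [i for i in batch if i not in labeled]
        label_oracle(new_points)                                  # acquire labels for the new points
        labeled.extend(new_points)
    return labeled
```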
1708.00489#33
1708.00489
34
Figures 3 and 4 suggest that our algorithm outperforms all other baselines in all experiments; for the case of weakly-supervised models, by a large margin. We believe the effectiveness of our approach in the weakly-supervised case is due to better feature learning: weakly-supervised models provide better feature spaces, resulting in more accurate geometries, and since our method is geometric, it performs significantly better with better feature spaces. We also observed that our algorithm is less effective on CIFAR-100 than on CIFAR-10 and SVHN. This can easily be explained using our theoretical analysis: our bound on the core-set loss scales with the number of classes, hence it is better to have fewer classes.
1708.00489#34
1708.00489
35
One interesting observation is that a state-of-the-art batch-mode active learning baseline (BMDR (Wang & Ye, 2015)) does not necessarily perform better than greedy ones. We believe this is due to the fact that it still uses uncertainty information, and soft-max probabilities are not a good proxy for uncertainty. Our method does not use any uncertainty, and incorporating uncertainty into our method in a principled way is an open problem and a fruitful future research direction. On the other hand, a pure clustering-based batch active learning baseline (k-Median) is also not effective. We believe this is rather intuitive, since cluster centers are likely to be points which are already well covered by the initial i.i.d. samples; hence, this clustering-based method fails to sample the tails of the data distribution. Our results suggest that both oracle uncertainty information and Bayesian estimation of uncertainty are helpful, since they improve over the empirical uncertainty baseline; however, they are still not effective in the batch setting, since random sampling outperforms them. We believe this is due to the correlation in the queried labels as a consequence of active learning in the batch setting. We further investigate this with a qualitative analysis via tSNE (Maaten & Hinton, 2008) embeddings. We compute embeddings for all points using the features which are learned using the labelled examples and visualize the points
1708.00489#35
1708.00489
36
sampled by our method as well as by the oracle uncertainty baseline. This visualization suggests that, due to the correlation among samples, uncertainty-based methods fail to cover a large portion of the space, confirming our hypothesis.

Figure 5: tSNE embeddings of the CIFAR dataset and the behavior of the uncertainty oracle as well as our method ((a) Uncertainty Oracle, (b) Our Method). For both methods, the initial labeled pool of 1000 images is shown in blue, the 1000 images chosen to be labeled in green, and the remaining ones in red. Our algorithm results in queries that evenly cover the space; the samples chosen by the uncertainty oracle fail to cover a large portion of the space.

Table 1: Average run-time of our algorithm for b = 5k and |s0| = 10k, in seconds. Distance Matrix: 104.2; Greedy (2-OPT): 2; MIP (per iteration): 7.5; MIP (total): 244.03; Total: 360.23.

Figure 6: We compare our method with k-Center-Greedy. Our algorithm results in a small but important accuracy improvement.
1708.00489#36
1708.00489
37
Optimality of the k-Center Solution: Our proposed method uses the greedy 2-OPT solution for the k-Center problem as an initialization and checks the feasibility of a mixed integer program (MIP). We use an LP-relaxation of the defined MIP and branch-and-bound to obtain integer solutions. The utility obtained by solving this expensive MIP should be investigated. We compare the average run-time of the MIP¹ with the run-time of the 2-OPT solution in Table 1. We also compare the accuracy obtained with the optimal k-Center solution and the 2-OPT solution on the CIFAR-100 dataset in Figure 6. As shown in Table 1, although the run-time of the MIP is not polynomial in the worst case, in practice it converges in a tractable amount of time for a dataset of 50k images; hence, our algorithm can easily be applied in practice. Figure 6 suggests a small but significant drop in accuracy when the 2-OPT solution is used. Hence, we conclude that unless the scale of the dataset is too restrictive, using our proposed optimal solver is desirable. Even with the accuracy drop, our active learning strategy using the 2-OPT solution still outperforms the other baselines, so we conclude that our algorithm can scale to any dataset size with a small accuracy drop even if solving the MIP is not feasible.
1708.00489#37
1708.00489
38
# 6 CONCLUSION

We study the active learning problem for CNNs. Our empirical analysis showed that classical uncertainty-based methods have limited applicability to CNNs due to the correlations caused by batch sampling. We re-formulate the active learning problem as core-set selection and study the core-set problem for CNNs. We further validate our algorithm with an extensive empirical study. Empirical results on three datasets showed state-of-the-art performance by a large margin.

¹ On Intel Core [email protected] and 64GB memory.

# REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.

C. Berlind and R. Urner. Active nearest neighbors in changing environments. In ICML, 2015.

Klaus Brinker. Incorporating diversity in active learning with support vector machines. In ICML, volume 3, pp. 59–66, 2003.
1708.00489#38
1708.00489
39
William J. Cook, William H. Cunningham, William R. Pulleyblank, and Alexander Schrijver. Combinatorial Optimization, volume 605. Springer, 1998.

Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems 17, pp. 337–344. MIT Press, 2005. URL http://papers.nips.cc/paper/2636-analysis-of-a-greedy-active-learning-strategy.pdf.

Begüm Demir, Claudio Persello, and Lorenzo Bruzzone. Batch-mode active-learning methods for the interactive classification of remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 49(3):1014–1031, 2011.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv:1605.09782, 2016.
1708.00489#39
1708.00489
40
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv:1606.00704, 2016.

Ehsan Elhamifar, Guillermo Sapiro, Allen Yang, and S. Shankar Sastry. A convex optimization framework for active learning. In ICCV, 2013.

Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3), 1997.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, 2016.

Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep Bayesian active learning with image data. arXiv preprint arXiv:1703.02910, 2017.

Ravi Ganti and Alexander Gray. UPAL: Unbiased pool based active learning. In Artificial Intelligence and Statistics, pp. 422–431, 2012.
1708.00489#40
1708.00489
41
Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.

Alon Gonen, Sivan Sabato, and Shai Shalev-Shwartz. Efficient active learning of halfspaces: an aggressive approach. The Journal of Machine Learning Research, 14(1):2583–2615, 2013.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.

Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. arXiv:1002.3345, 2010.

Yuhong Guo. Active instance sampling via matrix partition. In Advances in Neural Information Processing Systems, pp. 802–810, 2010.

Yuhong Guo and Dale Schuurmans. Discriminative batch mode active learning. In Advances in Neural Information Processing Systems, pp. 593–600, 2008.
1708.00489#41
1708.00489
42
Yuhong Guo and Dale Schuurmans. Discriminative batch mode active learning. In Advances in neural information processing systems, pp. 593–600, 2008. Steve Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th international conference on Machine learning, pp. 353–360. ACM, 2007. Sariel Har-Peled and Akash Kushal. Smaller coresets for k-median and k-means clustering. In Annual Symposium on Computational geometry. ACM, 2005. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Steven CH Hoi, Rong Jin, Jianke Zhu, and Michael R Lyu. Batch mode active learning and its application to medical image classification. In Proceedings of the 23rd international conference on Machine learning, pp. 417–424. ACM, 2006. Gurobi Optimization Inc. Gurobi optimizer reference manual, 2016. URL http://www.gurobi.com.
1708.00489#42
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
43
Gurobi Optimization Inc. Gurobi optimizer reference manual, 2016. URL http://www.gurobi.com. Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning for image classification. In CVPR, 2009. A. J. Joshi, F. Porikli, and N. Papanikolopoulos. Multi-class batch-mode active learning for image classification. In 2010 IEEE International Conference on Robotics and Automation, pp. 1873–1878, May 2010. doi: 10.1109/ROBOT.2010.5509293. Ashish Kapoor, Kristen Grauman, Raquel Urtasun, and Trevor Darrell. Active learning with gaussian processes for object categorization. In ICCV, 2007. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. Xin Li and Yuhong Guo. Adaptive active learning for image classification. In CVPR, 2013. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
1708.00489#43
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
44
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008. David JC MacKay. Information-based objective functions for active data selection. Neural computation, 4(4):590–604, 1992. Andrew Kachites McCallum and Kamal Nigam. Employing EM and pool-based active learning for text classification. In ICML, 1998. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, pp. 5, 2011. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015. Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015. Nicholas Roy and Andrew McCallum. Toward optimal active learning through monte carlo estimation of error reduction. ICML, 2001.
1708.00489#44
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
45
Nicholas Roy and Andrew McCallum. Toward optimal active learning through monte carlo estimation of error reduction. ICML, 2001. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016. Burr Settles. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11, 2010. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014. Fabian Stark, Caner Hazırbas, Rudolph Triebel, and Daniel Cremers. Captcha recognition with active deep learning. In GCPR Workshop on New Challenges in Neural Computation, 2015. Simon Tong and Daphne Koller. Support vector machine active learning with applications to text classification. JMLR, 2(Nov):45–66, 2001. Ivor W Tsang, James T Kwok, and Pak-Ming Cheung. Core vector machines: Fast svm training on very large data sets. JMLR, 6(Apr):363–392, 2005.
1708.00489#45
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
46
Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-effective active learning for deep image classification. Transactions on Circuits and Systems for Video Technology, 2016. Zheng Wang and Jieping Ye. Querying discriminative and representative samples for batch mode active learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 9(3):17, 2015. Kai Wei, Yuzong Liu, Katrin Kirchhoff, and Jeff A Bilmes. Using document summarization techniques for speech data subset selection. In HLT-NAACL, 2013. Kai Wei, Rishabh K Iyer, and Jeff A Bilmes. Submodularity in data subset selection and active learning. In ICML, 2015. Gert W Wolf. Facility location: concepts, models, algorithms and case studies, 2011. Huan Xu and Shie Mannor. Robustness and generalization. Machine learning, 86(3):391–423, 2012. Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 113(2):113–127, 2015.
1708.00489#46
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
47
In Proceedings of the 23rd international conference on Machine learning, pp. 1081–1088. ACM, 2006.

A PROOF FOR LEMMA 1

Proof. We will start with showing that the softmax function defined over C classes is $\frac{\sqrt{C-1}}{C}$-Lipschitz continuous. It is easy to show that for any differentiable function $f : \mathbb{R}^n \to \mathbb{R}^m$,
$$\|f(\mathbf{x}) - f(\mathbf{y})\|_2 \le \|J\|_F^\star \, \|\mathbf{x} - \mathbf{y}\|_2 \quad \forall \mathbf{x}, \mathbf{y} \in \mathbb{R}^n$$
where $\|J\|_F^\star = \max_{\mathbf{x}} \|J(\mathbf{x})\|_F$ and $J$ is the Jacobian matrix of $f$. The softmax function is defined as $f(\mathbf{x})_i = \frac{\exp(x_i)}{\sum_{j=1}^{C} \exp(x_j)}$. For brevity, we will denote $f_i(\mathbf{x})$ as $f_i$. The Jacobian matrix will be
$$J = \begin{bmatrix} f_1(1-f_1) & -f_1 f_2 & \dots & -f_1 f_C \\ -f_2 f_1 & f_2(1-f_2) & \dots & -f_2 f_C \\ \vdots & \vdots & \ddots & \vdots \\ -f_C f_1 & -f_C f_2 & \dots & f_C(1-f_C) \end{bmatrix}$$
Now, the Frobenius norm of the above matrix will be
$$\|J\|_F = \sqrt{\sum_{i=1}^{C} \sum_{\substack{j=1 \\ j \ne i}}^{C} f_i^2 f_j^2 + \sum_{i=1}^{C} f_i^2 (1-f_i)^2}$$
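Not part of the source paper: a short NumPy sketch that checks the Jacobian formula above by finite differences and evaluates its Frobenius norm at the uniform output $f_i = 1/C$, where it equals $\sqrt{C-1}/C$, the constant used in Lemma 1.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a single logit vector.
    z = np.exp(x - x.max())
    return z / z.sum()

def softmax_jacobian(f):
    # J[i, j] = f_i * (1{i==j} - f_j), i.e. diag(f) - f f^T.
    return np.diag(f) - np.outer(f, f)

C = 10
rng = np.random.default_rng(0)
x = rng.normal(size=C)

# Finite-difference check of the Jacobian formula at a random point.
eps = 1e-6
J_fd = np.stack([(softmax(x + eps * e) - softmax(x - eps * e)) / (2 * eps)
                 for e in np.eye(C)], axis=1)
assert np.allclose(J_fd, softmax_jacobian(softmax(x)), atol=1e-5)

# Frobenius norm at the uniform output f_i = 1/C equals sqrt(C-1)/C.
f_uniform = np.full(C, 1.0 / C)
norm_uniform = np.linalg.norm(softmax_jacobian(f_uniform), "fro")
print(norm_uniform, np.sqrt(C - 1) / C)  # both 0.3 for C = 10
```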
1708.00489#47
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
48
$$\|J\|_F = \sqrt{\sum_{i=1}^{C} \sum_{\substack{j=1 \\ j \ne i}}^{C} f_i^2 f_j^2 + \sum_{i=1}^{C} f_i^2 (1-f_i)^2}$$
It is straightforward to show that $f_i = \frac{1}{C}$ is the optimal solution for $\|J\|_F^\star = \max_{\mathbf{x}} \|J\|_F$. Hence, putting $f_i = \frac{1}{C}$ in the above equation, we get $\|J\|_F^\star = \frac{\sqrt{C-1}}{C}$.

Now, consider two inputs $\mathbf{x}$ and $\tilde{\mathbf{x}}$, such that their representations at layer $d$ are $\mathbf{x}^d$ and $\tilde{\mathbf{x}}^d$. Let's consider any convolutional or fully-connected layer as $x_i^d = \sum_j w_{i,j}^d x_j^{d-1}$. If we assume $\sum_j |w_{i,j}| \le \alpha$ for all $i, j, d$, then for any convolutional or fully-connected layer we can state:
$$\|\mathbf{x}^d - \tilde{\mathbf{x}}^d\|_2 \le \alpha \|\mathbf{x}^{d-1} - \tilde{\mathbf{x}}^{d-1}\|_2$$
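The proof above propagates a perturbation through the network layer by layer with a per-layer constant α. The following sketch is not from the paper: it uses a small random fully-connected ReLU network with made-up layer sizes and checks empirical amplification ratios against the product of layer spectral norms, which is a standard per-layer Lipschitz bound and is used here purely for illustration of the layer-by-layer argument.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small fully-connected ReLU network with hypothetical layer sizes.
sizes = [32, 64, 64, 10]
weights = [rng.normal(scale=0.2, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for i, W in enumerate(weights):
        x = W @ x
        if i < len(weights) - 1:   # ReLU on all but the last layer
            x = np.maximum(x, 0.0)
    return x

# Product of layer spectral norms: a valid Lipschitz constant for this map,
# since ReLU (and max-pool) are 1-Lipschitz, the fact used in the proof above.
lipschitz_upper = np.prod([np.linalg.norm(W, 2) for W in weights])

# Empirical amplification ratios never exceed the upper bound.
ratios = []
for _ in range(1000):
    x = rng.normal(size=sizes[0])
    d = rng.normal(size=sizes[0]) * 1e-3
    ratios.append(np.linalg.norm(forward(x + d) - forward(x)) / np.linalg.norm(d))
print(max(ratios) <= lipschitz_upper, max(ratios), lipschitz_upper)
```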
1708.00489#48
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
49
$$\|\mathbf{x}^d - \tilde{\mathbf{x}}^d\|_2 \le \alpha \|\mathbf{x}^{d-1} - \tilde{\mathbf{x}}^{d-1}\|_2$$
On the other hand, using $|\max(0, a) - \max(0, b)| \le |a - b|$ and the fact that a max-pool layer can be written as a convolutional layer in which only one weight is 1 and the others are 0, we can state for ReLU and max-pool layers,
$$\|\mathbf{x}^d - \tilde{\mathbf{x}}^d\|_2 \le \|\mathbf{x}^{d-1} - \tilde{\mathbf{x}}^{d-1}\|_2$$
Combining with the Lipschitz constant of the soft-max layer,
$$\|CNN(\mathbf{x}; \mathbf{w}) - CNN(\tilde{\mathbf{x}}; \mathbf{w})\|_2 \le \frac{\sqrt{C-1}}{C} \alpha^{n_c + n_{fc}} \|\mathbf{x} - \tilde{\mathbf{x}}\|_2$$
Using the reverse triangle inequality as
$$|l(\mathbf{x}, y; \mathbf{w}) - l(\tilde{\mathbf{x}}, y; \mathbf{w})| = \big| \|CNN(\mathbf{x}; \mathbf{w}) - y\|_2 - \|CNN(\tilde{\mathbf{x}}; \mathbf{w}) - y\|_2 \big| \le \|CNN(\mathbf{x}; \mathbf{w}) - CNN(\tilde{\mathbf{x}}; \mathbf{w})\|_2,$$
we can conclude that the loss function is $\frac{\sqrt{C-1}}{C} \alpha^{n_c + n_{fc}}$-Lipschitz for any fixed $y$ and $\mathbf{w}$.

# B PROOF FOR THEOREM 1

Before starting our proof, we state Claim 1 from Berlind & Urner (2015). Fix some $p, p' \in [0, 1]$ and $y' \in \{0, 1\}$. Then,
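A quick numerical illustration (not from the paper) of the reverse triangle inequality step: for an l2-style loss, the change in loss between two inputs is bounded by the change in the network output, so any Lipschitz constant for the output also bounds the loss. The random vectors below simply stand in for CNN outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(output, y):
    # l2 loss between a (stand-in) network output and a fixed label vector y.
    return np.linalg.norm(output - y)

C = 10
y = np.eye(C)[3]            # fixed one-hot label
for _ in range(1000):
    a = rng.normal(size=C)  # stand-ins for CNN(x; w) and CNN(x_tilde; w)
    b = rng.normal(size=C)
    # Reverse triangle inequality: | ||a - y|| - ||b - y|| | <= ||a - b||.
    assert abs(loss(a, y) - loss(b, y)) <= np.linalg.norm(a - b) + 1e-12
print("reverse triangle inequality held on all samples")
```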
1708.00489#49
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
50
$$P_{y \sim p}(y \ne y') \le P_{y \sim p'}(y \ne y') + |p - p'|$$
Proof. We will start our proof with bounding $\mathbb{E}_{y_i \sim \eta(\mathbf{x}_i)}[l(\mathbf{x}_i, y_i; A_s)]$. We have a condition which states that there exists an $\mathbf{x}_j$ in the $\delta$ ball around $\mathbf{x}_i$ such that $\mathbf{x}_j$ has 0 loss.
$$\begin{aligned} \mathbb{E}_{y_i \sim \eta(\mathbf{x}_i)}[l(\mathbf{x}_i, y_i; A_s)] &= \sum_{k \in [C]} P_{y_i \sim \eta_k(\mathbf{x}_i)}(y_i = k) \, l(\mathbf{x}_i, k; A_s) \\ &\overset{(d)}{\le} \sum_{k \in [C]} P_{y_i \sim \eta_k(\mathbf{x}_j)}(y_i = k) \, l(\mathbf{x}_i, k; A_s) + \sum_{k \in [C]} |\eta_k(\mathbf{x}_i) - \eta_k(\mathbf{x}_j)| \, l(\mathbf{x}_i, k; A_s) \\ &\overset{(e)}{\le} \sum_{k \in [C]} P_{y_i \sim \eta_k(\mathbf{x}_j)}(y_i = k) \, l(\mathbf{x}_i, k; A_s) + \delta \lambda^{\eta} L C \end{aligned}$$
With abuse of notation, we represent $\{y_i = k\} \sim \eta_k(\mathbf{x}_i)$ with $y_i \sim \eta_k(\mathbf{x}_i)$. We use Claim 1 in (d), and the Lipschitz property of the regression function and the bound on the loss in (e). Then, we can further bound the remaining term as;
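A brute-force numerical check (not from the paper) of Claim 1 for binary labels: $P_{y \sim p}(y \ne y')$ is either $1-p$ or $p$ depending on $y'$, so it can exceed the same probability under $p'$ by at most $|p - p'|$.

```python
import numpy as np

def prob_mismatch(p, y_prime):
    # P_{y ~ Bernoulli(p)}(y != y') with y' in {0, 1}.
    return 1.0 - p if y_prime == 1 else p

grid = np.linspace(0.0, 1.0, 101)
worst = 0.0
for p in grid:
    for p2 in grid:
        for y_prime in (0, 1):
            lhs = prob_mismatch(p, y_prime)
            rhs = prob_mismatch(p2, y_prime) + abs(p - p2)
            worst = max(worst, lhs - rhs)
print(worst <= 1e-12)  # True: the claim holds everywhere on the grid
```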
1708.00489#50
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00489
51
$$\begin{aligned} \sum_{k \in [C]} P_{y_i \sim \eta_k(\mathbf{x}_j)}(y_i = k) \, l(\mathbf{x}_i, k; A_s) &= \sum_{k \in [C]} P_{y_i \sim \eta_k(\mathbf{x}_j)}(y_i = k) \, [l(\mathbf{x}_i, k; A_s) - l(\mathbf{x}_j, k; A_s)] \\ &\quad + \sum_{k \in [C]} P_{y_i \sim \eta_k(\mathbf{x}_j)}(y_i = k) \, l(\mathbf{x}_j, k; A_s) \\ &\le \delta \lambda^{l} \end{aligned}$$
where the last step comes from the fact that the trained classifier is assumed to have 0 loss over the training points. If we combine them,
$$\mathbb{E}_{y_i \sim \eta(\mathbf{x}_i)}[l(\mathbf{x}_i, y_i; A_s)] \le \delta(\lambda^{l} + \lambda^{\eta} L C)$$
We further use Hoeffding's bound and conclude that, with probability at least $1 - \gamma$,
$$\frac{1}{n} \sum_{i \in [n]} l(\mathbf{x}_i, y_i; A_s) - \frac{1}{|s|} \sum_{j \in s} l(\mathbf{x}_j, y_j; A_s) \le \delta(\lambda^{l} + \lambda^{\eta} L C) + \sqrt{\frac{L^2 \log(1/\gamma)}{2n}}$$
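The right-hand side is driven by the covering radius δ of the labelled subset s. A sketch of how δ and the resulting bound value could be computed; this is not the paper's code, and the pool, subset and constants below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))                          # hypothetical pool of feature vectors
selected = rng.choice(len(X), size=50, replace=False)    # a candidate labelled subset s

# Covering radius delta: max over all points of the distance to the nearest selected point.
dists = np.linalg.norm(X[:, None, :] - X[selected][None, :, :], axis=-1)
delta = dists.min(axis=1).max()

# Right-hand side of the bound with made-up constants, purely to show its shape.
lam_l, lam_eta, L, C, gamma, n = 1.0, 1.0, 1.0, 10, 0.05, len(X)
bound = delta * (lam_l + lam_eta * L * C) + np.sqrt(L**2 * np.log(1.0 / gamma) / (2 * n))
print(f"covering radius = {delta:.3f}, bound value = {bound:.3f}")
```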
1708.00489#51
Active Learning for Convolutional Neural Networks: A Core-Set Approach
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (ie. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin.
http://arxiv.org/pdf/1708.00489
Ozan Sener, Silvio Savarese
stat.ML, cs.CV, cs.LG
ICLR 2018 Paper
null
stat.ML
20170801
20180601
[ { "id": "1605.09782" }, { "id": "1603.04467" }, { "id": "1703.02910" }, { "id": "1606.00704" }, { "id": "1511.06434" } ]
1708.00055
0
# SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Cross-lingual Focused Evaluation

Daniel Cer (a), Mona Diab (b), Eneko Agirre (c), Iñigo Lopez-Gazpio (c), and Lucia Specia (d)

(a) Google Research, Mountain View, CA; (b) George Washington University, Washington, DC; (c) University of the Basque Country, Donostia, Basque Country; (d) University of Sheffield, Sheffield, UK

# Abstract
1708.00055#0
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
1
(c) University of the Basque Country, Donostia, Basque Country; (d) University of Sheffield, Sheffield, UK

# Abstract

Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).

# Introduction
1708.00055#1
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
2
# Introduction

Semantic Textual Similarity (STS) assesses the degree to which two sentences are semantically equivalent to each other. The STS task is motivated by the observation that accurately modeling the meaning similarity of sentences is a foundational language understanding problem relevant to numerous applications including: machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. STS enables the evaluation of techniques from a diverse set of domains against a shared interpretable performance criteria. Semantic inference tasks related to
1708.00055#2
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
3
STS include textual entailment (Bentivogli et al., 2016; Bowman et al., 2015; Dagan et al., 2010), semantic relatedness (Bentivogli et al., 2016) and paraphrase detection (Xu et al., 2015; Ganitkevitch et al., 2013; Dolan et al., 2004). STS differs from both textual entailment and paraphrase detection in that it captures gradations of meaning overlap rather than making binary classifications of particular relationships. While semantic relatedness expresses a graded semantic relationship as well, it is non-specific about the nature of the relationship with contradictory material still being a candidate for a high score (e.g., “night” and “day” are highly related but not particularly similar).
1708.00055#3
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
4
To encourage and support research in this area, the STS shared task has been held annually since 2012, providing a venue for evaluation of state-of-the-art algorithms and models (Agirre et al., 2012, 2013, 2014, 2015, 2016). During this time, diverse similarity methods and data sets¹ have been explored. Early methods focused on lexical semantics, surface form matching and basic syntactic similarity (Bär et al., 2012; Šarić et al., 2012a; Jimenez et al., 2012a). During subsequent evaluations, strong new similarity signals emerged, such as Sultan et al. (2015)’s alignment based method. More recently, deep learning became competitive with top performing feature engineered systems (He et al., 2016). The best performance tends to be obtained by ensembling feature engineered and deep learning models (Rychalska et al., 2016). Significant research effort has focused on STS over English sentence pairs.² English STS is a
1708.00055#4
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
5
Significant research effort has focused on STS over English sentence pairs.² English STS is a well-studied problem, with state-of-the-art systems often achieving 70 to 80% correlation with human judgment. To promote progress in other languages, the 2017 task emphasizes performance on Arabic and Spanish as well as cross-lingual pairings of English with material in Arabic, Spanish and Turkish. The primary evaluation criteria combines performance on all of the different language conditions except English-Turkish, which was run as a surprise language track. Even with this departure from prior years, the task attracted 31 teams producing 84 submissions.

¹ i.a., news headlines, video and image descriptions, glosses from lexical resources including WordNet (Miller, 1995; Fellbaum, 1998), FrameNet (Baker et al., 1998), OntoNotes (Hovy et al., 2006), web discussion fora, plagiarism, MT post-editing and Q&A data sets. Data sets are summarized on: http://ixa2.si.ehu.es/stswiki.
² The 2012 and 2013 STS tasks were English only. The 2014 and 2015 task included a Spanish track and 2016 had a
1708.00055#5
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
6
STS shared task data sets have been used extensively for research on sentence level similarity and semantic representations (i.a., Arora et al. (2017); Conneau et al. (2017); Mu et al. (2017); Pagliardini et al. (2017); Wieting and Gimpel (2017); He and Lin (2016); Hill et al. (2016); Kenter et al. (2016); Lau and Baldwin (2016); Wieting et al. (2016a,b); He et al. (2015); Pham et al. (2015)). To encourage the use of a common evaluation set for assessing new methods, we present the STS Benchmark, a publicly available selection of data from English STS shared tasks (2012-2017).

# 2 Task Overview
1708.00055#6
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
7
# 2 Task Overview

STS is the assessment of pairs of sentences according to their degree of semantic similarity. The task involves producing real-valued similarity scores for sentence pairs. Performance is measured by the Pearson correlation of machine scores with human judgments. The ordinal scale in Table 1 guides human annotation, ranging from 0 for no meaning overlap to 5 for meaning equivalence. Intermediate values reflect interpretable levels of partial overlap in meaning. The annotation scale is designed to be accessible by reasonable human judges without any formal expertise in linguistics. Using reasonable human interpretations of natural language semantics was popularized by the related textual entailment task (Dagan et al., 2010). The resulting annotations reflect both pragmatic and world knowledge and are more interpretable and useful within downstream systems.
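Scoring against the task is plain Pearson correlation between system scores and gold scores. A minimal sketch with made-up score vectors (not the official evaluation script):

```python
import numpy as np

# Hypothetical gold annotations (0-5 scale) and system similarity scores.
gold = np.array([5.0, 4.2, 3.0, 1.5, 0.0, 2.8])
system = np.array([4.8, 3.9, 3.2, 1.0, 0.3, 2.5])

# STS evaluation metric: Pearson correlation r between system and gold scores.
r = np.corrcoef(system, gold)[0, 1]
print(f"Pearson r = {r:.4f}")
```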
1708.00055#7
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
8
pilot track on cross-lingual Spanish-English STS. The English tracks attracted the most participation and have the largest use of the evaluation data in ongoing research.

Table 1: Similarity scores with explanations and English examples from Agirre et al. (2013).
5: The two sentences are completely equivalent, as they mean the same thing. Example: The bird is bathing in the sink. / Birdie is washing itself in the water basin.
4: The two sentences are mostly equivalent, but some unimportant details differ. Example: Two boys on a couch are playing video games. / Two boys are playing a video game.
3: The two sentences are roughly equivalent, but some important information differs/missing. Example: John said he is considered a witness but not a suspect. / “He is not a suspect anymore.” John said.
2: The two sentences are not equivalent, but share some details. Example: They flew out of the nest in groups. / They flew into the nest together.
1: The two sentences are not equivalent, but are on the same topic. Example: The woman is playing the violin. / The young lady enjoys listening to the guitar.
0: The two sentences are completely dissimilar. Example: The black dog is running through the snow. / A race car driver is driving his car through the mud.
1708.00055#8
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
9
Table 1: Similarity scores with explanations and English examples from Agirre et al. (2013).

cross-lingual tracks explores data from the WMT 2014 quality estimation task (Bojar et al., 2014).³ Sentence pairs in SNLI derive from Flickr30k image captions (Young et al., 2014) and are labeled with the entailment relations: entailment, neutral, and contradiction. Drawing from SNLI allows STS models to be evaluated on the type of data used to assess textual entailment methods. However, since entailment strongly cues for semantic relatedness (Marelli et al., 2014), we construct our own sentence pairings to deter gold entailment labels from informing evaluation set STS scores.

Track 4b investigates the relationship between STS and MT quality estimation by providing STS labels for WMT quality estimation data. The data includes Spanish translations of English sentences from a variety of methods including RBMT, SMT, hybrid-MT and human translation. Translations are annotated with the time required for human correction by post-editing and Human-targeted Translation Error Rate (HTER) (Snover et al., 2006).⁴ Participants are not allowed to use the gold quality estimation annotations to inform STS scores.
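HTER, as footnote 4 defines it, is the minimal number of edits needed to correct a translation divided by the length of the corrected translation. Official HTER uses TER tooling that also counts block shifts; the sketch below approximates the definition with plain token-level Levenshtein distance and a made-up Spanish example.

```python
def edit_distance(a, b):
    # Token-level Levenshtein distance (insertions, deletions, substitutions).
    dp = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, tok_b in enumerate(b, 1):
            cur = min(dp[j] + 1,                # deletion
                      dp[j - 1] + 1,            # insertion
                      prev + (tok_a != tok_b))  # substitution
            prev, dp[j] = dp[j], cur
    return dp[-1]

def hter(hypothesis, post_edited):
    # Edits to turn the MT hypothesis into its human post-edition,
    # divided by the length of the corrected (post-edited) translation.
    hyp, ref = hypothesis.split(), post_edited.split()
    return edit_distance(hyp, ref) / len(ref)

print(hter("el gato sentado en la alfombra",
           "el gato se sento en la alfombra"))  # 2 edits / 7 tokens ~ 0.29
```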
1708.00055#9
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
10
³ Previous years of the STS shared task include more data sources. This year the task draws from two data sources and includes a diverse set of languages and language-pairs.
⁴ HTER is the minimal number of edits required for correction of a translation divided by its length after correction.

Table 2: STS 2017 evaluation data.
Track | Language(s) | Pairs | Source
1 | Arabic (ar-ar) | 250 | SNLI
2 | Arabic-English (ar-en) | 250 | SNLI
3 | Spanish (es-es) | 250 | SNLI
4a | Spanish-English (es-en) | 250 | SNLI
4b | Spanish-English (es-en) | 250 | WMT QE
5 | English (en-en) | 250 | SNLI
6 | Turkish-English (tr-en) | 250 | SNLI
Total | | 1750 |

# 3.1 Tracks

Table 2 summarizes the evaluation data by track. The six tracks span four languages: Arabic, English, Spanish and Turkish. Track 4 has subtracks with 4a drawing from SNLI and 4b pulling from WMT’s quality estimation task. Track 6 is a surprise language track with no annotated training data and the identity of the language pair first announced when the evaluation data was released.

# 3.2 Data Preparation
1708.00055#10
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
11
# 3.2 Data Preparation

This section describes the preparation of the evaluation data. For SNLI data, this includes the selection of sentence pairs, annotation of pairs with STS labels and the translation of the original English sentences. WMT quality estimation data is directly annotated with STS labels.

# 3.3 Arabic, Spanish and Turkish Translation

Sentences from SNLI are human translated into Arabic, Spanish and Turkish. Sentences are translated independently from their pairs. Arabic translation is provided by CMU-Qatar by native Arabic speakers with strong English skills. Translators are given an English sentence and its Arabic machine translation⁵ where they perform post-editing to correct errors. Spanish translation is completed by a University of Sheffield graduate student who is a native Spanish speaker and fluent in English. Turkish translations are obtained from SDL.⁶

# 3.4 Embedding Space Pair Selection
1708.00055#11
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
12
# 3.4 Embedding Space Pair Selection

We construct our own pairings of the SNLI sentences to deter gold entailment labels being used to inform STS scores. The word embedding similarity selection heuristic from STS 2016 (Agirre et al., 2016) is used to find interesting pairs. Sentence embeddings are computed as the sum of individual word embeddings, $v(s) = \sum_{w \in s} v(w)$.⁷ Sentences with likely meaning overlap are identified using cosine similarity, Eq. (1):
$$\mathrm{sim}(s_1, s_2) = \frac{v(s_1)^{\top} v(s_2)}{\|v(s_1)\| \, \|v(s_2)\|} \qquad (1)$$

⁵ Produced by the Google Translate API.
⁶ http://www.sdl.com/languagecloud/managed-translation/

# 4 Annotation

Annotation of pairs with STS labels is performed using Crowdsourcing, with the exception of Track 4b that uses a single expert annotator.

# 4.1 Crowdsourced Annotations
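A sketch of the selection heuristic in Eq. (1). The vocabulary and vectors below are made-up stand-ins for the 50-dimensional GloVe embeddings the organizers used; loading real GloVe vectors is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy word vectors standing in for 50-dimensional GloVe embeddings.
vocab = ["a", "dog", "runs", "in", "the", "park", "cat", "sleeps", "on", "sofa"]
word_vec = {w: rng.normal(size=50) for w in vocab}

def embed(sentence):
    # v(s) = sum of individual word embeddings, as in Section 3.4.
    return sum(word_vec[w] for w in sentence.lower().split() if w in word_vec)

def cosine(s1, s2):
    # Eq. (1): sim(s1, s2) = v(s1)^T v(s2) / (||v(s1)|| ||v(s2)||).
    v1, v2 = embed(s1), embed(s2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

sentences = ["a dog runs in the park",
             "the cat sleeps on the sofa",
             "a dog sleeps in the park"]
# Rank candidate pairings by embedding similarity to surface likely meaning overlap.
pairs = [(cosine(a, b), a, b) for i, a in enumerate(sentences) for b in sentences[i + 1:]]
for score, a, b in sorted(pairs, reverse=True):
    print(f"{score:+.3f}  {a}  <->  {b}")
```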
1708.00055#12
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
13
Annotation of pairs with STS labels is performed using Crowdsourcing, with the exception of Track 4b that uses a single expert annotator.

# 4.1 Crowdsourced Annotations

Crowdsourced annotation is performed on Amazon Mechanical Turk.⁸ Annotators examine the STS pairings of English SNLI sentences. STS labels are then transferred to the translated pairs for cross-lingual and non-English tracks. The annotation instructions and template are identical to Agirre et al. (2016). Labels are collected in batches of 20 pairs with annotators paid $1 USD per batch. Five annotations are collected per pair. The MTurk master⁹ qualification is required to perform the task. Gold scores average the five individual annotations.

# 4.2 Expert Annotation
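Gold scores are simply the mean of the five crowd annotations per pair; a minimal aggregation sketch with made-up ratings:

```python
import numpy as np

# Hypothetical ratings: one row per sentence pair, five crowd annotations each (0-5 scale).
ratings = np.array([
    [5, 5, 4, 5, 5],
    [2, 3, 2, 2, 3],
    [0, 1, 0, 0, 0],
])

gold = ratings.mean(axis=1)    # gold score = average of the five annotations
spread = ratings.std(axis=1)   # simple per-pair disagreement indicator
for g, s in zip(gold, spread):
    print(f"gold = {g:.2f}  (annotator std = {s:.2f})")
```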
1708.00055#13
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
14
# 4.2 Expert Annotation

Spanish-English WMT quality estimation pairs for Track 4b are annotated for STS by a University of Sheffield graduate student who is a native speaker of Spanish and fluent in English. This track differs significantly in label distribution and the complexity of the annotation task. Sentences in a pair are translations of each other and tend to be more semantically similar. Interpreting the potentially subtle meaning differences introduced by MT errors is challenging. To accurately assess STS performance on MT quality estimation data, no attempt is made to balance the data by similarity scores.

# 5 Training Data

The following summarizes the training data: Table 3 English; Table 4 Spanish;¹⁰ Table 5 Spanish-English; Table 6 Arabic; and Table 7 Arabic-English. Arabic-English parallel data is supplied by translating English training data, Table 8.

⁷ We use 50-dimensional GloVe word embeddings (Pennington et al., 2014) trained on a combination of Gigaword 5 (Parker et al., 2011) and English Wikipedia available at http://nlp.stanford.edu/projects/glove/.
⁸ https://www.mturk.com/
⁹ A designation that statistically identifies workers who perform high quality work across a diverse set of tasks.
1708.00055#14
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
15
8https://www.mturk.com/ 9A designation that statistically identifies workers who perform high quality work across a diverse set of tasks. 10Spanish data from 2015 and 2014 uses a 5 point scale that collapses STS labels 4 and 3, removing the distinction between unimportant and important details.
Year  Data set       Pairs  Source
2012  MSRpar         1500   newswire
2012  MSRvid         1500   videos
2012  OnWN           750    glosses
2012  SMTnews        750    WMT eval.
2012  SMTeuroparl    750    WMT eval.
2013  HDL            750    newswire
2013  FNWN           189    glosses
2013  OnWN           561    glosses
2013  SMT            750    MT eval.
2014  HDL            750    newswire headlines
2014  OnWN           750    glosses
2014  Deft-forum     450    forum posts
2014  Deft-news      300    news summary
2014  Images         750    image descriptions
2014  Tweet-news     750    tweet-news pairs
2015  HDL            750    newswire headlines
2015  Images         750    image descriptions
2015  Ans.-student   750    student answers
2015  Ans.-forum     375    Q&A forum answers
2015  Belief         375    committed belief
2016  HDL            249    newswire headlines
2016  Plagiarism     230    short-answer plag.
2016  post-editing   244    MT postedits
2016  Ans.-Ans.      254    Q&A forum answers
2016  Quest.-Quest.  209    Q&A forum questions
2017  Trial          23     Mixed STS 2016
Table 3: English training data.
1708.00055#15
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
16
Table 3: English training data.
Year  Data set  Pairs  Source
2014  Trial     56     -
2014  Wiki      324    Spanish Wikipedia
2014  News      480    Newswire
2015  Wiki      251    Spanish Wikipedia
2015  News      500    Newswire
2017  Trial     23     Mixed STS 2016
Table 4: Spanish training data.
English, Spanish and Spanish-English training data pulls from prior STS evaluations. Arabic and Arabic-English training data is produced by translating a subset of the English training data and transferring the similarity scores. For the MT quality estimation data in track 4b, Spanish sentences are translations of their English counterparts, differing substantially from existing Spanish-English STS data. We release one thousand new Spanish-English STS pairs sourced from the 2013 WMT translation task and produced by a phrase-based Moses SMT system (Bojar et al., 2013). The data is expert annotated and has a similar label distribution to the track 4b test data with 17% of the pairs scoring an STS score of less than 3, 23% scoring 3, 7% achieving a score of 4 and 53% scoring 5. # 5.1 Training vs. Evaluation Data Analysis
1708.00055#16
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
17
# 5.1 Training vs. Evaluation Data Analysis Evaluation data from SNLI tend to have sentences that are slightly shorter than those from prior years of the STS shared task, while the track 4b MT quality estimation data has sentences that are much longer. The track 5 English data has an average sentence length of 8.7 words, while the English sentences from track 4b have an average length of 19.4. The English training data has the following average lengths: 2012 10.8 words; 2013 8.8 words (excludes restricted SMT data); 2014 9.1 words; 2015 11.5 words; 2016 13.8 words.
Year  Data set      Pairs  Source
2016  Trial         103    Sampled ≤ 2015 STS
2016  News          301    en-es news articles
2016  Multi-source  294    en news headlines, short-answer plag., MT postedits, Q&A forum answers, Q&A forum questions
2017  Trial         23     Mixed STS 2016
2017  MT            1000   WMT13 Translation Task
Table 5: Spanish-English training data.
Year  Data set     Pairs  Source
2017  Trial        23     Mixed STS 2016
2017  MSRpar       510    newswire
2017  MSRvid       368    videos
2017  SMTeuroparl  203    WMT eval.
Table 6: Arabic training data.
1708.00055#17
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
18
Similarity scores for our pairings of the SNLI sentences are slightly lower than recent shared task years and much lower than early years. The change is attributed to differences in data selection and filtering. The average 2017 similarity score is 2.2 overall and 2.3 on the track 7 English data. Prior English data has the following average similarity scores: 2016 2.4; 2015 2.4; 2014 2.8; 2013 3.0; 2012 3.5. Translation quality estimation data from track 4b has an average similarity score of 4.0. # 6 System Evaluation This section reports participant evaluation results for the SemEval-2017 STS shared task. # 6.1 Participation The task saw strong participation with 31 teams producing 84 submissions. 17 teams provided 44 systems that participated in all tracks. Table 9 summarizes participation by track. Traces of the focus on English are seen in 12 teams participating just in track 5, English. Two teams participated exclusively in tracks 4a and 4b, Spanish-English. One team took part solely in track 1, Arabic. # 6.2 Evaluation Metric Systems are evaluated on each track by their Pearson correlation with gold labels. The overall ranking averages the correlations across tracks 1-5, with tracks 4a and 4b individually contributing.
Year  Data set     Pairs  Source
2017  Trial        23     Mixed STS 2016
2017  MSRpar       1020   newswire
2017  MSRvid       736    videos
2017  SMTeuroparl  406    WMT eval.
1708.00055#18
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
19
Table 7: Arabic-English training data.
Year  Data set     Pairs  Source
2017  MSRpar       1039   newswire
2017  MSRvid       749    videos
2017  SMTeuroparl  422    WMT eval.
Table 8: Arabic-English parallel data.
The overall ranking averages the correlations across tracks 1-5 with tracks 4a and 4b individually contributing. # 6.3 CodaLab As directed by the SemEval workshop organizers, the CodaLab research platform hosts the task.11 # 6.4 Baseline The baseline is the cosine of binary sentence vectors with each dimension representing whether an individual word appears in a sentence.12 For cross-lingual pairs, non-English sentences are translated into English using state-of-the-art machine translation.13 The baseline achieves an average correlation of 53.7 with human judgment on tracks 1-5 and would rank 23rd overall out of the 44 system submissions that participated in all tracks. # 6.5 Rankings
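To make the baseline and the official metric concrete, the sketch below computes the cosine of binary bag-of-words sentence vectors and scores the predictions against gold labels with Pearson's r. This is a minimal reimplementation of the described approach, not the organizers' code: the lowercase whitespace tokenizer stands in for the Treebank tokenizers, and the pairs and gold scores are invented for illustration only.

```python
import math
from scipy.stats import pearsonr

def binary_bow(sentence):
    """Binary bag of words: the set of word types in the sentence."""
    return set(sentence.lower().split())

def cosine_baseline(sent1, sent2):
    """Cosine of binary sentence vectors: |A & B| / sqrt(|A| * |B|)."""
    a, b = binary_bow(sent1), binary_bow(sent2)
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# Invented pairs and gold scores, used only to show the evaluation step.
pairs = [
    ("A cook is making food.", "There is a cook preparing food.", 4.0),
    ("A man is carrying a canoe with a dog.", "A dog is carrying a man in a canoe.", 1.5),
    ("The kids are watching a movie.", "It is picture day for the boys.", 0.5),
]
preds = [cosine_baseline(s1, s2) for s1, s2, _ in pairs]
gold = [g for _, _, g in pairs]

# Official metric: Pearson correlation between system scores and gold labels.
r, _ = pearsonr(preds, gold)
print(f"Pearson r = {r:.3f}")
```

Because the binary vectors are 0/1 indicators, their dot product is the size of the word-type intersection and each norm is the square root of the set size, which is why the cosine reduces to the set expression above.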
1708.00055#19
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
20
# 6.5 Rankings Participant performance is provided in Table 10. ECNU is best overall (avg r: 0.7316) and achieves the highest participant evaluation score on: track 2, Arabic-English (r: 0.7493); track 3, Spanish (r: 0.8559); and track 6, Turkish-English (r: 0.7706). BIT attains the best performance on track 1, Arabic (r: 0.7543). CompiLIG places first on track 4a, SNLI Spanish-English (r: 0.8302). SEF@UHH exhibits the best correlation on the difficult track 4b WMT quality estimation pairs (r: 0.3407). RTV has the best system for the track 5 English data (r: 0.8547), followed closely by DT Team (r: 0.8536). Especially challenging tracks with SNLI data are: track 1, Arabic; track 2, Arabic-English; and track 6, English-Turkish. Spanish-English performance is much higher on track 4a's SNLI data than track 4b's MT quality estimation data. 11https://competitions.codalab.org/competitions/16051 12Words obtained using Arabic (ar), Spanish (es) and English (en) Treebank tokenizers. 13http://translate.google.com
1708.00055#20
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
21
12Words obtained using Arabic (ar), Spanish (es) and English (en) Treebank tokenizers. 13http://translate.google.com
Track    Language(s)          Participants
1        Arabic               49
2        Arabic-English       45
3        Spanish              48
4a       Spanish-English      53
4b       Spanish-English MT   53
5        English              77
6        Turkish-English      48
Primary  All except Turkish   44
Table 9: Participation by shared task track.
track 4b's MT quality estimation data. This highlights the difficulty and importance of making fine-grained distinctions for certain downstream applications. Assessing STS methods for quality estimation may benefit from using alternatives to Pearson correlation for evaluation.14 Results tend to decrease on cross-lingual tracks. The baseline drops > 10% relative on Arabic-English and Spanish-English (SNLI) vs. monolingual Arabic and Spanish. Many participant systems show smaller decreases. ECNU's top ranking entry performs slightly better on Arabic-English than Arabic, with a slight drop from Spanish to Spanish-English (SNLI). # 6.6 Methods
1708.00055#21
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
22
# 6.6 Methods Participating teams explore techniques ranging from state-of-the-art deep learning models to elaborate feature engineered systems. Prediction signals include surface similarity scores such as edit distance and matching n-grams, scores derived from word alignments across pairs, assessment by MT evaluation metrics, estimates of conceptual similarity as well as the similarity between word and sentence level embeddings. For cross-lingual and non-English tracks, MT was widely used to convert the two sentences being compared into the same language.15 Select methods are highlighted below. 14e.g., Reimers et al. (2016) report success using STS labels with alternative metrics such as normalized Cumulative Gain (nCG), normalized Discounted Cumulative Gain (nDCG) and F1 to more accurately predict performance on the downstream tasks: text reuse detection, binary classification of document relatedness and document relatedness within a corpus.
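To make the first family of prediction signals concrete, the sketch below computes two of the surface similarity scores mentioned above, word n-gram overlap and character edit distance, for a sentence pair. It is an illustrative sketch only; the function names, Dice normalization and tokenization are our own choices, not those of any participating system.

```python
def ngram_overlap(s1, s2, n=2):
    """Dice coefficient over word n-grams of the two sentences."""
    def ngrams(sent, n):
        toks = sent.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    a, b = ngrams(s1, n), ngrams(s2, n)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def edit_distance(s1, s2):
    """Levenshtein distance between two strings via dynamic programming."""
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        curr = [i]
        for j, c2 in enumerate(s2, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (c1 != c2)))  # substitution
        prev = curr
    return prev[-1]

s1, s2 = "A cook is making food.", "There is a cook preparing food."
print(ngram_overlap(s1, s2, n=1), ngram_overlap(s1, s2, n=2))
print(edit_distance(s1, s2))
```

Feature-engineered systems typically feed many such scores, alongside alignment and MT-metric features, into a regressor that predicts the 0-5 similarity label.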
1708.00055#22
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
23
15Within the highlighted submissions, the following use a monolingual English system fed by MT: ECNU, BIT, HCTI and MITRE. HCTI submitted a separate run using ar, es and en trained models that underperformed relative to their en model with MT for ar and es. CompiLIG's model is cross-lingual but includes a word alignment feature that depends on MT. SEF@UHH built ar, es, en and tr models and uses MT for the cross-lingual pairs. LIM-LIG and DT Team only participate in monolingual tracks.
1708.00055#23
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
24
[Table 10, column fragment from the PDF extraction: Team, Primary, Track 1 (AR-AR) and Track 2 (AR-EN) Pearson r × 100 scores for the 44 primary submissions; the values cannot be reliably realigned with their teams here. See the Table 10 caption below and the per-track winners reported in Section 6.5.]
1708.00055#24
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
25
[Table 10, column fragment: tail of the Track 1 (AR-AR) column plus the Track 2 (AR-EN) and Track 3 (SP-SP) Pearson r × 100 columns; values not reliably alignable in this extraction.]
1708.00055#25
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
26
[Table 10, column fragment: tail of the Track 3 (SP-SP) column plus the Track 4a (SP-EN) and Track 4b (SP-EN-WMT) Pearson r × 100 columns; values not reliably alignable in this extraction.]
1708.00055#26
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
27
[Table 10, column fragment: tail of the Track 4b (SP-EN-WMT) column, the Track 5 (EN-EN) column, and the Track 6 (EN-TR) column header; values not reliably alignable in this extraction.]
1708.00055#27
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
29
[Table 10, team column, one entry per submitted run, in ranking order:] ECNU (Tian et al., 2017); ECNU (Tian et al., 2017); ECNU (Tian et al., 2017); BIT (Wu et al., 2017)*; BIT (Wu et al., 2017)*; BIT (Wu et al., 2017); HCTI (Shao, 2017); MITRE (Henderson et al., 2017); MITRE (Henderson et al., 2017); FCICU (Hassan et al., 2017); neobility (Zhuang and Chang, 2017); FCICU (Hassan et al., 2017); STS-UHH (Kohail et al., 2017); RTV; HCTI (Shao, 2017); RTV; MatrusriIndia; STS-UHH (Kohail et al., 2017); SEF@UHH (Duma and Menzel, 2017); SEF@UHH (Duma and Menzel, 2017); RTV; SEF@UHH (Duma and Menzel, 2017); neobility (Zhuang and Chang, 2017); neobility (Zhuang and Chang, 2017); MatrusriIndia; NLPProxem; UMDeep (Barrow and Peskov, 2017); NLPProxem; UMDeep (Barrow and Peskov, 2017); Lump (España Bonet and
1708.00055#29
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
30
[Table 10, team column, continued:] UMDeep (Barrow and Peskov, 2017); NLPProxem; UMDeep (Barrow and Peskov, 2017); Lump (España Bonet and Barrón-Cedeño, 2017)*; Lump (España Bonet and Barrón-Cedeño, 2017)*; Lump (España Bonet and Barrón-Cedeño, 2017)*; NLPProxem; RTM (Biçici, 2017b)*; UMDeep (Barrow and Peskov, 2017); RTM (Biçici, 2017b)*; RTM (Biçici, 2017b)*; ResSim (Bjerva and Östling, 2017); ResSim (Bjerva and Östling, 2017); ResSim (Bjerva and Östling, 2017); LIPN-IIMAS (Arroyo-Fernández and Meza Ruiz, 2017); LIPN-IIMAS (Arroyo-Fernández and Meza Ruiz, 2017); hjpwhu; hjpwhu; compiLIG (Ferrero et al., 2017); compiLIG (Ferrero et al., 2017); compiLIG (Ferrero et al., 2017); DT TEAM (Maharjan et al., 2017)
1708.00055#30
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
31
[Table 10, team column, continued:] compiLIG (Ferrero et al., 2017); compiLIG (Ferrero et al., 2017); DT TEAM (Maharjan et al., 2017); DT TEAM (Maharjan et al., 2017); DT TEAM (Maharjan et al., 2017); FCICU (Hassan et al., 2017); ITNLPAiKF (Liu et al., 2017); ITNLPAiKF (Liu et al., 2017); ITNLPAiKF (Liu et al., 2017); L2F/INESC-ID (Fialho et al., 2017)*; L2F/INESC-ID (Fialho et al., 2017); L2F/INESC-ID (Fialho et al., 2017)*; LIM-LIG (Nagoudi et al., 2017); LIM-LIG (Nagoudi et al., 2017); LIM-LIG (Nagoudi et al., 2017); MatrusriIndia; NRC*; NRC; OkadaNaoya; OPI-JSA (Śpiewak et al., 2017); OPI-JSA (Śpiewak et al., 2017); OPI-JSA (Śpiewak et al., 2017); PurdueNLP (Lee et al., 2017); PurdueNLP (Lee et al., 2017)
1708.00055#31
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
33
[Table 10, column fragment: tail of the Track 6 (EN-TR) column; values not reliably alignable in this extraction.]
cosine baseline: Primary 53.70; Track 1 (AR-AR) 60.45; Track 2 (AR-EN) 51.55; Track 3 (SP-SP) 71.17; Track 4a (SP-EN) 62.20; Track 4b (SP-EN-WMT) 3.20; Track 5 (EN-EN) 72.78; Track 6 (EN-TR) 54.56
* Corrected or late submission
Table 10: STS 2017 rankings ordered by average correlation across tracks 1-5. Performance is reported by convention as Pearson's r × 100. For tracks 1-6, the top ranking result is marked with a • symbol and results in bold have no statistically significant difference with the best result on a track, p > 0.05, Williams' t-test (Diedenhofen and Musch, 2015).
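The significance test named in the Table 10 caption compares two dependent correlations that share the gold labels as a common variable. Below is a minimal sketch of Williams' t-test following a standard textbook formulation (the test popularized by Diedenhofen and Musch's cocor package); the function name, the two-sided p-value choice and the toy inputs are our own assumptions, not the official analysis code.

```python
import math
from scipy.stats import t as t_dist

def williams_t(r_gold_a, r_gold_b, r_ab, n):
    """Williams' t-test for two dependent correlations sharing one variable.

    r_gold_a: correlation of system A's scores with the gold labels
    r_gold_b: correlation of system B's scores with the gold labels
    r_ab:     correlation between the two systems' scores
    n:        number of sentence pairs
    """
    K = 1 - r_gold_a**2 - r_gold_b**2 - r_ab**2 + 2 * r_gold_a * r_gold_b * r_ab
    r_bar = (r_gold_a + r_gold_b) / 2
    t = (r_gold_a - r_gold_b) * math.sqrt(
        (n - 1) * (1 + r_ab)
        / (2 * K * (n - 1) / (n - 3) + r_bar**2 * (1 - r_ab) ** 3)
    )
    p = 2 * t_dist.sf(abs(t), df=n - 3)  # two-sided p-value, df = n - 3
    return t, p

# Toy example: two systems evaluated on 250 pairs (numbers are illustrative only).
print(williams_t(0.85, 0.83, 0.90, 250))
```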
1708.00055#33
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
34
ECNU (Tian et al., 2017) The best overall system is from ECNU and ensembles well-performing feature engineered models with deep learning methods. Three feature engineered models use Random Forest (RF), Gradient Boosting (GB) and XGBoost (XGB) regression methods with features based on: n-gram overlap; edit distance; longest common prefix/suffix/substring; tree kernels (Moschitti, 2006); word alignments (Sultan et al., 2015); summarization and MT evaluation metrics (BLEU, GTM-3, NIST, WER, METEOR, ROUGE); and kernel similarity of bags-of-words, bags-of-dependencies and pooled word embeddings. ECNU's deep learning models are differentiated by their approach to sentence embeddings using either: averaged word embeddings, projected word embeddings, a deep averaging network (DAN) (Iyyer et al., 2015) or LSTM (Hochreiter and Schmidhuber, 1997). Each network feeds the element-wise multiplication, subtraction and concatenation of paired
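The combination of paired sentence embeddings described above can be illustrated with a short sketch. This is not ECNU's code: the averaging encoder, the Ridge regressor and all names below are our own illustrative choices, and the truncated sentence above leaves ECNU's exact downstream layers unspecified.

```python
import numpy as np
from sklearn.linear_model import Ridge

def sentence_embedding(tokens, word_vectors, dim=50):
    """Average the word vectors of a sentence (the simplest encoder named above)."""
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def pair_features(u, v):
    """Element-wise multiplication, difference and concatenation of a sentence pair."""
    return np.concatenate([u * v, u - v, u, v])

# Toy data: random "word vectors" and invented gold scores, for illustration only.
rng = np.random.default_rng(0)
vocab = "a cook is making preparing food man dog canoe carrying".split()
word_vectors = {w: rng.normal(size=50) for w in vocab}

pairs = [("a cook is making food", "a cook is preparing food", 4.6),
         ("a man is carrying a canoe", "a dog is carrying a man", 1.2)]
X = np.stack([pair_features(sentence_embedding(s1.split(), word_vectors),
                            sentence_embedding(s2.split(), word_vectors))
              for s1, s2, _ in pairs])
y = np.array([g for _, _, g in pairs])

# A regressor maps the combined pair representation to an STS score.
model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X))
```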
1708.00055#34
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
36
BIT (Wu et al., 2017) Second place overall is achieved by BIT primarily using sentence information content (IC) informed by WordNet and BNC word frequencies. One submission uses sentence IC exclusively. Another ensembles IC with Sultan et al. (2015)'s alignment method, while a third ensembles IC with cosine similarity of summed word embeddings with an IDF weighting scheme. Sentence IC in isolation outperforms all systems except those from ECNU. Combining sentence IC with word embedding similarity performs best. HCTI (Shao, 2017) Third place overall is obtained by HCTI with a model similar to a convolutional Deep Structured Semantic Model (CDSSM) (Chen et al., 2015; Huang et al., 2013). Sentence embeddings are generated with twin convolutional neural networks (CNNs). The embeddings are then compared using cosine similarity and element-wise difference with the resulting values fed to additional layers to predict similarity labels. The architecture is abstractly similar to ECNU's deep learning models. UMDeep (Barrow and Peskov, 2017) took a similar approach using LSTMs rather than CNNs for the sentence embeddings. 16The two remaining ECNU runs only use either RF or GB and exclude the deep learning models.
1708.00055#36
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
37
16The two remaining ECNU runs only use either RF or GB and exclude the deep learning models. MITRE (Henderson et al., 2017) Fourth place overall is MITRE, which, like ECNU, takes an ambitious feature engineering approach complemented by deep learning. Ensembled components include: alignment similarity; TakeLab STS (Šarić et al., 2012b); string similarity measures such as matching n-grams, summarization and MT metrics (BLEU, WER, PER, ROUGE); an RNN and recurrent convolutional neural networks (RCNN) over word alignments; and a BiLSTM that is state-of-the-art for textual entailment (Chen et al., 2016).
1708.00055#37
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
38
FCICU (Hassan et al., 2017) Fifth place overall is FCICU, which computes a sense-based alignment using BabelNet (Navigli and Ponzetto, 2010). BabelNet synsets are multilingual, allowing non-English and cross-lingual pairs to be processed similarly to English pairs. Alignment similarity scores are used with two runs: one that combines the scores within a string kernel and another that uses them with a weighted variant of Sultan et al. (2015)'s method. Both runs average the BabelNet-based scores with soft-cardinality (Jimenez et al., 2012b). CompiLIG (Ferrero et al., 2017) The best Spanish-English performance on SNLI sentences was achieved by CompiLIG using the following cross-lingual features: conceptual similarity using DBNary (Serasset, 2015), MultiVec word embeddings (Berard et al., 2016) and character n-grams. MT is used to incorporate a similarity score based on Brychcin and Svoboda (2016)'s improvements to Sultan et al. (2015)'s method.
1708.00055#38
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
39
LIM-LIG (Nagoudi et al., 2017) Using only weighted word embeddings, LIM-LIG took second place on Arabic.17 Arabic word embeddings are summed into sentence embeddings using uniform, POS and IDF weighting schemes. Sentence similarity is computed by cosine similarity. POS and IDF outperform uniform weighting. Combining the IDF and POS weights by multiplication is reported by LIM-LIG to achieve r 0.7667, higher than all submitted Arabic (track 1) systems. DT Team (Maharjan et al., 2017) Second place on English (track 5)18 is DT Team using feature engineering combined with the following deep learning models: DSSM (Huang et al., 2013), CDSSM (Shen et al., 2014) and skip-thoughts (Kiros et al., 17The approach is similar to SIF (Arora et al., 2017) but without removal of the common principal component 18RTV took first place on track 5, English, but submitted no system description paper.
Genre    Train  Dev   Test  Total
news     3299   500   500   4299
caption  2000   625   625   3250
forum    450    375   254   1079
total    5749   1500  1379  8628
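The weighted-embedding approach described for LIM-LIG above lends itself to a compact sketch: sum word vectors with IDF weights and compare sentence vectors by cosine. The snippet below is an illustration under our own assumptions (toy vectors instead of pretrained Arabic embeddings, smoothed IDF, whitespace tokenization), not LIM-LIG's implementation.

```python
import math
from collections import Counter
import numpy as np

def idf_weights(corpus_tokens):
    """Smoothed inverse document frequency over a list of tokenized sentences."""
    n_docs = len(corpus_tokens)
    df = Counter(w for toks in corpus_tokens for w in set(toks))
    return {w: math.log((1 + n_docs) / (1 + df[w])) + 1 for w in df}

def weighted_sentence_vector(tokens, word_vectors, idf, dim=50):
    """IDF-weighted sum of word embeddings (one of the weighting schemes described above)."""
    vec = np.zeros(dim)
    for w in tokens:
        if w in word_vectors:
            vec += idf.get(w, 1.0) * word_vectors[w]
    return vec

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Toy embeddings standing in for pretrained Arabic word vectors.
rng = np.random.default_rng(1)
sents = ["a cook is preparing food", "a cook is making food", "a dog runs in a field"]
tokenized = [s.split() for s in sents]
word_vectors = {w: rng.normal(size=50) for toks in tokenized for w in toks}
idf = idf_weights(tokenized)

u = weighted_sentence_vector(tokenized[0], word_vectors, idf)
v = weighted_sentence_vector(tokenized[1], word_vectors, idf)
print(cosine(u, v))
```

A POS weighting scheme would simply replace the IDF lookup with weights keyed on each token's part of speech; LIM-LIG reports that multiplying the two weights works best.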
1708.00055#39
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
40
Genre    Train  Dev   Test  Total
news     3299   500   500   4299
caption  2000   625   625   3250
forum    450    375   254   1079
total    5749   1500  1379  8628
Table 11: STS Benchmark annotated examples by genres (rows) and by train, dev, test splits (columns).
2015). Engineered features include: unigram overlap, summed word alignment scores, fraction of unaligned words, difference in word counts by type (all, adj, adverbs, nouns, verbs), and min to max ratios of words by type. Select features have a multiplicative penalty for unaligned words. SEF@UHH (Duma and Menzel, 2017) First place on the challenging Spanish-English MT pairs (Track 4b) is SEF@UHH. Paragraph vector models (Le and Mikolov, 2014) are trained for Arabic, English, Spanish and Turkish. MT converts cross-lingual pairs into a single language and similarity scores are computed using cosine or the negation of Bray-Curtis dissimilarity. The best performing submission on track 4b uses cosine similarity of Spanish paragraph vectors with MT converting paired English sentences into Spanish.19 # 7 Analysis
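To show what the paragraph vector comparison looks like in practice, here is a small sketch using gensim's Doc2Vec as a stand-in for SEF@UHH's models; the toy corpus, hyperparameters and the two similarity readouts are illustrative assumptions, not the team's actual configuration.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from scipy.spatial.distance import braycurtis, cosine

# Toy monolingual corpus standing in for the (much larger) training collection.
corpus = ["a cook is preparing food",
          "a man paddles a canoe with a dog",
          "the kids are watching a movie at the theater"]
docs = [TaggedDocument(words=s.split(), tags=[i]) for i, s in enumerate(corpus)]

# Train a small paragraph vector (Doc2Vec) model.
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

# Infer vectors for a new sentence pair and compare them two ways.
u = model.infer_vector("a cook is making food".split())
v = model.infer_vector("there is a cook preparing food".split())
print("cosine similarity:", 1 - cosine(u, v))
print("negated Bray-Curtis dissimilarity:", -braycurtis(u, v))
```

For cross-lingual pairs the team first runs MT so that both sentences are in the language of a single trained model, mirroring the track 4b configuration described above.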
1708.00055#40
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
41
# 7 Analysis Figure 1 plots model similarity scores against human STS labels for the top 5 systems from tracks 5 (English), 1 (Arabic) and 4b (Spanish-English MT). While many systems return scores on the same scale as the gold labels, 0-5, others return scores between approximately 0 and 1. Lines on the graphs illustrate perfect performance for both a 0-5 and a 0-1 scale. Mapping the 0 to 1 scores to the range 0-5,21 approximately 80% of the scores from top performing English systems are within 1.0 pt of the gold label. Errors for Arabic are more broadly distributed, particularly for model scores between 1 and 4. The Spanish-English MT plot shows only a weak relationship between the predicted and gold scores. Table 12 provides examples of difficult sentence pairs for participant systems and illustrates common sources of error for even well-ranking systems 19For the cross-lingual tracks with language pair L1-L2, Duma and Menzel (2017) report additional experiments that vary the language choice for the paragraph vector model, using either L1 or L2. Experimental results are also provided that average the scores from the L1 and L2 models as well as results that use vector correlation to compute similarity.
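The rescaling and error-band statistic mentioned above are simple to reproduce; the sketch below maps 0-1 system scores onto the 0-5 gold scale and reports the fraction of predictions within 1.0 point of gold. The arrays are invented placeholders, not shared task outputs, and the linear mapping is our assumption about how such scores would be rescaled.

```python
import numpy as np

def rescale_to_gold(scores, low=0.0, high=1.0):
    """Linearly map system scores from [low, high] onto the 0-5 STS scale."""
    scores = np.asarray(scores, dtype=float)
    return (scores - low) / (high - low) * 5.0

def fraction_within(preds, gold, tol=1.0):
    """Fraction of predictions within `tol` points of the gold label."""
    preds, gold = np.asarray(preds, float), np.asarray(gold, float)
    return float(np.mean(np.abs(preds - gold) <= tol))

system_scores = [0.95, 0.40, 0.72, 0.10]   # hypothetical 0-1 system outputs
gold_labels = [4.8, 1.5, 3.0, 1.2]         # hypothetical 0-5 gold labels

rescaled = rescale_to_gold(system_scores)
print(rescaled)
print(fraction_within(rescaled, gold_labels, tol=1.0))
```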
1708.00055#41
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
43
including: (i) word sense disambiguation: “making” and “preparing” are very similar in the context of “food”, while “picture” and “movie” are not similar when picture is followed by “day”; (ii) attribute importance: “outside” vs. “deserted” are smaller details when contrasting “The man is in a deserted field” with “The man is outside in the field”; (iii) compositional meaning: “A man is carrying a canoe with a dog” has the same content words as “A dog is carrying a man in a canoe” but carries a different meaning; (iv) negation: systems score “. . . with goggles and a swimming cap” as nearly equivalent to “. . . without goggles or a swimming cap”. Inflated similarity scores for examples like “There is a young girl” vs. “There is a young boy with the woman” demonstrate (v) semantic blending, whereby appending “with a woman” to “boy” brings its representation closer to that of “girl”.
1708.00055#43
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
44
For multilingual and cross-lingual pairs, these issues are magnified by translation errors for systems that use MT followed by the application of a monolingual similarity model. For track 4b Spanish-English MT pairs, some of the poor performance can in part be attributed to many systems using MT to re-translate the output of another MT system, obscuring errors in the original translation. # 7.1 Contrasting Cross-lingual STS with MT Quality Estimation Since MT quality estimation pairs are translations of the same sentence, they are expected to be minimally on the same topic and have an STS score of 1 or higher.22 The actual distribution of STS scores is such that only 13% of the test instances score below 3, 22% of the instances score 3, 12% score 4 and 53% score 5. The high STS scores indicate that MT systems are surprisingly good at preserving meaning. However, even for a human, interpreting changes caused by translation errors can be difficult due both to disfluencies and subtle errors with important changes in meaning.
1708.00055#44
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
45
The Pearson correlation between the gold MT quality scores and the gold STS scores is 0.41, which shows that translation quality measures and STS are only moderately correlated. Differences are in part explained by translation quality scores penalizing all mismatches between the source segment and its translation, whereas STS focuses on differences in meaning. However, the difficult in-
Footnote 22: The evaluation data for track 4b does in fact have STS scores that are ≥ 1 for all pairs. In the 1,000 sentence training set for this track, one sentence received a score of zero.
Figure 1 (caption): Model vs. human similarity scores for top systems. Panels: (a) Track 5: English; (b) Track 1: Arabic; (c) Track 4b: Spanish-English MT.
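The 0.41 figure above is a Pearson correlation between two sets of gold labels. As a quick illustration of the metric used throughout the STS evaluation, here is a minimal plain-Python sketch of Pearson's r; the score lists are made-up stand-ins, not task data.

```python
# Minimal sketch: Pearson's r between MT-quality scores and STS scores.
# The values below are hypothetical, chosen only to exercise the function.
from math import sqrt

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

mt_quality = [0.9, 0.4, 0.75, 0.2, 0.6]   # hypothetical gold MT quality
sts_gold   = [5.0, 3.0, 4.0, 1.0, 5.0]    # hypothetical gold STS labels
print(round(pearson_r(mt_quality, sts_gold), 2))
```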
1708.00055#45
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
46
Figure 1: Model vs. human similarity scores for top systems.
Pairs in Table 12 (one per line; score columns are Human, DT Team, ECNU, BIT, FCICU, ITNLP-AiKF):
There is a cook preparing food. / A cook is making food.
The man is in a deserted field. / The man is outside in the field.
A girl in water without goggles or a swimming cap. / A girl in water, with goggles and swimming cap.
A man is carrying a canoe with a dog. / A dog is carrying a man in a canoe.
There is a young girl. / There is a young boy with the woman.
The kids are at the theater watching a movie. / it is picture day for the boys
Scores as extracted, Human/DT Team/ECNU/BIT block: 3.7 5.0 4.1 4.1 4.0 3.0 3.1 3.0 4.8 4.6 1.8 3.2 4.7 1.0 2.6 3.3 0.2 1.0 2.3 3.6 4.0 4.9 3.9 2.0; FCICU/ITNLP-AiKF block: 3.9 4.5 3.1 2.8 4.7 0.1 5.0 4.6 1.9 3.1 0.8 1.7.
Table 12: Difficult English sentence pairs (Track 5) and scores assigned by top performing systems.[20]
1708.00055#46
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
47
Table 12: Difficult English sentence pairs (Track 5) and scores assigned by top performing systems.[20]
Genre     File          Yr.    Train  Dev   Test
news      MSRpar        12     1000   250   250
news      headlines     13/6   1999   250   250
news      deft-news     14     300    0     0
captions  MSRvid        12     1000   250   250
captions  images        14/5   1000   250   250
captions  track5.en-en  17     0      125   125
forum     deft-forum    14     450    0     0
forum     ans-forums    15     0      375   0
forum     ans-ans       16     0      0     254
Table 13: STS Benchmark detailed break-down by files and years.
…development and test sets.[23] The development set can be used to design new models and tune hyperparameters. The test set should be used sparingly and only after a model design and hyperparameters have been locked against further changes. Using the STS Benchmark enables comparable assessments across different research efforts and improved tracking of the state-of-the-art.
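As a sanity check on the reconstructed Table 13, the short sketch below tallies the per-file counts into overall Train/Dev/Test sizes; the row values are copied from the table above, assuming that reconstruction is correct, and the helper itself is of course not part of the shared task.

```python
# Tally the STS Benchmark partition sizes from the per-file counts in Table 13.
# Rows: (genre, file, train, dev, test) as reconstructed above.
rows = [
    ("news",     "MSRpar",       1000, 250, 250),
    ("news",     "headlines",    1999, 250, 250),
    ("news",     "deft-news",     300,   0,   0),
    ("captions", "MSRvid",       1000, 250, 250),
    ("captions", "images",       1000, 250, 250),
    ("captions", "track5.en-en",    0, 125, 125),
    ("forum",    "deft-forum",    450,   0,   0),
    ("forum",    "ans-forums",      0, 375,   0),
    ("forum",    "ans-ans",         0,   0, 254),
]
train = sum(r[2] for r in rows)
dev   = sum(r[3] for r in rows)
test  = sum(r[4] for r in rows)
print(train, dev, test)  # column totals implied by the table above
```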
1708.00055#47
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
48
Table 13: STS Benchmark detailed break-down by files and years.
However, the difficult interpretation work required for STS annotation may increase the risk of inconsistent and subjective labels. The annotations for MT quality estimation are produced as a by-product of post-editing. Humans fix MT output and the edit distance between the output and its post-edited correction provides the quality score. This post-editing based procedure is known to produce relatively consistent estimates across annotators.
# 8 STS Benchmark
The STS Benchmark is a careful selection of the English data sets used in SemEval and *SEM STS shared tasks between 2012 and 2017. Tables 11 and 13 provide details on the composition of the benchmark. The data is partitioned into training, development and test sets.
Table 14 shows the STS Benchmark results for some of the best systems from Track 5 (EN-EN)[24] and compares their performance to competitive baselines from the literature. All baselines were run by the organizers using canonical pre-trained models made available by the originator of each method,[25] with the exception of PV-DBOW that …
Footnote 23: Similar to the STS shared task, while the training set is provided as a convenience, researchers are encouraged to incorporate other supervised and unsupervised data as long as no supervised annotations of the test partitions are used.
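The quality scores discussed above are derived from post-editing: the edit distance between the MT output and its human-corrected version. The sketch below shows one simple, word-level reading of that idea with a normalization step (an HTER-like rate); the normalization and example strings are assumptions for illustration, not the exact scheme used for the task data.

```python
# Sketch: edit distance between MT output and its post-edited correction,
# normalized by the correction length. Strings here are invented.
def edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (wa != wb))) # substitution
        prev = curr
    return prev[-1]

mt_output   = "the man carry a canoe with dog".split()
post_edited = "the man is carrying a canoe with a dog".split()
edits = edit_distance(mt_output, post_edited)
quality = 1.0 - edits / len(post_edited)  # higher means less post-editing
print(edits, round(quality, 2))
```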
1708.00055#48
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
49
Footnote 24: Each participant submitted the run which did best in the development set of the STS Benchmark, which happened to be the same as their best run in Track 5 in all cases.
Footnote 25: Sent2Vec: https://github.com/epfml/sent2vec, trained model Sent2Vec twitter unigrams; SIF: Wikipedia trained word frequencies enwiki vocab min200.txt, embeddings from lexvec.commoncrawl.300d.W+C.pos.vectors (https://github.com/alexandres/lexvec), first 15 principal components removed, α = 0.001, dev experiments varied α, principal components removed and …
1708.00055#49
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
50
STS 2017 Participants on STS Benchmark (columns: Name, Description, Dev, Test):
ECNU: Ensembles well performing feature eng. models with deep neural networks each using sent. emb. from either LSTM, DAN, prj. word emb. or avg. word emb. (Tian et al., 2017). Dev 84.7 / Test 81.0.
BIT: Ensembles sent. information content (IC) with cosine of sent. emb. derived from summed word emb. with IDF weighting scheme (Wu et al., 2017). Dev 82.9 / Test 80.9.
DT TEAM: Ensembles feature eng. and deep learning signals using sent. emb. from DSSM, CDSSM and skip-thought models (Maharjan et al., 2017). Dev 83.0 / Test 79.2.
UdL: Feature eng. model using cosine of tf-idf weighted char n-grams, num. match, sent. length and avg. word emb. cosine over PoS and NER based alignments (Al-Natsheh et al., 2017).
HCTI: Deep learning model with sent. emb. computed using paired convolutional neural networks (CNN) and then compared using fully connected layers (Shao, 2017).
RTM: Referential translation machines (RTM) use a feature eng. model with transductive learning and parallel feature decay algorithm …
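The UdL entry above mentions cosine similarity over tf-idf weighted character n-grams as one of its features. The sketch below illustrates that single feature with scikit-learn; it is a rough approximation of one ingredient only, not the submitted UdL system, and the example pairs are taken from Table 12.

```python
# Rough sketch of one UdL-style feature: cosine similarity of tf-idf weighted
# character n-grams. Illustration only, not the participating system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = [
    ("A man is carrying a canoe with a dog.",
     "A dog is carrying a man in a canoe."),
    ("There is a cook preparing food.",
     "A cook is making food."),
]

# Fit the character n-gram vocabulary on all sentences of the pairs.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
vectorizer.fit([s for pair in pairs for s in pair])

for s1, s2 in pairs:
    v1, v2 = vectorizer.transform([s1]), vectorizer.transform([s2])
    print(round(float(cosine_similarity(v1, v2)[0, 0]), 3), "|", s1, "/", s2)
```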
1708.00055#50
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
51
…compared using fully connected layers (Shao, 2017).
RTM: Referential translation machines (RTM) use a feature eng. model with transductive learning and parallel feature decay algorithm (ParFDA) training instance selection (Biçici, 2017b,a).
Dev / Test scores for UdL, HCTI and RTM: 72.4 / 79.0, 83.4 / 78.4, 73.2∗ / 70.6.
SEF@UHH: Cosine of paragraph vector (PV-DBOW) sent. emb. (Duma and Menzel, 2017). Dev 61.6 / Test 59.2.
Sentence Level Baselines (Dev / Test):
InferSent: Sent. emb. from bi-directional LSTM trained on SNLI (Conneau et al., 2017). 80.1 / 75.8.
Sent2Vec: Word & bigram emb. sum from sent. spanning CBOW (Pagliardini et al., 2017). 78.7 / 75.5.
SIF: Weighted word emb. sum with principal component removal (Arora et al., 2017). 80.1 / 72.0.
PV-DBOW: Paragraph vectors (PV-DBOW) (Le and Mikolov, 2014; Lau and Baldwin, 2016). 72.2 / 64.9.
C-PHRASE: Word emb. sum from model of syntactic constituent context words (Pham et al., 2015). 74.3 / 63.9.
Averaged Word …
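The SIF baseline is summarized above as a weighted word embedding sum with principal component removal (Arora et al., 2017). A compact sketch of that recipe follows, with toy random vectors and uniform word probabilities standing in for the pre-trained embeddings and word frequencies named in footnote 25; it shows the mechanics, not the exact baseline configuration.

```python
# Compact sketch of the SIF recipe: weight each word vector by a/(a + p(w)),
# average per sentence, then remove the projection onto the first principal
# component of the sentence embedding matrix. Vectors and probabilities are toys.
import numpy as np

def sif_embeddings(sentences, word_vecs, word_prob, a=1e-3):
    embs = []
    for sent in sentences:
        toks = [t for t in sent.lower().split() if t in word_vecs]
        weights = np.array([a / (a + word_prob[t]) for t in toks])
        vecs = np.array([word_vecs[t] for t in toks])
        embs.append((weights[:, None] * vecs).mean(axis=0))
    embs = np.array(embs)
    u = np.linalg.svd(embs, full_matrices=False)[2][0]  # first principal direction
    return embs - np.outer(embs @ u, u)                 # remove its projection

rng = np.random.default_rng(0)
vocab = "a man is carrying canoe with dog the in".split()
word_vecs = {w: rng.normal(size=8) for w in vocab}       # stand-in embeddings
word_prob = {w: 1.0 / len(vocab) for w in vocab}         # stand-in unigram probs

sents = ["A man is carrying a canoe with a dog",
         "A dog is carrying a man in a canoe",
         "The man is with the dog",
         "The dog is in the canoe"]
e = sif_embeddings(sents, word_vecs, word_prob)
cos = float(e[0] @ e[1] / (np.linalg.norm(e[0]) * np.linalg.norm(e[1])))
print(round(cos, 3))
```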
1708.00055#51
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
52
(…Pham et al., 2015). Dev: 80.1, 78.7, 80.1, 72.2, 74.3; Test: 75.8, 75.5, 72.0, 64.9, 63.9 (InferSent, Sent2Vec, SIF, PV-DBOW, C-PHRASE, as listed above).
Averaged Word Embedding Baselines:
LexVec: Weighted matrix factorization of PPMI (Salle et al., 2016a,b). Dev 68.9.
FastText: Skip-gram with sub-word character n-grams (Joulin et al., 2016). Dev 65.2.
Paragram: Paraphrase Database (PPDB) fit word embeddings (Wieting et al., 2015). Dev 63.0.
GloVe: Word co-occurrence count fit embeddings (Pennington et al., 2014). Dev 52.4.
Word2vec: Skip-gram prediction of words in a context window (Mikolov et al., 2013a,b). Dev 70.0.
1708.00055#52
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
53
Test scores for the averaged word embedding baselines (LexVec, FastText, Paragram, GloVe, Word2vec): 55.8, 53.9, 50.1, 40.6, 56.5.
* 10-fold cross-validation on combination of dev and training data.
Table 14: STS Benchmark. Pearson's r × 100 results for select participants and baseline models.
…uses the model from Lau and Baldwin (2016), and InferSent, which was reported independently. When multiple pre-trained models are available for a method, we report results for the one with the best dev set performance. For each method, input sentences are preprocessed to closely match the tokenization of the pre-trained models.[26] Default …
1708.00055#53
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
54
whether GloVe, LexVec, or Word2Vec word embeddings were used; C-PHRASE: http://clic.cimec.unitn.it/composes/cphrase-vectors.html; PV-DBOW: https://github.com/jhlau/doc2vec, AP-NEWS trained apnews dbow.tgz; LexVec: https://github.com/alexandres/lexvec, embeddings lexvec.commoncrawl.300d.W.pos.vectors.gz; FastText: https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md, Wikipedia trained embeddings from wiki.en.vec; Paragram: http://ttic.uchicago.edu/~wieting/, embeddings trained on PPDB and tuned to WS353 from Paragram-WS353; GloVe: https://nlp.stanford.edu/projects/glove/, Wikipedia and Gigaword trained 300 dim. from glove.6B.zip; Word2vec: https://code.google.com/archive/p/word2vec/, Google News trained embeddings from GoogleNews-vectors-negative300.bin.gz.
1708.00055#54
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
55
Default inference hyperparameters are used unless noted otherwise. The averaged word embedding baselines compute a sentence embedding by averaging word embeddings and then using cosine to compute pairwise sentence similarity scores. While state-of-the-art baselines for obtaining sentence embeddings perform reasonably well on the benchmark data, improved performance is obtained by top 2017 STS shared task systems. There is still substantial room for further improvement. To follow the current state-of-the-art, visit the leaderboard on the STS wiki.[27]
Footnote 26: Sent2Vec: results shown here tokenized by tweetTokenize.py; contrasting dev experiments used wikiTokenize.py, both distributed with Sent2Vec. LexVec: numbers were converted into words, all punctuation was removed, and text is lowercased; FastText: sentences are prepared using the normalize_text() function within FastText's get-wikimedia.sh script and lowercased; Paragram: Joshua (Matt Post, 2015) pipeline to pre-process and tokenize English text; C-PHRASE, GloVe, PV-DBOW & …
# 9 Conclusion
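The averaged word embedding baselines are described above as averaging word vectors into a sentence vector and scoring a pair by cosine. A minimal sketch of that pipeline follows, with toy random vectors in place of pre-trained GloVe/Word2vec embeddings; heavy lexical overlap, as in the "young girl" / "young boy with the woman" pair, tends to inflate the cosine score, which is the failure mode discussed earlier.

```python
# Minimal sketch of an averaged word embedding baseline: sentence vector is the
# mean of its word vectors; the pair score is the cosine of the two vectors.
# Toy random vectors stand in for pre-trained embeddings.
import numpy as np

rng = np.random.default_rng(1)
vocab = "there is a young girl boy with the woman".split()
emb = {w: rng.normal(size=16) for w in vocab}

def sentence_vector(sentence):
    vecs = [emb[t] for t in sentence.lower().split() if t in emb]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = "There is a young girl"
s2 = "There is a young boy with the woman"
print(round(cosine(sentence_vector(s1), sentence_vector(s2)), 3))
```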
1708.00055#55
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
56
# 9 Conclusion
We have presented the results of the 2017 STS shared task. This year's shared task differed substantially from previous iterations of STS in that the primary emphasis of the task shifted from English to multilingual and cross-lingual STS in-
Footnote 26 (continued): SIF: PTB tokenization provided by Stanford CoreNLP (Manning et al., 2014) with post-processing based on dev OOVs; Word2vec: Similar to FastText, to our knowledge, the preprocessing for the pre-trained Word2vec embeddings is not publicly described. We use the following heuristics for the Word2vec experiment: All numbers longer than a single digit are converted into a '#' (e.g., 24 → ##), then prefixed, suffixed and infixed punctuation is recursively removed from each token that does not match an entry in the model's lexicon.
Footnote 27: http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark
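Footnote 26 spells out the Word2vec preprocessing heuristics: multi-digit numbers are mapped to '#' characters and punctuation is stripped from out-of-vocabulary tokens. The sketch below is one possible reading of that description; the tiny lexicon is a hypothetical stand-in for the real Google News vocabulary.

```python
import re
import string

# Hypothetical stand-in lexicon; the real heuristic checks the pre-trained
# Word2vec vocabulary instead.
lexicon = {"The", "score", "was", "##", "good"}

def normalize_number(token):
    # Numbers longer than a single digit have every digit replaced by '#'
    # (e.g., 24 -> ##); single digits are left alone.
    return re.sub(r"\d", "#", token) if re.fullmatch(r"\d{2,}", token) else token

def strip_until_known(token):
    # Recursively remove prefixed, suffixed and (as a last resort) infixed
    # punctuation until the token matches the lexicon or nothing changes.
    while token and token not in lexicon:
        if token[0] in string.punctuation:
            token = token[1:]
        elif token[-1] in string.punctuation:
            token = token[:-1]
        else:
            stripped = "".join(c for c in token if c not in string.punctuation)
            if stripped == token:
                break
            token = stripped
    return token

tokens = ["The", "score", "was", "24", "(good)."]
print([strip_until_known(normalize_number(t)) for t in tokens])
# -> ['The', 'score', 'was', '##', 'good']
```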
1708.00055#56
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
57
Footnote 27: http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark
…involving four different languages: Arabic, Spanish, English and Turkish. Even with this substantial change relative to prior evaluations, the shared task obtained strong participation. 31 teams produced 84 system submissions, with 17 teams producing a total of 44 system submissions that processed pairs in all of the STS 2017 languages. For languages that were part of prior STS evaluations (e.g., English and Spanish), state-of-the-art systems are able to achieve strong correlations with human judgment. However, we obtain weaker correlations from participating systems for Arabic, Arabic-English and Turkish-English. This suggests further research is necessary in order to develop robust models that can both be readily applied to new languages and perform well even when less supervised training data is available. To provide a standard benchmark for English STS, we present the STS Benchmark, a careful selection of the English data sets from previous STS tasks (2012-2017). To assist in interpreting the results from new models, a number of competitive baselines and select participant systems are evaluated on the benchmark data. Ongoing improvements to the current state-of-the-art are available from an online leaderboard.
# Acknowledgments
1708.00055#57
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
58
# Acknowledgments
We thank Alexis Conneau for the evaluation of InferSent on the STS Benchmark. This material is based in part upon work supported by QNRF-NPRP 6-1020-1-199 OPTDIAC that funded Arabic translation, by a grant from the Spanish MINECO (projects TUNER TIN2015-65308-C5-1-R and MUSTER PCIN-2015-226 cofunded by EU FEDER) that funded STS label annotation, and by the QT21 EU project (H2020 No. 645452) that funded STS labels and data preparation for machine translation pairs. Iñigo Lopez-Gazpio is supported by the Spanish MECD. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of QNRF-NPRP, Spanish MINECO, QT21 EU, or the Spanish MECD.
# References
1708.00055#58
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
59
# References
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Iñigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 Task 2: Semantic Textual Similarity, English, Spanish and Pilot on Interpretability. In Proceedings of SemEval 2015. http://www.aclweb.org/anthology/S15-2045.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 Task 10: Multilingual semantic textual similarity. In Proceedings of SemEval 2014. http://www.aclweb.org/anthology/S14-2010.
1708.00055#59
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
60
Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 Task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of SemEval-2016. http://www.aclweb.org/anthology/S16-1081.
Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 Task 6: A pilot on semantic textual similarity. In Proceedings of *SEM 2012/SemEval 2012. http://www.aclweb.org/anthology/S12-1051.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic Textual Similarity. In Proceedings of *SEM 2013. http://www.aclweb.org/anthology/S13-1004.
1708.00055#60
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
61
Hussein T. Al-Natsheh, Lucie Martinet, Fabrice Muhlenbach, and Djamel Abdelkader Zighed. 2017. UdL at SemEval-2017 Task 1: Semantic textual similarity estimation of English sentence pairs using regression model over pairwise features. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2013.
Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of ICLR 2017. https://openreview.net/pdf?id=SyK00v5xx.
Ignacio Arroyo-Fernández and Ivan Vladimir Meza Ruiz. 2017. LIPN-IIMAS at SemEval-2017 Task 1: Subword embeddings, attention recurrent neural networks and cross word alignment for semantic textual similarity. In Proceedings of SemEval-2017.
Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of COLING '98. http://aclweb.org/anthology/P/P98/P98-1013.pdf.
1708.00055#61
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
62
Daniel Bär, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing semantic textual similarity by combining multiple content similarity measures. In Proceedings of *SEM 2012/SemEval 2012. http://www.aclweb.org/anthology/S12-1059.
Joe Barrow and Denis Peskov. 2017. UMDeep at SemEval-2017 Task 1: End-to-end shared weight LSTM model for semantic textual similarity. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2026.
Luisa Bentivogli, Raffaella Bernardi, Marco Marelli, Stefano Menini, Marco Baroni, and Roberto Zamparelli. 2016. SICK through the SemEval glasses. Lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. Lang Resour Eval 50(1):95–124. https://doi.org/10.1007/s10579-015-9332-5.
1708.00055#62
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
63
Alexandre Berard, Christophe Servan, Olivier Pietquin, and Laurent Besacier. 2016. MultiVec: a multilingual and multilevel representation learning toolkit for NLP. In Proceedings of LREC 2016. http://www.lrec-conf.org/proceedings/lrec2016/pdf/666_Paper.pdf.
Ergun Biçici. 2017a. Predicting translation performance with referential translation machines. In Proceedings of WMT17 (to appear).
Ergun Biçici. 2017b. RTM at SemEval-2017 Task 1: Referential translation machines for predicting semantic similarity. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2030.
Johannes Bjerva and Robert Östling. 2017. ResSim at SemEval-2017 Task 1: Multilingual word representations for semantic textual similarity. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2021.
1708.00055#63
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
64
Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of WMT 2014. http://www.aclweb.org/anthology/W/W14/W14-3302.pdf.
Ondřej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of WMT 2013. http://www.aclweb.org/anthology/W13-2201.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of EMNLP 2015. http://aclweb.org/anthology/D/D15/D15-1075.pdf.
1708.00055#64
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
65
Tomas Brychcin and Lukas Svoboda. 2016. UWB at SemEval-2016 Task 1: Semantic textual similarity using lexical, syntactic, and semantic information. In Proceedings of SemEval 2016. https://www.aclweb.org/anthology/S/S16/S16-1089.pdf.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016. Enhancing and combining sequential and tree LSTM for natural language inference. CoRR abs/1609.06038. http://arxiv.org/abs/1609.06038.
and Xiaodong He. 2015. Learning bidirectional intent embeddings by convolutional deep structured semantic models for spoken language understanding. In Proceedings of NIPS-SLU, 2015. https://www.microsoft.com/en-us/research/publication/learning-bidirectional-intent-embeddings-by-convolutional-deep-structured-semantic-models-for-spoken-language-understanding/.
1708.00055#65
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
66
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. CoRR abs/1705.02364. http://arxiv.org/abs/1705.02364.
Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2010. Recognizing textual entailment: Rational, evaluation and approaches. J. Nat. Language Eng. 16:105–105. https://doi.org/10.1017/S1351324909990234.
Birk Diedenhofen and Jochen Musch. 2015. cocor: A comprehensive solution for the statistical comparison of correlations. PLoS ONE 10(4). http://dx.doi.org/10.1371/journal.pone.0121945.
Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of COLING 04. http://aclweb.org/anthology/C/C04/C04-1051.pdf.
1708.00055#66
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
67
Mirela-Stefania Duma and Wolfgang Menzel. 2017. SEF@UHH at SemEval-2017 Task 1: Unsupervised knowledge-free semantic textual similarity via paragraph vector. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2024.
Cristina España Bonet and Alberto Barrón-Cedeño. 2017. Lump at SemEval-2017 Task 1: Towards an interlingua semantic similarity. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2019.
Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press. https://books.google.com/books?id=Rehu8OOzMIMC.
Jérémy Ferrero, Laurent Besacier, Didier Schwab, and Frédéric Agnès. 2017. CompiLIG at SemEval-2017 Task 1: Cross-language plagiarism detection methods for semantic textual similarity. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2012.
1708.00055#67
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
68
Pedro Fialho, Hugo Patinho Rodrigues, Luísa Coheur, and Paulo Quaresma. 2017. L2F/INESC-ID at SemEval-2017 Tasks 1 and 2: Lexical and semantic features in word and textual similarity. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2032.
Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of NAACL/HLT 2013. http://cs.jhu.edu/~ccb/publications/ppdb.pdf.
Basma Hassan, Samir AbdelRahman, Reem Bahgat, and Ibrahim Farag. 2017. FCICU at SemEval-2017 Task 1: Sense-based language independent semantic textual similarity approach. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2015.
Hua He, Kevin Gimpel, and Jimmy Lin. 2015. Multi-perspective sentence similarity modeling with convolutional neural networks. In Proceedings of EMNLP, pages 1576–1586. http://aclweb.org/anthology/D15-1181.
1708.00055#68
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
69
Hua He and Jimmy Lin. 2016. Pairwise word interaction modeling with deep neural networks for semantic similarity measurement. In Proceedings of NAACL/HLT. http://www.aclweb.org/anthology/N16-1108.
Hua He, John Wieting, Kevin Gimpel, Jinfeng Rao, and Jimmy Lin. 2016. UMD-TTIC-UW at SemEval-2016 Task 1: Attention-based multi-perspective convolutional neural networks for textual similarity measurement. In Proceedings of SemEval 2016.
John Henderson, Elizabeth Merkhofer, Laura Strickhart, and Guido Zarrella. 2017. MITRE at SemEval-2017 Task 1: Simple semantic similarity. In Proceedings of SemEval-2017. http://www.aclweb.org/anthology/S17-2027.
Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of NAACL/HLT. http://www.aclweb.org/anthology/N16-1162.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735–1780. http://dx.doi.org/10.1162/neco.1997.9.8.1735.
1708.00055#69
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]
1708.00055
70
Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of NAACL/HLT 2006. http://aclweb.org/anthology/N/N06/N06-2015.pdf.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of CIKM. https://www.microsoft.com/en-us/research/publication/learning-deep-structured-semantic-models-for-web-search-using-clickthrough-data/.
Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of ACL/IJCNLP. http://www.aclweb.org/anthology/P15-1162.
Sergio Jimenez, Claudia Becerra, and Alexander Gelbukh. 2012a. Soft cardinality: A parameterized similarity function for text comparison. In Proceedings of *SEM 2012/SemEval 2012. http://www.aclweb.org/anthology/S12-1061.
1708.00055#70
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and Cross-lingual Focused Evaluation
Semantic Textual Similarity (STS) measures the meaning similarity of sentences. Applications include machine translation (MT), summarization, generation, question answering (QA), short answer grading, semantic search, dialog and conversational systems. The STS shared task is a venue for assessing the current state-of-the-art. The 2017 task focuses on multilingual and cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE) data. The task obtained strong participation from 31 teams, with 17 participating in all language tracks. We summarize performance and review a selection of well performing methods. Analysis highlights common errors, providing insight into the limitations of existing models. To support ongoing work on semantic representations, the STS Benchmark is introduced as a new shared training and evaluation set carefully selected from the corpus of English STS shared task data (2012-2017).
http://arxiv.org/pdf/1708.00055
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, Lucia Specia
cs.CL, 68T50, I.2.7
To appear in proceedings of the SemEval workshop at ACL 2017; 14 pages, 14 Tables, 1 Figure
null
cs.CL
20170731
20170731
[]