Dataset schema (column: type, min/max string length or value):
doi: string, 10 / 10
chunk-id: int64, 0 / 936
chunk: string, 401 / 2.02k
id: string, 12 / 14
title: string, 8 / 162
summary: string, 228 / 1.92k
source: string, 31 / 31
authors: string, 7 / 6.97k
categories: string, 5 / 107
comment: string, 4 / 398
journal_ref: string, 8 / 194
primary_category: string, 5 / 17
published: string, 8 / 8
updated: string, 8 / 8
references: list
1704.06440
36
which exactly matches the expression in the least squares problem in Equation (71), corresponding to entropy-regularized natural policy gradient. Hence, the “damped” Q-learning update corresponds to a natural gradient step.

# 6 Experiments

To complement our theoretical analyses, we designed experiments to study the following questions:

1. Though one-step entropy bonuses are used in PG methods for neural network policies (Williams [1992], Mnih et al. [2016]), how do the entropy-regularized RL versions of policy gradients and Q-learning described in Section 3 perform on challenging RL benchmark problems? How does the “proper” entropy-regularized policy gradient method (with entropy in the returns) compare to the naive one (with a one-step entropy bonus)? (Section 6.1)

2. How do the entropy-regularized versions of Q-learning (with logsumexp) compare to the standard DQN of Mnih et al. [2015]? (Section 6.2)

3. The equivalence between PG and soft Q-learning is established in expectation; however, the actual gradient estimators differ slightly due to sampling. Furthermore, soft Q-learning is equivalent to PG with a particular penalty coefficient on the value function error. Does the equivalence hold under practical conditions? (Section 6.3)
1704.06440#36
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
37
The C-triplet + center loss is implemented by forcing the optimization of $\|x_i - W_{y_i}\|_2^2$ even if $m + \|x_i - W_{y_i}\|_2^2 - \|x_i - W_j\|_2^2$ is less than 0. From Table 2 we conclude that the choice of loss function has only a minor influence on accuracy, and that normalization is the key factor in promoting performance. When combining the softmax loss with the C-contrastive loss or the center loss, we need to add a hyperparameter to balance the two losses. The highest accuracy, 99.2167%, is obtained by softmax + 0.01 * C-contrastive. However, pure softmax with normalization already works reasonably well (see the sketch below). We also designed two ablation experiments, normalizing only the features or only the columns of the weight matrix. During these experiments we found that the scale parameter is necessary when normalizing the features, while normalizing the weights does not require it. We cannot explain this so far; it is tricky, but the network will collapse if the scale parameter is not properly added. From Table 2 we conclude that normalizing the features alone causes performance degradation, while normalizing the weights alone has little influence on accuracy. Note that these two special cases of the softmax loss are also fine-tuned based on Wen’s model. When training from scratch,
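As a hedged illustration of the scaled, normalized softmax discussed above, here is a minimal numpy sketch; the function name, toy shapes, and the default scale value are our own assumptions, not the paper's code.

```python
import numpy as np

def scaled_cosine_softmax_loss(x, W, y, s=30.0):
    """Softmax loss on L2-normalized features and weight columns.

    x: (batch, dim) raw features; W: (dim, classes) weight matrix;
    y: (batch,) integer labels; s: the scale parameter discussed above
    (training tends to collapse without it when features are normalized).
    """
    x_n = x / np.linalg.norm(x, axis=1, keepdims=True)   # normalize features
    W_n = W / np.linalg.norm(W, axis=0, keepdims=True)   # normalize weight columns
    logits = s * (x_n @ W_n)                             # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

# toy usage
rng = np.random.default_rng(0)
x, W, y = rng.normal(size=(4, 128)), rng.normal(size=(128, 10)), np.array([0, 3, 7, 1])
print(scaled_cosine_softmax_loss(x, W, y))
```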
1704.06369#37
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
37
# 6.1 A2C on Atari: Naive vs Proper Entropy Bonuses

Here we investigated whether there is an empirical effect of including entropy terms when computing returns, as described in Section 3. In this section, we compare the naive and proper policy gradient estimators (a numeric sketch of both targets appears below):

naive / 1-step:
$$\nabla_\theta \log \pi_\theta(a_t \mid s_t)\left(\sum_{d=0}^{n-1} \gamma^d r_{t+d} + \gamma^n V_\theta(s_{t+n}) - V_\theta(s_t)\right) - \tau \nabla_\theta D_{\mathrm{KL}}\big[\pi_\theta \,\big\|\, \overline{\pi}\big](s_t) \quad (79)$$

proper:
$$\nabla_\theta \log \pi_\theta(a_t \mid s_t)\left(\sum_{d=0}^{n-1} \gamma^d \left(r_{t+d} - \tau D_{\mathrm{KL}}\big[\pi_\theta \,\big\|\, \overline{\pi}\big](s_{t+d})\right) + \gamma^n V_\theta(s_{t+n}) - V_\theta(s_t)\right) - \tau \nabla_\theta D_{\mathrm{KL}}\big[\pi_\theta \,\big\|\, \overline{\pi}\big](s_t) \quad (80)$$

In the experiments on Atari, we take $\overline{\pi}$ to be the uniform distribution, which gives a standard entropy bonus up to a constant. We start with a well-tuned (synchronous, deterministic) version of A3C (Mnih et al. [2016]), henceforth called A2C (advantage actor critic), to optimize the entropy-regularized return. We use the parameter τ = 0.01 and train for 320 million frames. We did not tune any hyperparameters for the “proper” algorithm; we used the same hyperparameters that had been tuned for the “naive” algorithm.
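To make the contrast concrete, here is a minimal numpy sketch (not the authors' code) computing the scalar that multiplies the score function in Equations (79) and (80); the per-step KL values, bootstrap value, and numbers are illustrative assumptions.

```python
import numpy as np

def pg_targets(rewards, kls, v_boot, v_s0, gamma=0.99, tau=0.01):
    """Return the scalar multiplying grad-log-pi in Eq. (79) vs Eq. (80).

    rewards: r_t .. r_{t+n-1}; kls: KL[pi_theta || pi_bar](s_{t+d}) per step;
    v_boot: V(s_{t+n}); v_s0: V(s_t). The proper estimator subtracts the
    KL penalty inside the discounted return; the naive one does not.
    """
    n = len(rewards)
    disc = gamma ** np.arange(n)
    naive = (disc * rewards).sum() + gamma**n * v_boot - v_s0
    proper = (disc * (rewards - tau * kls)).sum() + gamma**n * v_boot - v_s0
    return naive, proper

print(pg_targets(np.ones(5), np.full(5, 0.2), v_boot=1.0, v_s0=0.9))
```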
1704.06440#37
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
38
In Figure 8, we show the effect of the loss weights when using two loss functions. As shown in the figure, the C-contrastive loss is more robust to the loss weight. This is not surprising because the C-contrastive loss can train a model by itself, while the center loss, which only optimizes the intra-class variance, must be trained together with other supervised losses. To make our experiment more convincing, we also train some of the loss functions on Wu’s model [38]. The results are listed in Table 4. Note that in [38], Wu et al. did not perform face mirroring when they evaluated their methods. In Table 4, we also present the result of their model after face mirroring and feature merging; a sketch of this evaluation step follows below. As shown in the table, the normalization operation still gives a significant boost to the performance. The normalization technique works even better on the BLUFR protocol, where we compare some of the models with the baseline (Table 3). From Table 3 we can see that normalization boosts performance significantly, which reveals that the normalization technique performs much better when the false alarm rate (FAR) is low.
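The face mirroring and feature merging step mentioned above can be sketched as follows; `embed_fn`, the averaging rule, and the toy embedding are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mirrored_embedding(embed_fn, image):
    """Embed the image and its horizontal flip, merge (here: sum) the two
    features, then L2-normalize so verification scores are cosine scores."""
    f = embed_fn(image) + embed_fn(image[:, ::-1])   # original + mirror
    return f / np.linalg.norm(f)

def verify(embed_fn, img_a, img_b):
    """Cosine verification score between two mirrored-and-merged embeddings."""
    return float(mirrored_embedding(embed_fn, img_a) @ mirrored_embedding(embed_fn, img_b))

# toy embed_fn standing in for a CNN
toy = lambda im: np.tanh(im.mean(axis=1))
rng = np.random.default_rng(0)
print(verify(toy, rng.random((64, 64)), rng.random((64, 64))))
```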
1704.06369#38
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
38
As shown in Figure 1, the “proper” version yields performance that is the same as or possibly greater than the “naive” version. Hence, besides being attractive theoretically, the entropy-regularized formulation could lead to practical performance gains.

[Figure 1: learning curves on SpaceInvaders, Breakout, BeamRider, Pong, Qbert, and Seaquest, comparing A2C (1-step) against A2C (proper) over 320M frames.]
1704.06440#38
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
39
Table 3: Results on LFW BLUFR[15] protocol

| model | loss function | Normalization | TPR@FAR=0.1% | DIR@FAR=1% |
|---|---|---|---|---|
| ResNet | softmax + center[36] | No | 93.35% | 67.86% |
| ResNet | softmax | Yes | 95.77% | 73.92% |
| ResNet | C-triplet + center | Yes | 95.73% | 76.12% |
| ResNet | softmax + C-contrastive | Yes | 95.83% | 77.18% |
| MaxOut | softmax[38] | No | 89.12% | 61.79% |
| MaxOut | softmax | Yes | 90.64% | 65.22% |
| MaxOut | C-contrastive | Yes | 90.32% | 68.14% |

Table 4: Results on LFW 6,000 pairs using Wu’s model[38]

| loss function | Normalization | Accuracy |
|---|---|---|
| softmax | No | 98.13% |
| softmax + mirror | No | 98.41% |
| softmax | Yes | 98.75% ± 0.008% |
| C-contrastive | Yes | 98.78% ± 0.017% |
| softmax + C-contrastive | Yes | 98.71% ± 0.017% |
1704.06369#39
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
39
Figure 1: Atari performance with different RL objectives. EntRL is A2C modified to optimize the return augmented with entropy (instead of a KL penalty). Solid lines are the average evaluation return over 3 random seeds and the shaded area is one standard deviation.

# 6.2 DQN on Atari: Standard vs Soft

Here we investigated whether soft Q-learning (which optimizes the entropy-augmented return) performs differently from standard “hard” Q-learning on Atari. We made a one-line change to a DQN implementation, altering the backup target:

$$y_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a') \qquad \text{Standard} \quad (81)$$

$$y_t = r_t + \gamma \tau \left( \log \sum_{a'} \exp\big(Q(s_{t+1}, a')/\tau\big) - \log |\mathcal{A}| \right) \qquad \text{“Soft”: KL penalty} \quad (82)$$

$$y_t = r_t + \gamma \tau \log \sum_{a'} \exp\big(Q(s_{t+1}, a')/\tau\big) \qquad \text{“Soft”: Entropy bonus} \quad (83)$$
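A minimal numpy sketch of the one-line change, covering all three backup targets in Equations (81) to (83); the function and toy Q-values are our own, and a production implementation would use a numerically stable logsumexp.

```python
import numpy as np

def dqn_target(r, q_next, kind="standard", gamma=0.99, tau=0.1):
    """Backup targets from Eqs. (81)-(83); q_next is Q(s_{t+1}, .)."""
    if kind == "standard":                            # Eq. (81): hard max
        return r + gamma * q_next.max()
    lse = tau * np.log(np.exp(q_next / tau).sum())    # soft maximum
    if kind == "soft_entropy":                        # Eq. (83)
        return r + gamma * lse
    if kind == "soft_kl":                             # Eq. (82): shifted by a constant
        return r + gamma * (lse - tau * np.log(len(q_next)))
    raise ValueError(kind)

q = np.array([1.0, 0.5, -0.2, 0.9])
for k in ("standard", "soft_entropy", "soft_kl"):
    print(k, dqn_target(0.0, q, k))
```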
1704.06440#39
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
40
Table 5: Results on YTF with Wen’s model[36]

| loss function | Normalization | Accuracy |
|---|---|---|
| softmax + center[36] | No | 93.74% |
| softmax | Yes | 94.24% |
| softmax + HIK-SVM | Yes | 94.56% |
| C-triplet + center | Yes | 94.3% |
| C-triplet + center + HIK-SVM | Yes | 94.58% |
| softmax + C-contrastive | Yes | 94.34% |
| softmax + C-contrastive + HIK-SVM | Yes | 94.72% |

The results are listed in Table 5. The models that perform better on LFW also show superior performance on YTF. Moreover, the newly proposed score histogram technique (HIK-SVM in the table) improves the accuracy further by a significant gap.
1704.06369#40
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
40
The difference between the entropy bonus and the KL penalty (against the uniform distribution) is simply a constant; however, this constant made a big difference in the experiments, since a positive constant added to the reward encourages longer episodes. Note that we use the same epsilon-greedy exploration in all conditions; the only difference is the backup equation used for computing $y_t$ and defining the loss function. The results of two runs on each game are shown in Figure 2. The entropy-bonus version with τ = 0.1 seems to perform a bit better than standard DQN; however, the KL-penalty version performs worse, so the benefit may be due to the effect of adding a small constant to the reward. We have also shown the results for 5-step Q-learning, where the algorithm is otherwise the same (a sketch of the n-step backup appears below). The performance is better on Pong and Q-bert but worse on the other games; this is the same pattern of performance found with n-step policy gradients (e.g., see the A2C results in the preceding section).

# 6.3 Entropy Regularized PG vs Online Q-Learning on Atari
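For concreteness, here is a sketch of an n-step variant of the soft backup under the same assumptions as above; the paper does not spell out its 5-step target in this excerpt, so this is our reading rather than a verbatim reproduction.

```python
import numpy as np

def n_step_soft_target(rewards, q_boot, gamma=0.99, tau=0.1):
    """n-step soft backup: accumulate n discounted rewards, then bootstrap
    with the soft maximum (logsumexp) over Q(s_{t+n}, .)."""
    n = len(rewards)
    soft_max = tau * np.log(np.exp(q_boot / tau).sum())
    return (gamma ** np.arange(n) * rewards).sum() + gamma**n * soft_max

print(n_step_soft_target(np.ones(5), np.array([1.0, 0.5, -0.2])))
```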
1704.06440#40
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
41
6 CONCLUSION AND FUTURE WORK

In this paper, we propose to apply the L2 normalization operation to the features and the weights of the last inner-product layer when training a classification model. We explain the necessity of the normalization operation from both an analytic and a geometric perspective. Two kinds of loss functions are proposed to effectively train on the normalized features. One is a reformulated softmax loss with a scale layer inserted between the cosine score and the loss. The other is a reformulation of metric learning; we introduce an agent strategy to avoid the need for hard sample mining, which is tricky and time-consuming. Experiments on two different models both show superior performance over models without normalization. From three theoretical propositions, we also provide guidance on hyperparameter settings, such as the bias term (Proposition 1), the scale parameter (Proposition 2), and the margin (Proposition 3).

5.3 Experiments on YTF

The YTF dataset[37] consists of 3,425 videos of 1,595 different people, with an average of 2.15 videos per person. We follow the unrestricted with labeled outside data protocol, which takes 5,000 video pairs to evaluate the performance.
1704.06369#41
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06369
42
Previous works usually extract face features from all frames or some selected frames of a video. Two videos can then be used to construct a confidence matrix C in which each element C_ij is the cosine distance between face features extracted from the i-th frame of the first video and the j-th frame of the second video. The final score is computed as the average of all elements in C. This one-dimensional score is then used to train a classifier, say an SVM, to determine the threshold between same identity and different identity. Here we propose to use the histogram of the elements in C as the feature to train the classifier. The number of histogram bins is set to 100 (Figure 9(a)). Then an SVM with histogram intersection kernel (HIK-SVM)[2] is used to make a two-class classification (Figure 9(b)). This method encodes more information than the one-dimensional mean value and leads to better performance on video face verification.
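A minimal sketch of this score-histogram pipeline using numpy and scikit-learn; the binning range over [-1, 1], the toy features, and the tiny training set are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC

def pair_histogram(feats_a, feats_b, bins=100):
    """Histogram of the confidence matrix C for one video pair.
    feats_*: (frames, dim) L2-normalized features; C holds all
    frame-to-frame cosine scores, binned over [-1, 1]."""
    C = feats_a @ feats_b.T
    h, _ = np.histogram(C, bins=bins, range=(-1.0, 1.0))
    return h / h.sum()                     # normalize to a distribution

def hik(X, Y):
    """Histogram intersection kernel: sum of elementwise minima."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(-1)

# toy training set: 2 "same" pairs, 2 "different" pairs
rng = np.random.default_rng(0)
def toy_feats(frames=20, dim=128):
    f = rng.normal(size=(frames, dim))
    return f / np.linalg.norm(f, axis=1, keepdims=True)

X = np.stack([pair_histogram(toy_feats(), toy_feats()) for _ in range(4)])
y = np.array([1, 1, 0, 0])
clf = SVC(kernel=hik).fit(X, y)            # SVC accepts a callable kernel
print(clf.predict(X))
```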
1704.06369#42
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
42
[Figure 2: learning curves on BeamRiderNoFrameskip-v3, BreakoutNoFrameskip-v3, EnduroNoFrameskip-v3, PongNoFrameskip-v3, QbertNoFrameskip-v3, and SeaquestNoFrameskip-v3, comparing standard DQN, standard n=5, and soft (entropy) and soft (KL) variants with τ ∈ {0.1, 0.01}, over 40M frames.]
1704.06440#42
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
43
Currently we can only fine-tune the network with the normalization techniques based on other models. If we train a model from scratch with the C-contrastive loss function, the final result is just as good as center loss[36]. But if we fine-tune a model, either Wen’s model[36] or Wu’s model[38], the performance can be further improved, as shown in Table 2 and Table 4. More effort is needed to find a way to train a model from scratch while preserving at least a similar performance to fine-tuning. Our methods and analysis in this paper are general. They can be used in other metric learning tasks, such as person re-identification or image retrieval. We will apply the proposed methods to these tasks in the future.

7 ACKNOWLEDGEMENT

This paper is funded by the Office of Naval Research (N00014-15-1-2356), the National Science Foundation (CCF-1317376), the National Natural Science Foundation of China (61671125, 61201271, 61301269) and the State Key Laboratory of Synthetical Automation for Process Industries (NO. PAL-N201401). We thank Chenxu Luo and Hao Zhu for their assistance in formula derivation.
1704.06369#43
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
43
[Figure 2, continued: remaining panels of the learning curves described above, with the same legend (standard, standard n=5, soft (entropy) and soft (KL) with τ ∈ {0.1, 0.01}) over 40M frames.]
1704.06440#43
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
44
REFERENCES

[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450 (2016).
[2] Annalisa Barla, Francesca Odone, and Alessandro Verri. 2003. Histogram intersection kernel for image classification. In Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on, Vol. 3. IEEE, III–513.
[3] Xinyuan Cai, Chunheng Wang, Baihua Xiao, Xue Chen, and Ji Zhou. 2012. Deep nonlinear metric learning with independent subspace analysis for face verification. In ACM International Conference on Multimedia. ACM, 749–752.
[4] Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1. IEEE, 539–546.
1704.06369#44
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
44
Figure 2: Different variants of soft Q-learning and standard Q-learning, applied to Atari games. Note that 4 frames = 1 timestep.

… learning dynamics. For these experiments, we modified the gradient update rule used in A2C while making no changes to any other algorithmic component (parallel rollouts, updating parameters every 5 steps, etc.). The Q-function was represented as

$$Q_\theta(s, a) = V_\theta(s) + \tau \log \pi_\theta(a \mid s),$$

which can be seen as a form of dueling architecture (Wang et al. [2015]), with $\tau \log \pi_\theta(a \mid s)$ being the “advantage stream”. $V_\theta$ and $\pi_\theta$ are parametrized by the same neural network as in A2C, where the convolutional layers and the first fully connected layer are shared.

A2C can be seen as optimizing a combination of a policy surrogate loss and a value function loss, weighted by a hyperparameter c:

$$L_{\text{policy}} = -\log \pi_\theta(a_t \mid s_t)\,\hat{A}_t + \tau D_{\mathrm{KL}}\big[\pi_\theta \,\big\|\, \overline{\pi}\big](s_t) \quad (84)$$

$$L_{\text{value}} = \tfrac{1}{2}\big\|V_\theta(s_t) - \hat{V}_t\big\|^2 \quad (85)$$

$$L_{\text{A2C}} = L_{\text{policy}} + c\,L_{\text{value}} \quad (86)$$
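The parametrization and losses above can be sketched in a few lines of numpy (forward computation only; the real implementation trains a shared-torso network). The function names and toy numbers are ours, not the authors'.

```python
import numpy as np

def q_from_policy_and_value(v_s, log_pi_s, tau=0.01):
    """Dueling-style parametrization: Q(s, a) = V(s) + tau * log pi(a|s);
    the tau*log-pi term plays the role of the advantage stream."""
    return v_s + tau * log_pi_s

def a2c_loss(log_pi_at, adv, v_s, v_target, kl, tau=0.01, c=0.5):
    """Eqs. (84)-(86): policy surrogate plus weighted value error."""
    l_policy = -log_pi_at * adv + tau * kl
    l_value = 0.5 * (v_s - v_target) ** 2
    return l_policy + c * l_value

logits = np.array([2.0, 0.5, -1.0])
log_pi = logits - np.log(np.exp(logits).sum())
print(q_from_policy_and_value(0.7, log_pi))
print(a2c_loss(log_pi[0], adv=0.3, v_s=0.7, v_target=1.0, kl=0.05))
```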
1704.06440#44
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
45
[5] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition. 580–587.
[6] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. 2013. Maxout Networks. International Conference on Machine Learning 28 (2013), 1319–1327.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[8] Chen Huang, Chen Change Loy, and Xiaoou Tang. 2016. Local similarity-aware deep feature embedding. In Advances in Neural Information Processing Systems. 1262–1270.
[9] Gary B Huang and Erik Learned-Miller. 2014. Labeled faces in the wild: Updates and new reporting procedures. Dept. Comput. Sci., Univ. Massachusetts Amherst, Amherst, MA, USA, Tech. Rep (2014), 14–003.
1704.06369#45
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
45
In normal A2C, we have found c = 0.5 to be a robust setting that works across multiple environments. On the other hand, our theory suggests that if we use this Q-function parametrization, soft Q-learning has the same expected gradient as entropy-regularized A2C with a specific weighting c = 1/τ. Hence, for the usual entropy bonus coefficient setting τ = 0.01, soft Q-learning implicitly weights the value function loss far more heavily than the usual A2C setup (c = 100 versus c = 0.5). We have found that such emphasis on the value function (c = 100) results in unstable learning for both soft Q-learning and entropy-regularized A2C. Therefore, to make Q-learning exactly match known good hyperparameters used in A2C, we scale the gradients that go into the advantage stream by 1/τ
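A sketch of the stream rescaling just described, under our reading of the truncated sentence (completed in the next chunk: the value-stream gradients are scaled by c = 0.5). This is illustrative arithmetic, not the authors' training code.

```python
import numpy as np

def rescaled_stream_gradients(grad_adv, grad_value, tau=0.01, c=0.5):
    """With Q = V + tau*log pi, soft Q-learning's policy-side gradient
    carries an extra factor of tau, so the implicit value-loss weight is
    1/tau (100 for tau = 0.01). Scaling advantage-stream gradients by
    1/tau and value-stream gradients by c = 0.5 restores the known-good
    A2C weighting."""
    return np.asarray(grad_adv) / tau, c * np.asarray(grad_value)

g_adv, g_val = rescaled_stream_gradients(np.array([0.02, -0.04]), np.array([1.0, 1.0]))
print(g_adv, g_val)
```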
1704.06440#45
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
46
[10] Gary B Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. 2007. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
[11] Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015).
[12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097–1105.
[13] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86, 11 (1998), 2278–2324.
[14] Yann LeCun, Corinna Cortes, and Christopher Burges. 1998. The MNIST database of handwritten digits. (1998). http://yann.lecun.com/exdb/mnist/
1704.06369#46
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
46
and scale the gradients that go into the value function stream by c = 0.5. With the same default A2C hyperparameters, the learning curves of PG and QL are almost identical in most games (Figure 3), which indicates that the learning dynamics of both update rules are essentially the same even when the gradients are approximated with a small number of samples. Notably, the Q-learning method here demonstrates stable learning without the use of a target network or ε-greedy exploration schedule.

Figure 3: Atari performance with policy gradient vs Q-learning update rules (SpaceInvaders, Breakout, BeamRider, Pong, Qbert, and Seaquest; 320M frames). Solid lines are the average evaluation return over 3 random seeds and the shaded area is one standard deviation.

# 7 Related Work

Three recent papers have drawn the connection between policy-based methods and value-based methods, which becomes close with entropy regularization.
1704.06440#46
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
47
[15] Shengcai Liao, Zhen Lei, Dong Yi, and Stan Z Li. 2014. A benchmark study of large-scale unconstrained face recognition. In IEEE International Joint Conference on Biometrics. IEEE, 1–8.
[16] Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. 2016. Large-Margin Softmax Loss for Convolutional Neural Networks. In International Conference on Machine Learning. 507–516.
[17] Yu Liu, Hongyang Li, and Xiaogang Wang. 2017. Learning Deep Features via Congenerous Cosine Loss for Person Recognition. arXiv preprint arXiv:1702.06890 (2017).
[18] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision. 3730–3738.
[19] Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition. 3431–3440.
1704.06369#47
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
47
O’Donoghue et al. [2016] begin with a similar motivation to the current paper: that a possible explanation for Q-learning and SARSA is that their updates are similar to policy gradient updates. They decompose the Q-function into a policy part and a value part, inspired by dueling Q-networks (Wang et al. [2015]):

$$Q(s, a) = V(s) + \tau\left(\log \pi(a \mid s) + S\big[\pi(\cdot \mid s)\big]\right) \quad (87)$$

This form is chosen so that the term multiplying τ has expectation zero under π, which is a property that the true advantage function satisfies: $\mathbb{E}_\pi[A_\pi] = 0$ (verified numerically in the sketch below). Note that our work omits the S term, because it is most natural to define the Q-function to not include the first entropy term. The authors show that taking the gradient of the Bellman error of the above Q-function leads to a result similar to the policy gradient. They then propose an algorithm called PGQ that mixes together the updates from different prior algorithms.
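The zero-expectation property of the term multiplying τ in Equation (87) is easy to check numerically; the following sketch (our own, with toy logits) verifies it.

```python
import numpy as np

def pgq_decomposition(v, logits, tau=0.01):
    """Q(s, .) = V(s) + tau * (log pi(.|s) + S[pi(.|s)]) as in Eq. (87).
    Returns Q and the mean under pi of the term multiplying tau, which
    is zero: E_pi[log pi] = -S, so E_pi[log pi + S] = 0."""
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    entropy = -(pi * np.log(pi)).sum()
    centered = np.log(pi) + entropy
    return v + tau * centered, (pi * centered).sum()

q, check = pgq_decomposition(v=1.0, logits=np.array([2.0, 0.5, -1.0]))
print(q, check)   # check is ~0 up to floating point error
```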
1704.06440#47
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
48
[20] Chaochao Lu and Xiaoou Tang. 2014. Surpassing human-level face verification performance on LFW with GaussianFace. arXiv preprint arXiv:1404.3840 (2014).
[21] Md. Abul Hasnat, Julien Bohné, Jonathan Milgram, Stéphane Gentric, and Liming Chen. 2017. von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification. arXiv preprint arXiv:1706.04264 (2017).
[22] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. 2016. Deep metric learning via lifted structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition. 4004–4012.
[23] Omkar M Parkhi, Andrea Vedaldi, and Andrew Zisserman. 2015. Deep Face Recognition. In BMVC, Vol. 1. 6.
[24] Rajeev Ranjan, Carlos D. Castillo, and Rama Chellappa. 2017. L2-constrained Softmax Loss for Discriminative Face Verification. arXiv preprint arXiv:1703.09507 (2017).
1704.06369#48
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
48
Nachum et al. [2017] also discuss the entropy-regularized reinforcement learning setting, and develop an off-policy method that applies in this setting. Their argument (modified to use our notation and KL penalty instead of entropy bonus) is as follows. The advantage function $A_\pi(s, a) = Q_\pi(s, a) - V_\pi(s)$ lets us define a multi-step consistency equation, which holds even if the actions were sampled from a different (suboptimal) policy. In the setting of deterministic dynamics, $Q_\pi(s_t, a_t) = r_t + \gamma V_\pi(s_{t+1})$, hence

$$\sum_{t=0}^{n-1} \gamma^t A_\pi(s_t, a_t) = \sum_{t=0}^{n-1} \gamma^t \left( r_t + \gamma V_\pi(s_{t+1}) - V_\pi(s_t) \right) = \sum_{t=0}^{n-1} \gamma^t r_t + \gamma^n V_\pi(s_n) - V_\pi(s_0) \quad (88)$$

If π is the optimal policy (for the discounted, entropy-augmented return), then it is the Boltzmann policy for Qπ, thus

$$\tau \left( \log \pi(a \mid s) - \log \bar{\pi}(a \mid s) \right) = A_{Q_\pi}(s, a) \quad (89)$$

This expression for the advantage can be substituted into Equation (88), giving the consistency equation
1704.06440#48
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
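A quick numeric check of the telescoping identity in Equation (88) from the chunk above. This sketch is illustrative only (random rewards and values, not from the paper):

```python
import numpy as np

# Under deterministic dynamics, A(s_t, a_t) = r_t + gamma*V(s_{t+1}) - V(s_t),
# and the discounted sum of advantages telescopes into the discounted return
# plus the boundary value terms (Equation (88)).
rng = np.random.default_rng(0)
n, gamma = 5, 0.99
r = rng.normal(size=n)        # rewards r_0 ... r_{n-1}
V = rng.normal(size=n + 1)    # values V(s_0) ... V(s_n)

A = r + gamma * V[1:] - V[:-1]
lhs = sum(gamma**t * A[t] for t in range(n))
rhs = sum(gamma**t * r[t] for t in range(n)) + gamma**n * V[n] - V[0]
assert np.isclose(lhs, rhs)
```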
1704.06369
49
[25] Sam Roweis, Geoffrey Hinton, and Ruslan Salakhutdinov. 2004. Neighbourhood component analysis. Advances in Neural Information Processing Systems 17 (2004), 513–520. [26] Walter Rudin and others. 1964. Principles of mathematical analysis, Chapter 10. Vol. 3. McGraw-Hill New York. [27] Tim Salimans and Diederik P Kingma. 2016. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems. 901–901. [28] Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition. 815–823.
1704.06369#49
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
49
This expression for the advantage can be substituted into Equation (88), giving the consistency equation

$$\sum_{t=0}^{n-1} \gamma^t \tau \left( \log \pi(a_t \mid s_t) - \log \bar{\pi}(a_t \mid s_t) \right) = \sum_{t=0}^{n-1} \gamma^t r_t + \gamma^n V_\pi(s_n) - V_\pi(s_0), \quad (90)$$

which holds when π is optimal. The authors define a squared-error objective formed by taking LHS minus RHS in Equation (90), and jointly minimize it with respect to the parameters of π and V. The resulting algorithm is a kind of Bellman residual minimization: it optimizes with respect to the future target values, rather than treating them as fixed (Scherrer [2010]).
1704.06440#49
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
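A minimal sketch of the squared-error consistency objective described in the chunk above (LHS minus RHS of Equation (90)). This is our own illustration, not the reference implementation of Nachum et al. [2017]; the tensors are assumed to come from differentiable policy and value networks, so gradients flow into both π and V, Bellman-residual style:

```python
import torch

def path_consistency_loss(logp, logp_ref, rewards, v_first, v_last, gamma, tau):
    """Squared LHS-minus-RHS of Equation (90) on one length-n sub-trajectory.

    logp, logp_ref, rewards: length-n tensors with log pi(a_t|s_t),
    log pi_bar(a_t|s_t), and r_t; v_first, v_last: scalars V(s_0), V(s_n).
    """
    n = rewards.shape[0]
    disc = gamma ** torch.arange(n, dtype=rewards.dtype)
    lhs = torch.sum(disc * tau * (logp - logp_ref))
    rhs = torch.sum(disc * rewards) + gamma**n * v_last - v_first
    return (lhs - rhs) ** 2
```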
1704.06369
50
[29] Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Net- works for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556 (2014). [30] Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems. 1849–1857. [31] Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. 2014. Deep learning face representation by joint identification-verification. In Advances in neural information processing systems. 1988–1996. [32] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition. 1–9. [33] Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. 2014. Deep- face: Closing the gap to human-level performance in face verification. In IEEE Conference on Computer Vision and Pattern Recognition. 1701–1708.
1704.06369#50
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
50
Haarnoja et al. [2017] work in the same setting of soft Q-learning as the current paper, and they are concerned with tasks with high-dimensional action spaces, where we would like to learn stochastic policies that are multi-modal, and we would like to use Q-functions for which there is no closed-form way of sampling from the Boltzmann distribution π(a | s) ∝ exp(Q(s, a)/τ). Hence, they use a method called Stein Variational Gradient Descent to derive a procedure that jointly updates the Q-function and a policy π, which approximately samples from the Boltzmann distribution; this resembles variational inference, where one makes use of an approximate posterior distribution.

# 8 Conclusion

We study the connection between two of the leading families of RL algorithms used with deep neural networks. In a framework of entropy-regularized RL we show that soft Q-learning is equivalent to a policy gradient method (with value function fitting) in terms of expected gradients (first-order view). In addition, we also analyze how a damped Q-learning method can be interpreted as implementing natural policy gradient (second-order view). Empirically, we show that the entropy-regularized formulation considered in our theoretical analysis works in practice on the Atari RL benchmark, and that the equivalence holds in a practically relevant regime.
1704.06440#50
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
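For contrast with the continuous-action setting of Haarnoja et al. [2017] discussed above: with a discrete action space, the soft value and the Boltzmann policy do have closed forms, so no approximate sampler is needed. A minimal illustrative sketch (made-up Q-values):

```python
import numpy as np
from scipy.special import logsumexp

def soft_value_and_policy(q, tau):
    """Soft value V(s) = tau * logsumexp(Q(s, .)/tau) and the Boltzmann
    policy pi(a|s) = exp((Q(s, a) - V(s))/tau) for a vector of Q-values."""
    v = tau * logsumexp(q / tau)
    pi = np.exp((q - v) / tau)
    return v, pi

v, pi = soft_value_and_policy(np.array([1.0, 2.0, 0.5]), tau=0.1)
assert np.isclose(pi.sum(), 1.0)
```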
1704.06369
51
[34] Kilian Q Weinberger and Lawrence K Saul. 2009. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research 10, Feb (2009), 207–244. [35] Zhiding Yu Ming Li Bhiksha Raj Weiyang Liu, Yandong Wen and Le Song. 2017. SphereFace: Deep Hypersphere Embedding for Face Recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. [36] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. 2016. A Discriminative Feature Learning Approach for Deep Face Recognition. In European Conference on Computer Vision. Springer, 499–515. [37] Lior Wolf, Tal Hassner, and Itay Maoz. 2011. Face recognition in unconstrained videos with matched background similarity. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 529–534. [38] Xiang Wu, Ran He, and Zhenan Sun. 2015. A Lightened CNN for Deep Face Representation. arXiv preprint arXiv:1511.02683 (2015). [39] Xiang Xiang and Trac D Tran. 2016. Pose-Selective Max Pooling for Measuring Similarity. Lecture Notes in Computer Science 10165 (2016).
1704.06369#51
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06440
51
# 9 Acknowledgements We would like to thank Matthieu Geist for pointing out an error in the first version of this manuscript, Chao Gao for pointing out several errors in the second version, and colleagues at OpenAI for insightful discussions. # References Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1512.08562, 2015. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017. Sham Kakade. A natural policy gradient. Advances in neural information processing systems, 2:1531–1538, 2002. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
1704.06440#51
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
52
[39] Xiang Xiang and Trac D Tran. 2016. Pose-Selective Max Pooling for Measuring Similarity. Lecture Notes in Computer Science 10165 (2016). [40] Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. 2014. Learning face representation from scratch. arXiv preprint arXiv:1411.7923 (2014). [41] Xiao Zhang, Zhiyuan Fang, Yandong Wen, Zhifeng Li, and Yu Qiao. 2016. Range Loss for Deep Face Recognition with Long-tail. arXiv preprint arXiv:1611.08976 (2016).

8 APPENDIX

8.1 Proof of Proposition 1

Proposition 1. For the softmax loss with no-bias inner-product similarity as its metric, let $P_i(f) = \frac{e^{W_i^T f}}{\sum_j e^{W_j^T f}}$ denote the probability of f being classified as class i. For a given scale s > 1, if $i = \arg\max_j (W_j^T f)$, then $P_i(sf) \ge P_i(f)$ always holds.

Proof: Let t = s − 1; after scaling, we have,
1704.06369#52
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
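Proposition 1 above is easy to sanity-check numerically: scaling a feature by s > 1 can only increase the softmax probability of the top-scoring class. An illustrative check on random data (not from the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))   # one weight vector per class
f = rng.normal(size=64)         # a feature vector
s = 3.0                         # any scale > 1

i = np.argmax(W @ f)
assert softmax(W @ (s * f))[i] >= softmax(W @ f)[i]   # P_i(sf) >= P_i(f)
```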
1704.06440
52
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016. Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. arXiv preprint arXiv:1702.08892, 2017. Brendan O’Donoghue, Remi Munos, Koray Kavukcuoglu, and Volodymyr Mnih. PGQ: Combining policy gradient and Q-learning. arXiv preprint arXiv:1611.01626, 2016. Bruno Scherrer. Should one compute the temporal difference fix point or minimize the Bellman residual? The unified oblique projection view. arXiv preprint arXiv:1011.4362, 2010. John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528–3536, 2015a.
1704.06440#52
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06440
53
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b. Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 2010.
1704.06440#53
Equivalence Between Policy Gradients and Soft Q-Learning
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and $Q$-learning methods. $Q$-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the $Q$-values they estimate are very inaccurate. A partial explanation may be that $Q$-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between $Q$-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) $Q$-learning is exactly equivalent to a policy gradient method. We also point out a connection between $Q$-learning methods and natural policy gradient methods. Experimentally, we explore the entropy-regularized versions of $Q$-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a $Q$-learning method that closely matches the learning dynamics of A3C without using a target network or $\epsilon$-greedy exploration schedule.
http://arxiv.org/pdf/1704.06440
John Schulman, Xi Chen, Pieter Abbeel
cs.LG
null
null
cs.LG
20170421
20181014
[ { "id": "1602.01783" }, { "id": "1702.08892" }, { "id": "1702.08165" }, { "id": "1611.01626" }, { "id": "1511.06581" }, { "id": "1512.08562" }, { "id": "1506.02438" } ]
1704.06369
54
= P_i(f). The equality holds if $W^T f = 0$ or $W_i = W_j, \forall i, j \in [1, n]$, which is almost impossible in practice.

8.2 Proof of Proposition 2

Proposition 2. (Loss Bound After Normalization) Assume that every class has the same number of samples, and all the samples are well-separated, i.e., each sample's feature is exactly the same as its corresponding class's weight. If we normalize both the features and every column of the weights to have a norm of ℓ, the softmax loss will have a lower bound, $\log\left(1 + (n-1)\,e^{-\frac{n}{n-1}\ell^2}\right)$, where n is the class number.

Proof: Assume $\|W_i\| = \ell, \forall i \in [1, n]$ for convenience. Since we have already assumed that all samples are well-separated, we directly use $W_i$ to represent the i-th class's feature. The definition of the softmax loss is,

$$\mathcal{L}_S = -\frac{1}{n}\sum_{i=1}^{n} \log \frac{e^{W_i^T W_i}}{\sum_{j=1}^{n} e^{W_i^T W_j}} \quad (14)$$

This formula is different from Equation (1) because we assume that every class has the same sample number. By dividing $e^{W_i^T W_i} = e^{\ell^2}$ from both the numerator and denominator,
1704.06369#54
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06369
55
This formula is different from Equation (1) because we assume that every class has the same sample number. By dividing $e^{W_i^T W_i} = e^{\ell^2}$ from both the numerator and denominator,

$$\mathcal{L}_S = -\frac{1}{n}\sum_{i=1}^{n} \log \frac{1}{1 + \sum_{j=1, j\neq i}^{n} e^{W_i^T W_j - \ell^2}} = \frac{1}{n}\sum_{i=1}^{n} \log \left( 1 + \sum_{j=1, j\neq i}^{n} e^{W_i^T W_j - \ell^2} \right) \quad (15)$$

Since $f(x) = e^x$ is a convex function, $\frac{1}{n-1}\sum_{j=1, j\neq i}^{n} e^{x_j} \ge e^{\frac{1}{n-1}\sum_{j=1, j\neq i}^{n} x_j}$; then we have,

$$\mathcal{L}_S \ge \frac{1}{n}\sum_{i=1}^{n} \log \left( 1 + (n-1)\, e^{\frac{1}{n-1}\sum_{j=1, j\neq i}^{n} (W_i^T W_j - \ell^2)} \right) \quad (16)$$
1704.06369#55
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06369
56
$$\mathcal{L}_S \ge \frac{1}{n}\sum_{i=1}^{n} \log \left( 1 + (n-1)\, e^{\frac{1}{n-1}\sum_{j=1, j\neq i}^{n} (W_i^T W_j - \ell^2)} \right) \quad (16)$$

The equality holds if and only if all $W_i^T W_j$, $1 \le i < j \le n$, have the same value, i.e., features from different classes have the same distance. Unfortunately, in d-dimensional space there are only d + 1 unique vertices that ensure every two vertices have the same distance. All these vertices form a regular d-simplex [26], e.g., a regular 2-simplex is an equilateral triangle and a regular 3-simplex is a regular tetrahedron. Since the class number is usually much bigger than the feature dimension in face verification datasets, this equality actually cannot hold in practice. One improvement over this inequality is taking the feature dimension into consideration, because we have actually omitted the feature dimension term in this step. Similar to $f(x) = e^x$, the softplus function $s(x) = \log(1 + Ce^x)$ is also a convex function when C > 0, so that $\frac{1}{n}\sum_{i=1}^{n} \log(1 + Ce^{x_i}) \ge \log\left(1 + Ce^{\frac{1}{n}\sum_{i=1}^{n} x_i}\right)$; then we have
1704.06369#56
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06369
57
$$\mathcal{L}_S \ge \log \left( 1 + (n-1)\, e^{\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n} (W_i^T W_j - \ell^2)} \right) \quad (17)$$

This equality holds if and only if for every $W_i$, the sums of distances to the other classes' weights, $\sum_{j=1, j\neq i}^{n} W_i^T W_j$, are all the same. Note that

$$\left\| \sum_{i=1}^{n} W_i \right\|^2 = n\ell^2 + \sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n} W_i^T W_j, \quad (18)$$

so

$$\sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n} W_i^T W_j \ge -n\ell^2. \quad (19)$$

The equality holds if and only if $\sum_{i=1}^{n} W_i = 0$. Thus,

$$\mathcal{L}_S \ge \log \left( 1 + (n-1)\, e^{\frac{-n\ell^2 - n(n-1)\ell^2}{n(n-1)}} \right) = \log \left( 1 + (n-1)\, e^{-\frac{n}{n-1}\ell^2} \right). \quad (20)$$

8.3 Proof of Proposition 3

Proposition 3. Using an agent for each class instead of a specific sample would cause a distortion of $\frac{1}{|C_i|}\sum_{j \in C_i} \left( d(f_0, f_j) - d(f_0, W_i) \right)^2$, where $W_i$ is the agent of the i-th class. The distortion is bounded by $\frac{1}{|C_i|}\sum_{j \in C_i} d(f_j, W_i)^2$.

Proof: Since d(x, y) is a metric, through the triangle inequality we have
1704.06369#57
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
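The lower bound of Equation (20) above is a closed-form function of the class count n and the norm ℓ, so it can be evaluated directly; the paper uses it to argue that a small norm caps how low the softmax loss can go. An illustrative computation (values chosen arbitrarily):

```python
import numpy as np

def softmax_loss_lower_bound(n, ell):
    """Equation (20): log(1 + (n - 1) * exp(-n/(n-1) * ell^2))."""
    return np.log1p((n - 1) * np.exp(-n / (n - 1) * ell**2))

for ell in (1.0, 4.0, 8.0):
    print(ell, softmax_loss_lower_bound(n=10_000, ell=ell))
# With 10k classes, ell = 1 keeps the bound above 8 nats, while larger
# norms drive it toward zero.
```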
1704.06369
58
Proof: Since d(x, y) is a metric, through the triangle inequality we have

$$d(f_0, W_i) - d(f_j, W_i) \le d(f_0, f_j) \le d(f_0, W_i) + d(f_j, W_i), \quad (21)$$

so

$$-d(f_j, W_i) \le d(f_0, f_j) - d(f_0, W_i) \le d(f_j, W_i), \quad (22)$$

and thus,

$$\left( d(f_0, f_j) - d(f_0, W_i) \right)^2 \le d(f_j, W_i)^2. \quad (23)$$

As a result,

$$\frac{1}{|C_i|}\sum_{j \in C_i} \left( d(f_0, f_j) - d(f_0, W_i) \right)^2 \le \frac{1}{|C_i|}\sum_{j \in C_i} d(f_j, W_i)^2. \quad (24)$$

8.4 Inference of Equation 4

Equation 4:

$$\frac{\partial \mathcal{L}}{\partial x_i} = \frac{\frac{\partial \mathcal{L}}{\partial \tilde{x}_i} - \tilde{x}_i \sum_j \frac{\partial \mathcal{L}}{\partial \tilde{x}_j} \tilde{x}_j}{\|\mathbf{x}\|_2}$$

Inference: Here we treat $\|\mathbf{x}\|_2$ as an independent variable. Note that $\tilde{x}_j = \frac{x_j}{\|\mathbf{x}\|_2}$ and $\|\mathbf{x}\|_2 = \sqrt{\sum_j x_j^2 + \epsilon}$. We have,

$$\frac{\partial \mathcal{L}}{\partial x_i} = \sum_j \frac{\partial \mathcal{L}}{\partial \tilde{x}_j} \frac{\partial \tilde{x}_j}{\partial x_i} + \frac{\partial \mathcal{L}}{\partial \|\mathbf{x}\|_2} \frac{\partial \|\mathbf{x}\|_2}{\partial x_i} \quad (25)$$
1704.06369#58
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
1704.06369
59
and $\|\mathbf{x}\|_2 = \sqrt{\sum_j x_j^2 + \epsilon}$. We have,

$$\frac{\partial \mathcal{L}}{\partial x_i} = \sum_j \frac{\partial \mathcal{L}}{\partial \tilde{x}_j} \frac{\partial \tilde{x}_j}{\partial x_i} + \frac{\partial \mathcal{L}}{\partial \|\mathbf{x}\|_2} \frac{\partial \|\mathbf{x}\|_2}{\partial x_i} = \frac{\partial \mathcal{L}}{\partial \tilde{x}_i} \frac{1}{\|\mathbf{x}\|_2} + \sum_j \frac{\partial \mathcal{L}}{\partial \tilde{x}_j} \frac{-x_j}{\|\mathbf{x}\|_2^2} \cdot \frac{x_i}{\|\mathbf{x}\|_2} = \frac{\frac{\partial \mathcal{L}}{\partial \tilde{x}_i} - \tilde{x}_i \sum_j \frac{\partial \mathcal{L}}{\partial \tilde{x}_j} \tilde{x}_j}{\|\mathbf{x}\|_2} \quad (26)$$

8.5 Proof of $\langle \mathbf{x}, \frac{\partial \mathcal{L}}{\partial \mathbf{x}} \rangle = 0$

Proof: The vectorized version of Equation 4 is

$$\frac{\partial \mathcal{L}}{\partial \mathbf{x}} = \frac{\frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}} - \tilde{\mathbf{x}} \langle \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}}, \tilde{\mathbf{x}} \rangle}{\|\mathbf{x}\|_2} \quad (27)$$

So,

$$\langle \mathbf{x}, \frac{\partial \mathcal{L}}{\partial \mathbf{x}} \rangle = \frac{\langle \mathbf{x}, \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}} \rangle - \langle \mathbf{x}, \tilde{\mathbf{x}} \rangle \langle \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}}, \tilde{\mathbf{x}} \rangle}{\|\mathbf{x}\|_2} = \frac{\langle \mathbf{x}, \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}} \rangle - \|\mathbf{x}\|_2 \langle \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}}, \tilde{\mathbf{x}} \rangle}{\|\mathbf{x}\|_2} = \frac{\langle \mathbf{x}, \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}} \rangle - \langle \frac{\partial \mathcal{L}}{\partial \tilde{\mathbf{x}}}, \mathbf{x} \rangle}{\|\mathbf{x}\|_2} = 0, \quad (28)$$

using $\langle \mathbf{x}, \tilde{\mathbf{x}} \rangle = \|\mathbf{x}\|_2$ and $\|\mathbf{x}\|_2 \tilde{\mathbf{x}} = \mathbf{x}$.
1704.06369#59
NormFace: L2 Hypersphere Embedding for Face Verification
Thanks to the recent developments of Convolutional Neural Networks, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differentiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve performance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%. Codes and models are released on https://github.com/happynear/NormFace
http://arxiv.org/pdf/1704.06369
Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille
cs.CV
camera-ready version
null
cs.CV
20170421
20170726
[ { "id": "1502.03167" }, { "id": "1611.08976" }, { "id": "1703.09507" }, { "id": "1511.02683" }, { "id": "1706.04264" }, { "id": "1607.06450" }, { "id": "1702.06890" } ]
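Equations (27)–(28) above say the gradient passed through the normalization layer is orthogonal to the input. A minimal sketch of the backward rule with a numeric check (the function name is ours):

```python
import numpy as np

def l2_normalize_backward(x, grad_xt, eps=1e-12):
    """Equation (4): dL/dx = (dL/dxt - xt * <dL/dxt, xt>) / ||x||_2,
    where xt = x / ||x||_2 is the normalized feature."""
    norm = np.sqrt(np.sum(x**2) + eps)
    xt = x / norm
    return (grad_xt - xt * np.dot(grad_xt, xt)) / norm

rng = np.random.default_rng(0)
x, g = rng.normal(size=128), rng.normal(size=128)
assert abs(np.dot(x, l2_normalize_backward(x, g))) < 1e-8   # Equation (28)
```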
1704.05179
0
# SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine

Matt Dunn, Center for Data Science, NYU; Levent Sagun, Courant Institute, NYU; Mike Higgins, Center for Data Science, NYU; V. Uğur Güney, Senior Data Scientist, Driversiti; Volkan Cirik, School of Computer Science, CMU; Kyunghyun Cho, Courant Institute and Center for Data Science, NYU

# Abstract
1704.05179#0
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
0
# Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction

Kun Gai1, Xiaoqiang Zhu1, Han Li1, Kai Liu2†, Zhe Wang3† 1 Alibaba Inc. [email protected], {xiaoqiang.zxq, lihan.lh}@alibaba-inc.com 2 Beijing Particle Inc. [email protected] 3 University of Cambridge. [email protected] † contributed to this paper while working at Alibaba

January 14, 2022

# Abstract
1704.05194#0
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
0
# A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference

Adina Williams1 [email protected]; Nikita Nangia2 [email protected]; Samuel R. Bowman1,2,3 [email protected]

1Department of Linguistics, New York University; 2Center for Data Science, New York University; 3Department of Computer Science, New York University

# Abstract

which current models extract reasonable representations of language meaning in these settings.
1704.05426#0
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
1
Senior Data Scientist, Driversiti; School of Computer Science, CMU

# Kyunghyun Cho Courant Institute and Center for Data Science, NYU

# Abstract

We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.

# Introduction
1704.05179#1
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
1
January 14, 2022

# Abstract

CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with L1 and L2,1 regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and the quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.

# 1 Introduction

Click-through rate (CTR) prediction is a core problem in the multi-billion dollar online advertising industry. To improve the accuracy of CTR prediction, more and more data are involved, making CTR prediction a large scale learning problem, with massive samples and high dimension features.
1704.05194#1
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
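A sketch of the LS-PLM prediction function as commonly described: a softmax gate divides the feature space into m soft regions, and each region carries its own logistic regression. Parameter names are ours, and this omits the L1/L2,1-regularized training procedure the paper focuses on:

```python
import numpy as np

def ls_plm_predict(x, U, W):
    """p(y=1|x) = sum_i softmax_i(Ux) * sigmoid(w_i . x), over m soft regions.

    x: (d,) feature vector; U: (m, d) gate weights; W: (m, d) LR weights.
    """
    z = U @ x
    gate = np.exp(z - z.max())
    gate /= gate.sum()                       # softmax over the m pieces
    pred = 1.0 / (1.0 + np.exp(-(W @ x)))    # per-piece logistic regression
    return float(gate @ pred)

rng = np.random.default_rng(0)
U, W, x = rng.normal(size=(12, 30)), rng.normal(size=(12, 30)), rng.normal(size=30)
print(ls_plm_predict(x, U, W))   # a probability in (0, 1)
```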
1704.05426
1
2Center for Data Science, New York University; 3Department of Computer Science, New York University

# Abstract

which current models extract reasonable representations of language meaning in these settings.

This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. At 433k examples, this resource is one of the largest corpora available for natural language inference (a.k.a. recognizing textual entailment), improving upon available resources in both its coverage and difficulty. MultiNLI accomplishes this by offering data from ten distinct genres of written and spoken English, making it possible to evaluate systems on nearly the full complexity of the language, while supplying an explicit setting for evaluating cross-genre domain adaptation. In addition, an evaluation using existing machine learning models designed for the Stanford NLI corpus shows that it represents a substantially more difficult task than does that corpus, despite the two showing similar levels of inter-annotator agreement.
1704.05426#1
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
2
# Introduction

One of the driving forces behind the recent success of deep learning in challenging tasks, such as object recognition (Krizhevsky et al., 2012), speech recognition (Xiong et al., 2016) and machine translation (Bahdanau et al., 2014), has been the increasing availability of large-scale annotated data. This observation has also led to the interest in building a large-scale annotated dataset for question-answering. In 2015, Bordes et al. (2015) released a large-scale dataset of 100k open-world question-answer pairs constructed from Freebase, and Hermann et al. (2015) released two datasets, each consisting of closed-world question-answer pairs automatically generated from news articles. The latter was followed by Hill et al. (2015), Rajpurkar et al. (2016) and Onishi et al. (2016), each of which has released a set of large-scale closed-world question-answer pairs focused on a specific aspect of question-answering.
1704.05179#2
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
2
The traditional solution is to apply a linear logistic regression (LR) model, trained in a parallel manner (Brendan et al. 2013, Andrew & Gao 2007). An LR model with L1 regularization can generate a sparse solution, making it fast for online prediction. Unfortunately, CTR prediction is a highly nonlinear problem. In particular, user-click generation involves many complex factors, like ad quality, context information, user interests, and complex interactions of these factors. To help the LR model capture the nonlinearity, feature engineering techniques are explored, which is both time-consuming and labor-intensive. Another direction is to capture the nonlinearity with well-designed models. Facebook (He et al. 2014) uses a hybrid model which combines decision trees with logistic regression. The decision tree plays a nonlinear feature transformation role, whose output is fed to the LR model. However, tree-based methods are not suitable for very sparse and high dimensional data (Safavian S. R. & Landgrebe D. 1990). (Rendle S. 2010) introduces Factorization Machines (FM), which involve interactions among features using 2nd-order functions (or other given-number-order functions). However, FM cannot fit all general nonlinear patterns in data (like other higher-order patterns).
1704.05194#2
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
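For reference, the degree-2 Factorization Machine mentioned above scores an example with a linear term plus pairwise interactions through factorized weights; the standard O(kd) reformulation avoids the explicit double sum. A minimal sketch (names are ours):

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Degree-2 FM: w0 + w.x + sum_{i<j} <v_i, v_j> x_i x_j, computed as
    0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]."""
    xv = x @ V                                    # shape (k,)
    pairwise = 0.5 * np.sum(xv**2 - (x**2) @ (V**2))
    return w0 + w @ x + pairwise

rng = np.random.default_rng(0)
d, k = 20, 4
x, w, V = rng.normal(size=d), rng.normal(size=d), rng.normal(size=(d, k))
print(fm_score(x, 0.1, w, V))
```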
1704.05426
2
The task of natural language inference (NLI) is well positioned to serve as a benchmark task for research on NLU. In this task, also known as recognizing textual entailment (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Markert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), a model is presented with a pair of sentences—like one of those in Figure 1—and asked to judge the relationship between their meanings by picking a label from a small set: typically ENTAILMENT, NEUTRAL, and CONTRADICTION. Succeeding at NLI does not require a system to solve any difficult machine learning problems except, crucially, that of extracting effective and thorough representations for the meanings of sentences (i.e., their lexical and compositional semantics). In particular, a model must handle phenomena like lexical entailment, quantification, coreference, tense, belief, modality, and lexical and syntactic ambiguity.

# 1 Introduction
1704.05426#2
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
3
Let us first take a step back, and ask what a full end-to-end pipeline for question-answering would look like. A general question-answering system would be able to answer a question about any domain, based on world knowledge. This system would consist of three stages. A given question is read and reformulated in the first stage, followed by information retrieval via a search engine. An answer is then synthesized based on the query and a set of retrieved documents. We notice a gap between the existing closed-world question-answering data sets and this conceptual picture of a general question-answering system. The general question-answering system must deal with a noisy set of retrieved documents, which likely consist of many irrelevant documents as well as semantically and syntactically ill-formed documents. On the other hand, most of the existing closed-world question-answering datasets were constructed in a way that the context provided for each question is guaranteed relevant and well-written. This guarantee comes from the fact that each question-answer-context tuple was generated starting from the context from which the question and answer were extracted. In this paper, we build a new closed-world question-answering dataset that narrows this gap.
1704.05179#3
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
3
In this paper, we present a piece-wise linear model and its training algorithm for large scale data. We name it the Large Scale Piece-wise Linear Model (LS-PLM). LS-PLM follows the divide-and-conquer strategy: it first divides the feature space into several local regions, then fits a linear model in each region, producing output that combines the weighted linear predictions. Note that these two steps are learned simultaneously in a supervised manner, aiming to minimize the prediction loss. LS-PLM is superior for web-scale data mining in three aspects:

• Nonlinearity. With enough divided regions, LS-PLM can fit any complex nonlinear function.
• Scalability. Like the LR model, LS-PLM is scalable both to massive samples and to high-dimensional features. We design a distributed system which can train the model on hundreds of machines in parallel. In our online product systems, dozens of LS-PLM models with tens of millions of parameters are trained and deployed every day.
• Sparsity. As pointed out in (Brendan et al. 2013), model sparsity is a practical issue for online serving in an industrial setting. We show that LS-PLM with L1 and L2,1 regularizers can achieve good sparsity.
1704.05194#3
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
3
# 1 Introduction

Many of the most actively studied problems in NLP, including question answering, translation, and dialog, depend in large part on natural language understanding (NLU) for success. While there has been a great deal of work that uses representation learning techniques to pursue progress on these applied NLU problems directly, in order for a representation learning model to fully succeed at one of these problems, it must simultaneously succeed both at NLU and at one or more additional hard machine learning problems, like structured prediction or memory access. This makes it difficult to accurately judge the degree to

As the only large human-annotated corpus for NLI currently available, the Stanford NLI Corpus (SNLI; Bowman et al., 2015) has enabled a good deal of progress on NLU, serving as a major benchmark for machine learning work on sentence understanding and spurring work on core representation learning techniques for NLU, such as attention (Wang and Jiang, 2016; Parikh et al., 2016), memory (Munkhdalai and Yu, 2017), and the use of parse structure (Mou et al., 2016b; Bowman et al., 2016; Chen et al., 2017). However, SNLI falls short of providing a sufficient testing ground for machine learning models in two ways.
1704.05426#3
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
4
In this paper, we build a new closed-world question-answering dataset that narrows this gap. Unlike most of the existing work, we start by building a set of question-answer pairs from Jeopardy!. We augment each question-answer pair, which does not have any context attached to it, by querying Google with the question. This process enables us to retrieve a realistic set of relevant/irrelevant documents, or more specifically their snippets. We filter out those questions whose answers could not be found within the retrieved snippets and those with fewer than forty web pages returned by Google. We end up with 140k+ question-answer pairs, and in total 6.9M snippets.1 We evaluate this new dataset, to which we refer as SearchQA, with a variant of the recently proposed attention sum reader (Kadlec et al., 2016) and with human volunteers. The evaluation shows that the proposed SearchQA is a challenging task both for humans and machines, but there is still a significant gap between them. This suggests that the new dataset would be a valuable resource for further research and advance our ability to build a better automated question-answering system.
1704.05179#4
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
4
The learning of LS-PLM with a sparsity regularizer can be transformed into a non-convex and non-differentiable optimization problem, which is difficult to solve. We propose an efficient optimization method for such problems, based on directional derivatives and the quasi-Newton method. Due to their ability to capture nonlinear patterns and their scalability to massive data, LS-PLMs have become the main CTR prediction models in the online display advertising system at Alibaba, serving hundreds of millions of users since early 2012. It is also applied in recommendation systems, search engines and other product systems. The paper is structured as follows. In Section 2, we present the LS-PLM model in detail, including formulation, regularization and optimization issues. In Section 3, we introduce our parallel implementation structure. In Section 4, we evaluate the model carefully and demonstrate the advantage of LS-PLM compared with LR. Finally, in Section 5, we close with conclusions.

[Figure 1 preview: A) Training dataset; B) LR model; C) LS-PLM model]
1704.05194#4
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
4
| Premise | Genre | Gold label | Annotator labels | Hypothesis |
| --- | --- | --- | --- | --- |
| Met my first girlfriend that way. | FACE-TO-FACE | contradiction | C C N C | I didn't meet my first girlfriend until later. |
| 8 million in relief in the form of emergency housing. | GOVERNMENT | neutral | N N N N | The 8 million dollars for emergency housing was still not enough to solve the problem. |
| Now, as children tend their gardens, they have a new appreciation of their relationship to the land, their cultural heritage, and their community. | LETTERS | neutral | N N N N | All of the children love working in their gardens. |
| At 8:34, the Boston Center controller received a third transmission from American 11 | 9/11 | entailment | E E E E | The Boston Center controller got a third transmission from American 11. |
| I am a lacto-vegetarian. | SLATE | neutral | N N E N | I enjoy eating cheese too much to abstain from dairy. |
| someone else noticed it and i said well i guess that's true and it was somewhat melodious in other words it wasn't just you know it was really funny | TELEPHONE | contradiction | C C C C | No one noticed and it wasn't funny at all. |
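Per standard NLI validation practice, the gold label for each validated example is the plurality vote over the annotator labels (E/N/C) shown in the table. The helper below is a hedged illustration (its name and tie-handling are assumptions, not taken from the paper):

```python
from collections import Counter

def gold_label(votes):
    """Plurality vote over annotator labels; returns None when no label
    wins outright (such examples would receive no gold label)."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: no consensus
    return counts[0][0]

print(gold_label(["C", "C", "N", "C"]))  # -> 'C'
print(gold_label(["N", "N", "E", "N"]))  # -> 'N'
```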
1704.05426#4
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
5
# 2 SearchQA

Collection: A major goal of the new dataset is to build and provide to the public a machine comprehension dataset that better reflects a noisy information retrieval system. In order to achieve this goal, we need to introduce natural, realistic noise to the context of each question-answer pair. We use a production-level search engine, Google, for this purpose. We crawled the entire set of question-answer pairs from J! Archive2, which has archived all the question-answer pairs from the popular television show Jeopardy!. We used the question from each pair to query Google in order to retrieve a set of relevant web page snippets. The relevancy in this case was fully determined by an unknown, but in-production, algorithm underlying Google's search engine, making it much closer to a realistic scenario of question-answering.

Cleaning: Because we do not have any control over the internals of the Google search engine, we extensively cleaned up the entire set of question-answer-context tuples. First, we removed any snippet returned that included the air date of the Jeopardy! episode, the exact copy of the question,

1 The dataset can be found at https://github.com/nyu-dl/SearchQA.
2 http://j-archive.com
1704.05179#5
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
5
Figure 1: A demo illustration of the LS-PLM model. Figure A is the demo dataset. It is a binary classification problem, with red points belonging to the positive class and blue points to the negative class. Figure B shows the classification result using the LR model. Figure C shows the classification result using the LS-PLM model. It is clear that LS-PLM can capture the nonlinear distribution of the data.

# 2 Method

We focus on the large scale CTR prediction application. It is a binary classification problem, with dataset $\{x_t, y_t\}_{t=1}^{n}$.

# 2.1 Formulation

To model the nonlinearity of massive scale data, we employ a divide-and-conquer strategy, similar to (Jordan & Jacobs 1994). We divide the whole feature space into some local regions. For each region we employ an individual generalized linear classification model. In this way, we tackle the nonlinearity with a piece-wise linear model. We give our model as follows:

$$p(y|x) = g\Big(\sum_{j=1}^{m} \sigma(u_j^T x)\, \eta(w_j^T x)\Big) \quad (1)$$
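Eq.(1) maps directly to code: a dividing (gating) function σ over the m regions, a fitting function η per region, and an outer link g. The NumPy sketch below (a minimal illustration with assumed names and shapes, not the authors' implementation) keeps all three pluggable:

```python
import numpy as np

def lsplm_predict(x, U, W, sigma, eta, g):
    """Eq.(1): p(y|x) = g( sum_j sigma(u_j^T x) * eta(w_j^T x) ).

    x : (d,) input; U, W : (m, d) dividing / fitting parameters.
    sigma, eta : vectorized maps over the (m,) score vectors; g : scalar link.
    """
    divide = sigma(U @ x)   # (m,) region weights
    fit = eta(W @ x)        # (m,) per-region predictions
    return g(np.sum(divide * fit))

# Toy usage with the softmax/sigmoid choice discussed next
rng = np.random.default_rng(1)
m, d = 4, 6
U, W = rng.normal(size=(m, d)), rng.normal(size=(m, d))
softmax = lambda s: np.exp(s - s.max()) / np.exp(s - s.max()).sum()
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
print(lsplm_predict(rng.random(d), U, W, softmax, sigmoid, g=lambda z: z))
```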
1704.05194#5
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
5
Table 1: Randomly chosen examples from the development set of our new corpus, shown with their genre labels, their selected gold labels, and the validation labels (abbreviated E, N, C) assigned by individual annotators.

First, the sentences in SNLI are derived from only a single text genre, image captions, and are thus limited to descriptions of concrete visual scenes, rendering the hypothesis sentences used to describe these scenes short and simple, and rendering many important phenomena, like temporal reasoning (e.g., yesterday), belief (e.g., know), and modality (e.g., should), rare enough to be irrelevant to task performance. Second, because of these issues, SNLI is not sufficiently demanding to serve as an effective benchmark for NLU, with the best current model performance falling within a few percentage points of human accuracy and limited room left for fine-grained comparisons between strong models.
1704.05426#5
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
6
Figure 1: One example in .json format.

or a term "Jeopardy!", "quiz" or "trivia", to ensure that the answer could not be found trivially by a process of word/phrase matching. Furthermore, we manually checked any URL, from which these removed snippets were taken, that occurs more than 50 times, and removed any that explicitly contains Jeopardy! question-answer pairs.

Among the remaining question-answer-context tuples, we removed any tuple whose context did not include the answer. This was done mainly for computational efficiency in building a question-answering system using the proposed dataset. We kept only those tuples whose answers were three or fewer words long.
1704.05179#6
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
6
$$p(y|x) = g\Big(\sum_{j=1}^{m} \sigma(u_j^T x)\, \eta(w_j^T x)\Big) \quad (1)$$

Here $\Theta = \{u_1, \cdots, u_m, w_1, \cdots, w_m\} \in \mathbb{R}^{d \times 2m}$ denotes the model parameters. $\{u_1, \cdots, u_m\}$ are the parameters of the dividing function σ(·), and $\{w_1, \cdots, w_m\}$ of the fitting function η(·). Given an instance x, our prediction model p(y|x) consists of two parts: the first part $\sigma(u_j^T x)$ divides the feature space into m (a hyper-parameter) different regions, and the second part $\eta(w_j^T x)$ gives the prediction in each region. The function g(·) ensures that our model satisfies the definition of a probability function.

Special Case. Taking softmax (Kivinen & Warmuth 1998) as the dividing function σ(x), sigmoid (Hilbe 2009) as the fitting function η(x), and g(x) = x, we get a specific formulation:

$$p(y=1|x) = \sum_{i=1}^{m} \frac{\exp(u_i^T x)}{\sum_{j=1}^{m} \exp(u_j^T x)} \cdot \frac{1}{1+\exp(-w_i^T x)} \quad (2)$$
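Concretely, Eq.(2) is a softmax gate over the m regions multiplied by a per-region sigmoid. The sketch below is an illustrative rendering (assumed shapes; not production code); because the gate sums to one and each sigmoid lies in (0, 1), the result is a valid probability:

```python
import numpy as np

def lsplm_p1(x, U, W):
    """Eq.(2): p(y=1|x) = sum_i softmax_i(U @ x) * sigmoid(w_i^T x)."""
    scores = U @ x
    gate = np.exp(scores - scores.max())
    gate /= gate.sum()                        # softmax dividing function
    expert = 1.0 / (1.0 + np.exp(-(W @ x)))   # sigmoid fitting function
    return float(gate @ expert)

rng = np.random.default_rng(0)
U, W = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print(lsplm_p1(rng.random(8), U, W))  # a value in (0, 1)
```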
1704.05194#6
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
6
techniques have made it possible to train general-purpose feature extractors that, with no or minimal retraining, can extract useful features for a variety of styles of data (Krizhevsky et al., 2012; Zeiler and Fergus, 2014; Donahue et al., 2014). However, attempts to bring this kind of general purpose representation learning to NLU have seen only very limited success (see, for example, Mou et al., 2016a). Nearly all successful applications of representation learning to NLU have involved models that are trained on data that closely resembles the target evaluation data, both in task and style. This fact limits the usefulness of these tools for problems involving styles of language not represented in large annotated training sets. This paper introduces a new challenge dataset, the Multi-Genre NLI Corpus (MultiNLI), whose chief purpose is to remedy these limitations by making it possible to run large-scale NLI evaluations that capture more of the complexity of modern English. While its size (433k pairs) and mode of collection are modeled closely on SNLI, unlike that corpus, MultiNLI represents both written and spoken speech in a wide range of styles, degrees of formality, and topics.
1704.05426#6
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
7
Basic Statistics: After all these processes, we have ended up with 140,461 question-answer pairs. Each pair is coupled with a set of 49.6±2.10 snippets on average. Each snippet is 37.3±11.7 tokens long on average. Answers are on average 1.47±0.58 tokens long. There are 1,257,327 unique tokens.

Meta-Data: For each question-answer-context tuple we collected additional metadata, both from Jeopardy! and from Google. More specifically, from Jeopardy! we have the category, dollar value, show number and air date for each question. From Google, we have the URL, title and a set of related links (often none) for each snippet. Although we do not use them in this paper, these items are included in the public release of SearchQA and may be used in the future. An example of one question-answer pair with just one snippet is presented in Fig. 1.
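To make the described tuple concrete, the Python literal below sketches the shape of one SearchQA record; the field names are illustrative assumptions based on the description above, not the exact keys of the released .json files:

```python
# Illustrative shape of one SearchQA record; actual key names may differ.
example = {
    "question": "...",        # Jeopardy! clue text
    "answer": "...",          # short answer, ~1.47 tokens on average
    "snippets": [             # ~49.6 snippets per pair on average
        {"text": "...", "url": "...", "title": "...", "related_links": []},
    ],
    # Jeopardy! metadata
    "category": "...",
    "dollar_value": "...",
    "show_number": "...",
    "air_date": "...",
}
```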
1704.05179#7
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
7
$$p(y=1|x) = \sum_{i=1}^{m} \frac{\exp(u_i^T x)}{\sum_{j=1}^{m} \exp(u_j^T x)} \cdot \frac{1}{1+\exp(-w_i^T x)} \quad (2)$$

In this case, our mixture model can be seen as a mixture-of-experts (MoE) model (Jordan & Jacobs 1994, Wang & Puterman 1998), as follows:

$$p(y|x) = \sum_{i=1}^{m} p(z=i|x)\, p(y|z=i, x) \quad (3)$$

In the remainder of the paper, without special declaration, we take Eq.(2) as our prediction model. Figure 1 illustrates the model compared with LR on a demo dataset, which shows clearly that LS-PLM can capture the nonlinear pattern of the data. Eq.(2) is the formulation most commonly used in our real applications. The objective function of the LS-PLM model is formalized as Eq.(4):

$$\arg\min_{\Theta} f(\Theta) = \mathrm{loss}(\Theta) + \lambda \|\Theta\|_{2,1} + \beta \|\Theta\|_{1} \quad (4)$$

$$\mathrm{loss}(\Theta) = -\sum_{t=1}^{n} \Big[ y_t \log\big(p(y_t=1 \mid x_t, \Theta)\big) + (1 - y_t) \log\big(p(y_t=0 \mid x_t, \Theta)\big) \Big] \quad (5)$$
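Eq.(5) is the standard binary negative log-likelihood over the model probabilities; a minimal NumPy sketch (illustrative, with clipping added purely for numerical safety and not part of Eq.(5)):

```python
import numpy as np

def nll_loss(p1, y, eps=1e-12):
    """Eq.(5): -sum_t [ y_t log p(y_t=1|x_t) + (1-y_t) log p(y_t=0|x_t) ].

    p1 : per-example model probabilities p(y=1 | x_t, Theta)
    y  : binary labels in {0, 1}
    """
    p1 = np.clip(p1, eps, 1.0 - eps)  # numerical safety only
    return -np.sum(y * np.log(p1) + (1.0 - y) * np.log(1.0 - p1))

y = np.array([1.0, 0.0, 1.0])
p1 = np.array([0.9, 0.2, 0.6])
print(nll_loss(p1, y))  # -(log 0.9 + log 0.8 + log 0.6)
```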
1704.05194#7
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
7
Our chief motivation in creating this corpus is to provide a benchmark for ambitious machine learning research on the core problems of NLU, but we are additionally interested in constructing a corpus that facilitates work on domain adaptation and cross-domain transfer learning. In many application areas outside NLU, artificial neural network

With this in mind, we construct MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representations within the training domain and on their ability to derive reasonable representations in unfamiliar domains. The corpus is derived from ten different genres of written and spoken English, which are collectively meant to approximate the full diversity of ways in which modern standard American English is used. All of the genres appear in the test and development sets, but only five are included in the training set. Models thus can be evaluated on both the matched test examples, which are derived from the same sources as those in the training set, and on the mismatched examples, which do not closely resemble any of those seen at training time.
1704.05426#7
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
8
Training, Validation and Test Sets: In order to maximize its reusability and reproducibility, we provide a predefined split of the dataset into training, validation and test sets. One of the most important aspects of question-answering is whether a question-answering machine would generalize to unseen questions from the future. We thus ensure that these three sets consist of question-answer pairs from non-overlapping years, and that the validation and test question-answer pairs are from years later than the training set's pairs. The training, validation and test sets consist of 99,820, 13,393 and 27,248 examples, respectively. Among these, examples with unigram answers are respectively 55,648, 8,672 and 17,056.
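The year-disjoint split described above is mechanical to reproduce. The sketch below is a hypothetical helper (the cutoff years and the air_date format are assumptions, not taken from the paper):

```python
from collections import defaultdict

def split_by_year(records, train_until, valid_until):
    """Partition QA records into year-disjoint train/valid/test sets.

    Assumes each record carries its Jeopardy! air date as a string
    beginning with a 4-digit year, e.g. "2004-10-11".
    """
    splits = defaultdict(list)
    for r in records:
        year = int(r["air_date"][:4])
        if year <= train_until:
            splits["train"].append(r)
        elif year <= valid_until:
            splits["valid"].append(r)
        else:
            splits["test"].append(r)
    return splits

demo = [{"air_date": "1999-05-01"}, {"air_date": "2006-02-10"},
        {"air_date": "2010-11-30"}]
print({k: len(v) for k, v in split_by_year(demo, 2004, 2008).items()})
```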
1704.05179#8
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
8
Here loss(Θ) defined in Eq.(5) is the negative log-likelihood loss function, and $\|\Theta\|_{2,1}$ and $\|\Theta\|_{1}$ are two regularization terms providing different properties.

First, $L_{2,1}$ regularization ($\|\Theta\|_{2,1} = \sum_{i=1}^{d} \sqrt{\sum_{j=1}^{2m} \theta_{ij}^2}$) is employed for feature selection. As in our model each dimension of feature is associated with 2m parameters, $L_{2,1}$ regularization is expected to push all the 2m parameters of one feature dimension to zero, that is, to suppress those less important features.

Second, $L_1$ regularization ($\|\Theta\|_{1} = \sum_{i,j} |\theta_{ij}|$) is employed for sparsity. Besides the feature selection property, $L_1$ regularization can also force the parameters of the remaining features to zero as much as possible, which can improve the interpretability as well as the generalization performance of the model.

However, both the $L_1$ norm and the $L_{2,1}$ norm are non-smooth functions. This causes the objective function of Eq.(4) to be non-convex and non-smooth, making it difficult to employ traditional gradient-descent optimization methods (Andrew & Gao 2007, Zhang 2004, Bertsekas 2003) or the EM method (Wang & Puterman 1998).
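The two penalties act at different granularities: $\|\Theta\|_{2,1}$ groups the 2m parameters of each feature (one row of Θ) and can zero out an entire feature, while $\|\Theta\|_{1}$ acts entrywise. A minimal NumPy sketch of Eq.(4)'s penalty, assuming Θ is stored as a (d, 2m) matrix with one row per feature dimension:

```python
import numpy as np

def penalty(theta, lam, beta):
    """lam * ||Theta||_{2,1} + beta * ||Theta||_1 for Theta of shape (d, 2m)."""
    l21 = np.sqrt((theta ** 2).sum(axis=1)).sum()  # sum of row L2 norms
    l1 = np.abs(theta).sum()                       # entrywise absolute sum
    return lam * l21 + beta * l1

theta = np.array([[0.0, 0.0],    # a fully suppressed feature
                  [1.0, -2.0]])  # a surviving feature
print(penalty(theta, lam=0.1, beta=0.01))  # 0.1*sqrt(5) + 0.01*3
```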
1704.05194#8
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
8
This task will involve reading a line from a non-fiction article and writing three sentences that relate to it. The line will describe a situation or event. Using only this description and what you know about the world:

• Write one sentence that is definitely correct about the situation or event in the line.
• Write one sentence that might be correct about the situation or event in the line.
• Write one sentence that is definitely incorrect about the situation or event in the line.

Figure 1: The main text of a prompt (truncated) that was presented to our annotators. This version is used for the written non-fiction genres.

# 2 The Corpus

# 2.1 Data Collection

The data collection methodology for MultiNLI is similar to that of SNLI: we create each sentence pair by selecting a premise sentence from a preexisting text source and asking a human annotator to compose a novel sentence to pair with it as a hypothesis. This section discusses the sources of our premise sentences, our collection method for hypotheses, and our validation (relabeling) strategy.
1704.05426#8
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
9
# 3 Related Work

Open-World Question-Answering: An open-world question-answering dataset consists of a set of question-answer pairs and a knowledge database. It does not come with an explicit link between each question-answer pair and any specific entry in the knowledge database. A representative example of such a dataset is SimpleQA (Bordes et al., 2015). SimpleQA consists of 100k question-answer pairs, and uses Freebase as its knowledge database. The major limitation of this dataset is that all the questions are simple, in that all of them are in the form of (subject, relationship, ?).
1704.05179#9
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
9
Note that, while (Wang & Puterman 1998) gives the same mixture model formulation as Eq.(3), our model is more general in that it can employ different kinds of prediction functions. Besides, we propose a different objective function for large scale industry data, taking the feature sparsity into consideration explicitly. This is crucial for real-world applications, as prediction speed and memory usage are two key indicators for online model serving. Furthermore, we give a more efficient optimization method to solve the large-scale non-convex problem, which is described in the following section.

# 2.2 Optimization

Before we present our optimization method, we establish some notation and definitions that will be used in the remainder of the paper. Let

$$\partial_{ij}^{+} f(\Theta) = \lim_{\alpha \downarrow 0} \frac{f(\Theta + \alpha e_{ij}) - f(\Theta)}{\alpha} \quad (6)$$

where $e_{ij}$ is the $ij$-th standard basis vector. The directional derivative of f at Θ in direction d is denoted as f'(Θ; d), which is defined as:

$$f'(\Theta; d) = \lim_{\alpha \downarrow 0} \frac{f(\Theta + \alpha d) - f(\Theta)}{\alpha} \quad (7)$$
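Because Eqs.(6)-(7) are one-sided limits, they can be sanity-checked numerically with a small α. The sketch below is illustrative; the stand-in f uses the L1 norm, which, like the penalties in Eq.(4), is non-smooth at zero:

```python
import numpy as np

def directional_derivative(f, theta, d, alpha=1e-6):
    """One-sided finite-difference approximation of Eq.(7):
    f'(theta; d) ~ (f(theta + alpha*d) - f(theta)) / alpha as alpha -> 0+."""
    return (f(theta + alpha * d) - f(theta)) / alpha

f = lambda t: np.abs(t).sum()  # non-smooth at 0, like the L1 term
theta = np.array([0.0, 1.0])
print(directional_derivative(f, theta, np.array([1.0, 0.0])))   # ~ +1
print(directional_derivative(f, theta, np.array([-1.0, 0.0])))  # ~ +1
# Both one-sided derivatives are positive, so no gradient exists at 0,
# yet f'(theta; d) is well defined for every direction d.
```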
1704.05194#9
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
9
Premise Text Sources: The MultiNLI premise sentences are derived from ten sources of freely available text which are meant to be maximally diverse and roughly represent the full range of American English. We selected nine sources from the second release of the Open American National Corpus (OANC; Fillmore et al., 1998; Macleod et al., 2000; Ide and Macleod, 2001; Ide and Suderman, 2006; downloaded 12/2016)1, balancing the volume of source text roughly evenly across genres, and avoiding genres with content that would be too difficult for untrained annotators. OANC data constitutes the following nine genres: transcriptions from the Charlotte Narrative and Conversation Collection of two-sided, in-person conversations that took place in the early 2000s (FACE-TO-FACE); reports, speeches, letters, and press releases from public domain government websites (GOVERNMENT); letters from the Indiana Center for Intercultural Communication of Philanthropic Fundraising Discourse written in the late 1990s–early 2000s (LETTERS); the public re-

1 http://www.anc.org/
1704.05426#9
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
10
Closed-World Question-Answering: Although we use open-world snippets, the final SearchQA is a closed-world question-answering dataset, since each question can be answered entirely based on the associated snippets. One family of such datasets includes the Children's Book dataset (Hill et al., 2015), CNN and DailyMail (Hermann et al., 2015). Each question-answer-context tuple in these datasets was constructed by first selecting the context article and then creating a question-answer pair, where the question is a sentence with a missing word and the answer is the missing word. This family differs from SearchQA in two aspects. First, in SearchQA we start from a question-answer pair, and, second, our question is not necessarily of a fill-in-a-word type.
1704.05179#10
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
10
$$f'(\Theta; d) = \lim_{\alpha \downarrow 0} \frac{f(\Theta + \alpha d) - f(\Theta)}{\alpha} \quad (7)$$

A vector d is regarded as a descent direction if f'(Θ; d) < 0. sign(·) is the sign function, taking values in {−1, 0, 1}. The projection function

$$\pi_{ij}(\Theta; \Omega) = \begin{cases} \Theta_{ij}, & \mathrm{sign}(\Theta_{ij}) = \mathrm{sign}(\Omega_{ij}) \\ 0, & \text{otherwise} \end{cases} \quad (8)$$

means projecting Θ onto the orthant defined by Ω.

# 2.2.1 Choose descent direction

As discussed above, our objective function for the large scale CTR prediction problem is both non-convex and non-smooth. Here we propose a general and efficient optimization method to solve this kind of non-convex problem. Since the negative gradient of our objective function does not exist for all Θ, we instead take the direction d which minimizes the directional derivative of f at Θ. The directional derivative f'(Θ; d) exists for any Θ and direction d, which is stated as Lemma 1.
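Eq.(8) keeps an entry of Θ only when its sign agrees with the corresponding entry of Ω; elementwise, a minimal NumPy sketch reads:

```python
import numpy as np

def orthant_project(theta, omega):
    """Eq.(8): theta_ij where sign(theta_ij) == sign(omega_ij), else 0."""
    return np.where(np.sign(theta) == np.sign(omega), theta, 0.0)

theta = np.array([1.5, -0.3, 0.7])
omega = np.array([1.0,  1.0, -1.0])
print(orthant_project(theta, omega))  # [1.5  0.   0. ]
```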
1704.05194#10
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines parallel and provides us with the industrial scalability. LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
10
port from the National Commission on Terrorist Attacks Upon the United States released on July 22, 2004 (9/11)2; five non-fiction works on the textile industry and child development published by the Oxford University Press (OUP); popular culture articles from the archives of Slate Magazine (SLATE) written between 1996–2000; transcriptions from the University of Pennsylvania's Linguistic Data Consortium Switchboard corpus of two-sided, telephone conversations that took place in 1990 or 1991 (TELEPHONE); travel guides published by Berlitz Publishing in the early 2000s (TRAVEL); and short posts about linguistics for non-specialists from the Verbatim archives written between 1990 and 1996 (VERBATIM).

For our tenth genre, FICTION, we compile several freely available works of contemporary fiction written between 1912 and 2010, spanning genres including mystery, humor, western, science fiction, and fantasy, by authors Isaac Asimov, Agatha Christie, Ben Essex (Elliott Gesswell), Nick Name (Piotr Kowalczyk), Andre Norton, Lester del Ray, and Mike Shea.
1704.05426#10
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
11
Another family is an extension of the former. This family includes SQuAD (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2016). Unlike the first family, answers in this family are often multi-word phrases, and they do not necessarily appear as they are in the corresponding context. In contrast, in SearchQA we ensure that all multi-word phrase answers appear in their corresponding context. Answers, and often questions as well, are thus crowd-sourced in this family of datasets. Each tuple in these datasets was, however, also constructed starting from a corresponding context article, making them less realistic than the proposed SearchQA.

| Answer | Unigram | n-gram |
| --- | --- | --- |
| Per-question Average | 66.97% | 42.86% |
| Per-user Average | 64.85% | 43.85% |
| Per-user Std. Dev. | 10.43% | 8.16% |
| F1 score (for n-gram) | n/a | 57.62% |

Table 1: The accuracies achieved by the volunteers.
1704.05179#11
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
11
Lemma 1. When an objective function $f(\Theta)$ is composed of a smooth loss function with $L_1$ and $L_{2,1}$ norms, for example the objective function given in Eq. (4), the directional derivative $f'(\Theta; d)$ exists for any $\Theta$ and direction $d$. We leave the proof to the Appendix. Since the directional derivative $f'(\Theta; d)$ always exists, we choose as the descent direction the direction which minimizes $f'(\Theta; d)$ when the negative gradient of $f(\Theta)$ does not exist. The following proposition [2] explicitly gives this direction.

Proposition 2. Given a smooth loss function $\mathrm{loss}(\Theta)$ and an objective function $f(\Theta) = \mathrm{loss}(\Theta) + \lambda \|\Theta\|_{2,1} + \beta \|\Theta\|_1$, the bounded direction $d$ which minimizes the directional derivative $f'(\Theta; d)$ is given by:

$$d_{ij} = \begin{cases} s - \beta\,\mathrm{sign}(\Theta_{ij}), & \Theta_{ij} \neq 0 \\ \max\{|s| - \beta,\, 0\}\,\mathrm{sign}(s), & \Theta_{ij} = 0,\ \|\Theta_{i\cdot}\|_{2,1} \neq 0 \\ \dfrac{\max\{\|v_{i\cdot}\|_{2,1} - \lambda,\, 0\}}{\|v_{i\cdot}\|_{2,1}}\, v_{ij}, & \|\Theta_{i\cdot}\|_{2,1} = 0 \end{cases} \qquad (9)$$
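To make the case analysis in Eq. (9) concrete, here is a minimal NumPy sketch; the function name, the (m, d) row-grouped parameter layout, and the precomputed smooth-loss gradient are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def descent_direction(theta, grad_loss, lam, beta):
    """Sketch of the direction in Eq. (9).

    theta:     (m, d) parameters; rows are the groups in the L_{2,1} norm
    grad_loss: (m, d) gradient of the smooth loss at theta
    lam, beta: L_{2,1} and L_1 regularization weights
    """
    row_norms = np.linalg.norm(theta, axis=1, keepdims=True)   # ||Theta_i.||
    nonzero_row = row_norms[:, 0] > 0
    d = np.zeros_like(theta)

    # v: soft-threshold the negative gradient entrywise by beta
    v = np.maximum(np.abs(-grad_loss) - beta, 0.0) * np.sign(-grad_loss)

    # Case 3: the whole row is zero -- shrink the row norm of v by lam
    v_norms = np.linalg.norm(v, axis=1, keepdims=True)
    scale = np.where(v_norms > 0,
                     np.maximum(v_norms - lam, 0.0) / np.maximum(v_norms, 1e-12),
                     0.0)
    d[~nonzero_row] = (scale * v)[~nonzero_row]

    # Cases 1 and 2: the row norm is nonzero
    s = -grad_loss - lam * theta / np.maximum(row_norms, 1e-12)
    case1 = nonzero_row[:, None] & (theta != 0)
    case2 = nonzero_row[:, None] & (theta == 0)
    d[case1] = (s - beta * np.sign(theta))[case1]
    d[case2] = (np.maximum(np.abs(s) - beta, 0.0) * np.sign(s))[case2]
    return d
```

The small guards (1e-12) only prevent division by zero; the three masks mirror the three branches of the proposition.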
1704.05194#11
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
11
We construct premise sentences from these ten source texts with minimal preprocessing; we deduplicate the sentences within genres, exclude very short sentences (under eight characters), and manually remove certain types of non-narrative writing, such as mathematical formulae, bibliographic references, and lists. Although SNLI is collected in largely the same way as MultiNLI, and is also permissively licensed, we do not include SNLI in the MultiNLI corpus distribution. SNLI can be appended and treated as an unusually large additional CAPTIONS genre, built on image captions from the Flickr30k corpus (Young et al., 2014).

Hypothesis Collection. To collect a sentence pair, we present a crowdworker with a sentence from a source text and ask them to compose three novel sentences (the hypotheses): one which is necessarily true or appropriate whenever the premise is true (paired with the premise and labeled ENTAILMENT), one which is necessarily false or inappropriate whenever the premise is true (CONTRADICTION), and one where neither condition applies (NEUTRAL). This method of data collection ensures that the three classes will be represented equally in the raw corpus.

2: https://9-11commission.gov/
1704.05426#11
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
12
MS MARCO (Nguyen et al., 2016), the most recently released dataset to our knowledge, is perhaps most similar to the proposed SearchQA. Nguyen et al. (2016) selected a subset of actual user-generated queries to Microsoft Bing that correspond to questions. These questions are augmented with a manually selected subset of snippets returned by Bing. The question is then answered by a human. Two major differences between MS MARCO and SearchQA are the choice of questions and the search engine. We believe a comparison between MS MARCO and the proposed SearchQA would be valuable for expanding our understanding of how the choice of search engine as well as the types of questions impact question-answering systems in the future.

# 4 Experiments and Results

As a part of our release of SearchQA, we provide a set of baseline performances against which other researchers may compare their future approaches. Unlike most of the previous datasets, SearchQA augments each question-answer pair with a noisy, real context retrieved from the largest search engine in the world. This implies that the human performance is not necessarily the upper bound, but we nevertheless provide it as a guideline.

# 4.1 Human Evaluation
1704.05179#12
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
12
where $s = -\nabla \mathrm{loss}(\Theta)_{ij} - \lambda\, \Theta_{ij} / \|\Theta_{i\cdot}\|_{2,1}$ and $v_{ij} = \max\{|-\nabla \mathrm{loss}(\Theta)_{ij}| - \beta,\, 0\}\, \mathrm{sign}(-\nabla \mathrm{loss}(\Theta)_{ij})$. More details about the proof can be found in Appendix B. According to the proof, we can see that the negative pseudo-gradient defined in Gao's work (Andrew & Gao 2007) is a special case of our descent direction. Our proposed method is more general, finding the descent direction for non-smooth and non-convex objective functions. Based on the direction $d^{(k)}$ in Eq. (9), we update the model parameters along a descent direction calculated by a limited-memory quasi-Newton method (LBFGS) (Wang & Puterman 1998), which approximates the inverse Hessian matrix of Eq. (4) on the given orthant. Motivated by the OWLQN method (Andrew &
1704.05194#12
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
12
Table 2: Key validation statistics for SNLI (copied from Bowman et al., 2015) and MultiNLI.

Statistic                            SNLI     MultiNLI
Pairs w/ unanimous gold label        58.3%    58.2%
Individual label = gold label        89.0%    88.7%
Individual label = author's label    85.8%    85.2%
Gold label = author's label          91.2%    92.6%
Gold label ≠ author's label           6.8%     5.6%
No gold label (no 3 labels match)     2.0%     1.8%
1704.05426#12
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
13
# 4.1 Human Evaluation

We designed a web interface that displays a query and retrieved snippets and lets a user select an answer by clicking words on the screen. A user is given up to 40 minutes to answer as many questions as possible. We randomly select question-answer-context pairs from the test set. We recruited thirteen volunteers from the master's program in the Center for Data Science at NYU. They were uniform-randomly split into two groups. The first group was presented with questions that have single-word (unigram) answers only, and the other group with questions that have either single-word or multi-word (n-gram) answers. On average, each participant answered 47.23 questions with a standard deviation of 30.42.
1704.05179#13
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
13
Algorithm 1: Optimize problem Eq. (4)
Input: initial point $\Theta^{(0)}$; $S \leftarrow \{\}$, $Y \leftarrow \{\}$
for $k = 0$ to MaxIters do
  1. Compute $d^{(k)}$ with Eq. (9).
  2. Compute $p_k$ with Eq. (11) using $S$ and $Y$.
  3. Find $\Theta^{(k+1)}$ with the constrained line search (12).
  4. If the termination condition is satisfied, stop and return $\Theta^{(k+1)}$.
  5. Update $S$ with $s^{(k)} = \Theta^{(k)} - \Theta^{(k-1)}$.
  6. Update $Y$ with $y^{(k)} = -d^{(k)} - (-d^{(k-1)})$.
end for

Gao 2007), we also restrict the signs of the model parameters from changing in each iteration. Given the chosen direction $d^{(k)}$ and the old $\Theta^{(k)}$, we constrain the orthant of the current iteration as follows:

$$\xi_{ij}^{(k)} = \begin{cases} \mathrm{sign}(\Theta_{ij}^{(k)}), & \Theta_{ij}^{(k)} \neq 0 \\ \mathrm{sign}(d_{ij}^{(k)}), & \Theta_{ij}^{(k)} = 0 \end{cases} \qquad (10)$$

When $\Theta_{ij}^{(k)} \neq 0$, the new $\Theta_{ij}$ will not change sign in the current iteration. When $\Theta_{ij}^{(k)} = 0$, we choose the orthant decided by the selected direction $d_{ij}^{(k)}$ for the new $\Theta_{ij}^{(k+1)}$.
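As a small illustration of Eq. (10) and the projection $\pi$ used throughout, a NumPy sketch follows; the names are illustrative, not the paper's code.

```python
import numpy as np

def choose_orthant(theta_k, d_k):
    # Eq. (10): keep the sign of nonzero parameters; where a parameter is
    # zero, the sign of the descent direction decides the orthant.
    xi = np.sign(theta_k)
    zero = (theta_k == 0)
    xi[zero] = np.sign(d_k)[zero]
    return xi

def project(theta, xi):
    # pi(theta; xi): zero out any entry that has left the chosen orthant.
    return np.where(np.sign(theta) == xi, theta, 0.0)
```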
1704.05194#13
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
13
Table 2: Key validation statistics for SNLI (copied from Bowman et al., 2015) and MultiNLI.

The prompts that surround each premise sentence during hypothesis collection are slightly tailored to fit the genre of that premise sentence. We pilot these prompts prior to data collection to ensure that the instructions are clear and that they yield hypothesis sentences that fit the intended meanings of the three classes. There are five unique prompts in total: one for written non-fiction genres (SLATE, OUP, GOVERNMENT, VERBATIM, TRAVEL; Figure 1), one for spoken genres (TELEPHONE, FACE-TO-FACE), one for each of the less formal written genres (FICTION, LETTERS), and a specialized one for 9/11, tailored to fit its potentially emotional content. Each prompt is accompanied by example premises and hypotheses that are specific to each genre.
1704.05426#13
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
14
We report the average and standard deviation of the accuracy achieved by the volunteers in Table 1. We notice a significant gap between the accuracies of the first and second groups, suggesting that the difficulty of question-answering grows as the length of the answer increases. Also, according to the F1 scores, we observe a large gap between the ASR and humans. This suggests the potential of the proposed SearchQA as a benchmark for advancing question-answering research. Overall, we found the performance of the human volunteers much lower than expected and suspect the following underlying reasons. First, snippets are noisy, as they are often excerpts, not full sentences. Second, human volunteers may have become exhausted over the trial. We leave a more detailed analysis of the performance of human subjects on the proposed SearchQA for the future.

# 4.2 Machine Baselines
1704.05179#14
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
14
# 2.2.2 Update direction constraint and line search

Given the descent direction $d^{(k)}$, we approximate the inverse Hessian matrix $H_k$ using the LBFGS method with the sequences $y^{(k)}$, $s^{(k)}$. The final update direction is then $H_k d^{(k)}$. Here we apply two adjustments to this direction. First, we constrain the update direction to the orthant of $d^{(k)}$. Second, as our objective function is non-convex, we cannot guarantee that $H_k$ is positive-definite. We use $y^{(k)T} s^{(k)} > 0$ as a condition to ensure that $H_k$ is a positive-definite matrix; if $y^{(k)T} s^{(k)} \leq 0$, we switch to $d^{(k)}$ as the update direction. The final update direction $p_k$ is defined as follows:

$$p_k = \begin{cases} \pi(H_k d^{(k)};\ d^{(k)}), & y^{(k)T} s^{(k)} > 0 \\ d^{(k)}, & \text{otherwise} \end{cases} \qquad (11)$$

Given the update direction, we use a backtracking line search to find the proper step size $\alpha$. As in OWLQN, we project the new $\Theta^{(k+1)}$ onto the orthant decided by Eq. (10).
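The following sketch, under the same NumPy assumptions as the earlier ones, shows the positive-definiteness guard of Eq. (11) and a projected backtracking search in the spirit of Eq. (12). Here `apply_H` stands in for the LBFGS two-loop recursion, and the simple-decrease acceptance test is a simplification of a proper sufficient-decrease condition.

```python
import numpy as np

def update_direction(apply_H, d_k, y_k, s_k):
    # Eq. (11): trust the LBFGS direction only when y^T s > 0, which keeps
    # the inverse-Hessian approximation positive-definite; otherwise fall
    # back to the steepest descent direction d_k.
    if float(y_k.ravel() @ s_k.ravel()) > 0:
        p = apply_H(d_k)
        # pi(H d; d): drop components whose sign disagrees with d_k
        return np.where(np.sign(p) == np.sign(d_k), p, 0.0)
    return d_k

def line_search(f, theta_k, p_k, xi_k, alpha=1.0, shrink=0.5, max_steps=30):
    # Backtracking search for Eq. (12); every trial point is projected onto
    # the orthant xi_k before evaluating the objective, as in OWLQN.
    f0 = f(theta_k)
    for _ in range(max_steps):
        step = theta_k + alpha * p_k
        trial = np.where(np.sign(step) == xi_k, step, 0.0)
        if f(trial) < f0:
            return trial
        alpha *= shrink
    return theta_k  # no acceptable step found
```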
1704.05194#14
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
14
Below the instructions, we present three text fields, one for each label, followed by a field for reporting issues and a link to the frequently asked questions (FAQ) page. We provide one FAQ page per prompt. FAQs are modeled on their SNLI counterparts (supplied by the authors of that work) and include additional curated examples, answers to genre-specific questions arising from our pilot phase, and information about logistical concerns like payment. For both hypothesis collection and validation, we present prompts to annotators using Hybrid (gethybrid.io), a crowdsourcing platform similar to the Amazon Mechanical Turk platform used for SNLI. We used this platform to hire an organized group of workers. 387 annotators contributed through this group, and at no point was any identifying information about them, including demographic information, available to the authors.
1704.05426#14
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
15
# 4.2 Machine Baselines

TF-IDF Max. An interesting property of the proposed SearchQA is that the context of each question-answer pair was retrieved by Google with the question as a query. This implies that the information about the question itself may be implicitly embedded in the snippets. We therefore test a naive strategy (TF-IDF Max) of selecting the word with the highest TF-IDF score in the context as an answer. Note that this can only be used for the questions with a unigram answer.
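One possible reading of this baseline, sketched in Python; the paper does not specify its exact TF-IDF variant or tokenizer, so both are assumptions here.

```python
import math
import re
from collections import Counter

def tfidf_max_answer(snippets):
    """Pick the context word with the highest TF-IDF score as the unigram
    answer. Each snippet is treated as one document for the IDF statistics,
    and tokenization is deliberately naive."""
    docs = [re.findall(r"[a-z0-9]+", s.lower()) for s in snippets]
    tf = Counter(w for d in docs for w in d)            # term frequency
    df = Counter(w for d in docs for w in set(d))       # document frequency
    n = len(docs)
    return max(tf, key=lambda w: tf[w] * math.log(n / df[w]))
```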
1704.05179#15
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
15
$$\Theta^{(k+1)} = \pi(\Theta^{(k)} + \alpha p_k;\ \xi^{(k)}) \qquad (12)$$

# 2.3 Algorithm

A pseudo-code description of the optimization is given in Algorithm 1. In fact, only a few steps of the standard LBFGS algorithm need to change. These modifications are:

1. The direction $d^{(k)}$ which minimizes the directional derivative of the non-convex objective is used in place of the negative gradient.
2. The update direction is constrained to the orthant defined by the chosen direction $d^{(k)}$. We switch to $d^{(k)}$ when $H_k$ is not positive-definite.
3. During the line search, each search point is projected onto the orthant of the previous point.

# 3 Implementation

In this section, we first describe our parallel implementation of the LS-PLM model for large-scale data, then introduce an important trick which greatly accelerates the training procedure.
1704.05194#15
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
15
Validation. We perform an additional round of annotation on test and development examples to ensure accurate labelling. The validation phase follows the same procedure used for SICK (Marelli et al., 2014b) and SNLI: workers are presented with pairs of sentences and asked to supply a single label (ENTAILMENT, CONTRADICTION, NEUTRAL) for the pair. Each pair is relabeled by four workers, yielding a total of five labels per example. Validation instructions are tailored by genre, based on the main data collection prompt (Figure 1); a single FAQ, modeled after the validation FAQ from SNLI, is provided for reference. In order to encourage thoughtful labeling, we manually label one percent of the validation examples and offer a $1 bonus each time a worker selects a label that matches ours.
1704.05426#15
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
16
Attention Sum Reader. The attention sum reader (ASR; Kadlec et al., 2016) is a variant of a pointer network (Vinyals et al., 2015) that was specifically constructed to solve a cloze-style question-answering task. ASR consists of two encoding recurrent networks. The first network encodes a given context $c$, which is the concatenation of all the snippets in the case of SearchQA, into a set of hidden vectors $\{h_t^c\}$, and the second network encodes a question $q$ into a single vector $h^q$. The dot product between each hidden vector from the context and the question vector is exponentiated to form word scores $\beta_t = \exp(h^{q\top} h_t^c)$. ASR then pools these word scores by summing the scores of the same word, resulting in a set of unique-word scores $\beta'_i = \sum_{t \in D_i} \beta_t$, where $D_i$ indicates the positions where word $i$ appears in the context. These unique-word scores are normalized, and we obtain an answer distribution $p(i \mid c, q) = \beta'_i / \sum_j \beta'_j$. The ASR is trained to maximize the (log-)probability of the correct answer word in the context.
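The pooling step is compact enough to state in code. Below is a minimal PyTorch sketch of the attention-sum aggregation; the tensor shapes and function name are assumptions for illustration. Note that summing softmax probabilities of repeated words is exactly the normalized sum of the exponentiated scores $\beta'_i$ above, and it is numerically safer than exponentiating directly.

```python
import torch

def attention_sum(context_h, question_h, context_ids, vocab_size):
    # context_h:   (T, d) context token encodings h^c_t
    # question_h:  (d,)   question encoding h^q
    # context_ids: (T,)   long tensor with the vocabulary id of each token
    probs = torch.softmax(context_h @ question_h, dim=0)   # normalized exp scores
    # pool by summing the probabilities of repeated occurrences of a word
    return torch.zeros(vocab_size).index_add_(0, context_ids, probs)
```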
1704.05179#16
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
16
Figure 2: The architecture of the parallel implementation. Figure A illustrates the physical distributed topology. It is a variant of the parameter server, where each computation node runs both a server and a worker, aiming to maximize the utilization of computation power as well as memory. Figure B illustrates the parameter server structure in a model-parallel and data-parallel manner.

# 3.1 Parallel implementation

To apply Algorithm 1 in large-scale settings, we implement it with a distributed learning framework, as illustrated in Figure 2. It is a variant of the parameter server. In our implementation, each computation node runs both a server node and a worker node, aiming to:

• Maximize the utilization of CPU computation power. In the traditional parameter server setting, server nodes act as a distributed key-value store with push and pull interfaces, which are computationally cheap. Co-locating worker nodes makes full use of the computation power.

• Maximize the utilization of memory. Machines today usually have large memory, for example 128 GB. Running on the same computation node, the server node and worker node can share and utilize this memory better.
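As a toy illustration of the data-parallel and model-parallel split (not the actual system, whose API is not published), the following simulates one synchronous iteration with a stand-in logistic loss: each "worker" computes on its own data shard, and each "server" reduces only the slice of entries it owns.

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 8
theta = rng.normal(size=dim)
shards = [rng.normal(size=(100, dim)) for _ in range(4)]   # 4 workers' data
labels = [rng.integers(0, 2, size=100) for _ in range(4)]
server_slices = [slice(0, 4), slice(4, 8)]                 # 2 servers

def local_loss_and_grad(theta, X, y):
    # stand-in smooth loss (logistic) computed on one worker's shard
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# data parallelism: every worker computes on its own shard
partials = [local_loss_and_grad(theta, X, y) for X, y in zip(shards, labels)]
total_loss = sum(l for l, _ in partials)

# model parallelism: each server reduces only the entries it owns
agg_grad = np.zeros(dim)
for sl in server_slices:
    agg_grad[sl] = np.sum([g[sl] for _, g in partials], axis=0)
```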
1704.05194#16
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
16
For each validated sentence pair, we assign a gold label representing a majority vote between the initial label assigned to the pair by the original annotator and the four additional labels assigned by validation annotators. A small number of examples did not receive a three-vote consensus on any one label. These examples are included in the distributed corpus, but are marked with '-' in the gold label field, and should not be used in standard evaluations. Table 2 shows summary statistics capturing the results of validation, alongside corresponding figures for SNLI. These statistics indicate that the labels included in MultiNLI are about as reliable as those included in SNLI, despite MultiNLI's more diverse text contents.
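The gold-label rule is simple enough to state in code; a minimal sketch, with the label strings as placeholders:

```python
from collections import Counter

def gold_label(labels):
    """Majority vote over the five labels (the original annotator's plus
    four validation labels); '-' marks pairs with no three-vote consensus."""
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes >= 3 else '-'

# gold_label(['entailment'] * 3 + ['neutral', 'contradiction'])    -> 'entailment'
# gold_label(['entailment', 'entailment', 'neutral', 'neutral', 'contradiction']) -> '-'
```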
1704.05426#16
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
17
Table 2: The accuracies on the validation and test sets using the non-trainable baseline (TF-IDF Max) and the trainable baseline (ASR). We report top-1/5 accuracies for unigram answers, and otherwise, F1 scores.

                 TF-IDF Max        ASR
                 Valid   Test      Valid   Test
Unigram Acc      13.0    12.7      43.9    41.3
Unigram Acc@5    49.3    49.0      67.3    65.1
n-gram  F1        –       –        24.2    22.8

This vanilla ASR only works with a unigram answer and is not suitable for an n-gram answer. We avoid this issue by introducing another recurrent network which encodes the previous answer words $(a_1, \ldots, a_{t-1})$ into a vector $h^a$. This vector is added to the question vector, i.e., $h^q \leftarrow h^q + h^a$. During training, we use the correct previous answer words, while at test time we let the model, called n-gram ASR, predict one answer word at a time until it predicts ⟨answer⟩. This special token, appended to the context, indicates the end of the answer.
1704.05179#17
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
17
In brief, there are two roles in the framework. The first role is the worker node. Each worker stores a part of the training data and a local model, which keeps only the model parameters used by its local training data. The second role is the server node. Each server stores a mutually exclusive part of the global model. In each iteration, all of the worker nodes first calculate the loss and the descent direction with the local model and local data in parallel (data parallelism). Then the server nodes aggregate the loss and the direction $d^{(k)}$, as well as the corresponding entries of $\Theta$ needed to calculate the revised gradient (model parallelism). After finishing calculating the steepest descent direction in Step 1, workers synchronize the corresponding entries of $\Theta$ and then perform Steps 2–6 locally.

# 3.2 Common Feature Trick

Figure 3: Common feature pattern in display advertising. Usually in each page view, a user will see several different ads at the same time. In this situation, user features can be shared across these samples.
1704.05194#17
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
17
# 2.2 The Resulting Corpus

Table 1 shows randomly chosen development set examples from the collected corpus. Hypotheses tend to be fluent and correctly spelled, though not all are complete sentences. Punctuation is often omitted. Hypotheses can rely heavily on knowledge about the world, and often don't correspond closely with their premises in syntactic structure. Unlabeled test data is available on Kaggle for both matched and mismatched sets as competitions that will be open indefinitely; evaluations on a subset of the test set have previously been conducted with different leaderboards through the RepEval 2017 Workshop (Nangia et al., 2017). The corpus is available in two formats: tab separated text and JSON Lines (jsonl), following SNLI. For each example, premise and hypothesis strings, unique identifiers for the pair and prompt, and the following additional fields are specified:

• gold label: label used for classification. In examples rejected during the validation process, the value of this field will be '-'.
1704.05426#17
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
18
We try the vanilla and n-gram ASR on the unigram-answer-only subset and on the whole set, respectively. We use recurrent networks with 100 gated recurrent units (GRU, Cho et al., 2014) for both the unigram and n-gram models. We use Adam (Kingma and Ba, 2014) and dropout (Srivastava et al., 2014) for training.

Result. We report the results in Table 2. We see that the attention sum reader is below human evaluation, albeit by a rather small margin. Also, TF-IDF Max scores are not on par with ASR, which is perhaps not surprising. Given the unstructured nature of SearchQA, we believe improvements on the benchmarks presented are crucial for developing a real-world Q&A system.

# 5 Conclusion
1704.05179#18
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
18
In addition to the general-purpose parallel implementation, we also optimized the implementation for the online advertising context. Training samples in CTR prediction tasks usually share a common feature pattern. Take display advertising as an example: as illustrated in Figure 3, during each page view a user will see several different ads at the same time. For example, user U1 in Figure 3 sees three ads in one visit session, thus generating three samples. In this situation, the features of user U1 can be shared across these three samples. These features include user profiles (sex, age, etc.) and user behavior histories during visits to Alibaba's e-commerce websites, for example his/her shopping item IDs, preferred brands, or favorite shop IDs. By employing the common feature trick, we can split each inner product into common and non-common parts and rewrite it as follows (a code sketch follows):

$$w_i^T x = w_{i,c}^T x_c + w_{i,nc}^T x_{nc}, \qquad \mu_i^T x = \mu_{i,c}^T x_c + \mu_{i,nc}^T x_{nc} \qquad (13)$$

Hence, for the common feature part, we need to calculate it only once and then index the result, which will be reused by the following samples. Specifically, we optimize the parallel implementation with the common feature trick in the following three aspects:

• Group training samples with common features and make sure these samples are stored in the same worker.
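A minimal sketch of the trick, assuming the feature vector is laid out with the common (user) block first; `W` and `U` stand for the matrices of $w_i$ and $\mu_i$ rows, and all names are illustrative.

```python
import numpy as np

def session_scores(W, U, x_common, x_noncommon_list):
    """Eq. (13): products with the shared (user) block of the feature vector
    are computed once per page view and reused for every ad impression."""
    d_c = x_common.shape[0]
    Wc, Wnc = W[:, :d_c], W[:, d_c:]
    Uc, Unc = U[:, :d_c], U[:, d_c:]
    w_common = Wc @ x_common        # computed once, indexed by later samples
    u_common = Uc @ x_common
    return [(w_common + Wnc @ x_nc, u_common + Unc @ x_nc)
            for x_nc in x_noncommon_list]
```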
1704.05194#18
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05179
19
# 5 Conclusion

We constructed a new dataset for question-answering research, called SearchQA. It was built using an in-production, commercial search engine. It closely reflects the full pipeline of a (hypothetical) general question-answering system, which consists of information retrieval and answer synthesis. We conducted human evaluation as well as machine evaluation. Using the latest technique, ASR, we show that there is a meaningful gap between humans and machines, which suggests the potential of SearchQA as a benchmark task for question-answering research. We release SearchQA publicly, including our own implementation of ASR and n-gram ASR in PyTorch.3

# Acknowledgments

KC thanks support by Google, NVIDIA, eBay and Facebook. MD conducted this work as a part of DS-GA 1010: Independent Study in Data Science at the Center for Data Science, New York University.

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
1704.05179#19
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
19
• Group training samples with common features and make sure these samples are stored in the same worker.
• Save memory by storing common features shared by multiple samples only once.
• Speed up iterations by updating the loss and gradient with respect to common features only once.

Due to the common feature pattern of our production data, employing the common feature trick improves the performance of the training procedure greatly, as will be shown in Section 4.3.

# 4 Experiments

In this section, we evaluate the performance of LS-PLM. Our datasets are generated from Alibaba's mobile display advertising production system. As shown in Table 1, we collect seven datasets in continuous sequential

Table 1: Alibaba's mobile display advertising CTR prediction datasets

Dataset   # features     # samples (train/validation/test)
1         3.04 × 10^6    1.34/0.25/0.26 × 10^9
2         3.27 × 10^6    1.44/0.26/0.26 × 10^9
3         3.49 × 10^6    1.56/0.26/0.25 × 10^9
4         3.67 × 10^6    1.62/0.25/0.26 × 10^9
5         3.82 × 10^6    1.69/0.26/0.26 × 10^9
6         3.95 × 10^6    1.74/0.26/0.26 × 10^9
7         4.07 × 10^6    1.78/0.26/0.26 × 10^9
1704.05194#19
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large scale nonlinear sparse data. In this paper, we introduce an industrial strength solution with a model named Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. Then, we propose a novel algorithm to solve it efficiently, based on directional derivatives and a quasi-Newton method. In addition, we design a distributed system which can run on hundreds of machines in parallel and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering jobs. Since 2012, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]
1704.05426
19
Genre            #Examples                  #Wds.   'S' parses       Agrmt.   Model Acc.
                 Train     Dev.     Test    Prem.   Prem.   Hyp.              ESIM    CBOW
SNLI             550,152   10,000   10,000  14.1    74%     88%      89.0%    86.7%   80.6%
FICTION          77,348    2,000    2,000   14.4    94%     97%      89.4%    73.0%   67.5%
GOVERNMENT       77,350    2,000    2,000   24.4    90%     97%      87.4%    74.8%   67.5%
SLATE            77,306    2,000    2,000   21.4    94%     98%      87.1%    67.9%   60.6%
TELEPHONE        83,348    2,000    2,000   25.9    71%     97%      88.3%    72.2%   63.7%
TRAVEL           77,350    2,000    2,000   24.9    97%     98%      89.9%    73.7%   64.6%
9/11             0         2,000    2,000   20.6    98%     99%
FACE-TO-FACE     0         2,000    2,000   18.1    91%     96%
LETTERS          0         2,000    2,000   20.0    95%
OUP              0         2,000    2,000   25.7
VERBATIM         0         2,000    2,000   28.3
1704.05426#19
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. In addition to being one of the largest corpora available for the task of NLI, at 433k examples, this corpus improves upon available resources in its coverage: it offers data from ten distinct genres of written and spoken English--making it possible to evaluate systems on nearly the full complexity of the language--and it offers an explicit setting for the evaluation of cross-genre domain adaptation.
http://arxiv.org/pdf/1704.05426
Adina Williams, Nikita Nangia, Samuel R. Bowman
cs.CL
10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy number for the CBOW model in the 'matched' setting. v3 adds a discussion of the difficulty of the corpus to the analysis section. v4 is the version that was accepted to NAACL2018
null
cs.CL
20170418
20180219
[ { "id": "1705.02364" } ]
1704.05179
20
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693–1701.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301.
1704.05179#20
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
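A minimal sketch of one SearchQA question-answer-context tuple and of the kind of simple word-selection baseline the abstract mentions; the field names and the baseline's exact scoring rule are assumptions for illustration, not the released schema or the authors' method:

```python
from collections import Counter

# Illustrative layout of one question-answer-context tuple (field names
# are assumed; each pair carries ~49.6 retrieved snippets on average,
# and each snippet keeps meta-data such as its URL).
example = {
    "question": "This Italian astronomer improved the telescope in 1609",
    "answer": "Galileo",
    "snippets": [{"text": "Galileo Galilei improved the telescope ...",
                  "url": "https://example.org/snippet"}],
}

def word_selection(question, snippets):
    """Guess the snippet token that is most frequent across the retrieved
    context, excluding words already present in the question."""
    q_words = set(question.lower().split())
    counts = Counter(tok for s in snippets
                     for tok in s["text"].lower().split()
                     if tok not in q_words)
    return counts.most_common(1)[0][0] if counts else ""

print(word_selection(example["question"], example["snippets"]))
```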
http://arxiv.org/pdf/1704.05179
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
cs.CL
null
null
cs.CL
20170418
20170611
[ { "id": "1511.02301" }, { "id": "1608.05457" }, { "id": "1606.05250" }, { "id": "1611.09268" }, { "id": "1611.09830" }, { "id": "1603.01547" }, { "id": "1506.02075" }, { "id": "1610.05256" } ]
1704.05194
20
periods, aiming to evaluate the consistent performance of the proposed model, which is important for online product serving. In each dataset, training/validation/testing samples are disjointly collected from different days, in a proportion of about 7:1:1. The AUC (Fawcett 2006) metric is used to evaluate model performance. # 4.1 Effectiveness of division number LS-PLM is a piece-wise linear model, with the division number m controlling the model capacity. We evaluate the effect of the division number on the model's performance; the experiment is run on dataset 1 and the results are shown in Figure 4. Generally speaking, a larger m means more parameters and thus a larger model capacity, but the training cost, in both time and memory, also increases. Hence, in real applications we have to balance model performance against training cost. [Figure 4: Model performance with different divisions — training and testing AUC vs. division number m ∈ {6, 12, 24, 36}.]
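To make the role of the division number m concrete, here is a minimal NumPy sketch of the piecewise-linear prediction (a reconstruction of the mixture form described in this record, not the authors' code; the parameter names U and W are illustrative):

```python
import numpy as np

def ls_plm_predict(x, U, W):
    """Predict CTR with an m-division piece-wise linear model.

    x: (d,) feature vector; U, W: (m, d) dividing / fitting parameters.
    A softmax over U @ x softly assigns x to one of m regions, and a
    sigmoid of W @ x gives the local linear fit in each region.
    Larger m means more parameters and more model capacity.
    """
    z = U @ x
    gate = np.exp(z - z.max())
    gate /= gate.sum()                      # softmax over the m divisions
    fit = 1.0 / (1.0 + np.exp(-(W @ x)))    # per-division logistic fit
    return float(gate @ fit)

# Example: try a division number from the range studied (6, 12, 24, 36).
rng = np.random.default_rng(0)
m, d = 12, 100
score = ls_plm_predict(rng.random(d),
                       rng.normal(size=(m, d)),
                       rng.normal(size=(m, d)))
```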
1704.05194#20
Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction
CTR prediction in real-world business is a difficult machine learning problem with large-scale nonlinear sparse data. In this paper, we introduce an industrial-strength solution with a model named the Large Scale Piece-wise Linear Model (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$ regularizers, leading to a non-convex and non-smooth optimization problem. We then propose a novel algorithm, based on directional derivatives and the quasi-Newton method, to solve it efficiently. In addition, we design a distributed system which can run in parallel on hundreds of machines and provides us with industrial scalability. The LS-PLM model can capture nonlinear patterns from massive sparse data, saving us from heavy feature engineering work. Since 2012, LS-PLM has been the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users every day.
http://arxiv.org/pdf/1704.05194
Kun Gai, Xiaoqiang Zhu, Han Li, Kai Liu, Zhe Wang
stat.ML, cs.LG
null
null
stat.ML
20170418
20170418
[]