| doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1510.03055 | 11 | ∏_{k=1}^{N_Y} exp(f(h_{k-1}, e_{y_k})) / Σ_{y'} exp(f(h_{k-1}, e_{y'}))
where f(h_{k-1}, e_{y_k}) denotes the activation function between h_{k-1} and e_{y_k}, and h_{k-1} is the representation output from the LSTM at time k-1. Each sentence concludes with a special end-of-sentence symbol EOS. Commonly, input and output use different LSTMs with separate compositional parameters to capture different compositional patterns.
During decoding, the algorithm terminates when an EOS token is predicted. At each time step, either a greedy approach or beam search can be adopted for word prediction. Greedy search selects the token with the largest conditional probability, the embedding of which is then combined with preceding output to predict the token at the next step.
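As a concrete illustration of the greedy strategy just described, here is a minimal sketch (not the authors' code); the `step` function and the `bos_id`/`eos_id` token ids are hypothetical stand-ins for a trained LSTM decoder.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def greedy_decode(step, h0, bos_id, eos_id, max_len=20):
    """Greedy decoding: at each time step pick the token with the largest
    conditional probability and feed it back in to predict the next token.
    `step(h, token_id)` is a hypothetical decoder interface returning
    (logits over the vocabulary, next hidden state)."""
    h, tok, output = h0, bos_id, []
    for _ in range(max_len):
        logits, h = step(h, tok)
        tok = int(np.argmax(softmax(logits)))
        if tok == eos_id:        # decoding terminates when EOS is predicted
            break
        output.append(tok)
    return output
```

Beam search generalizes this loop by keeping the B highest-scoring partial hypotheses at each step instead of a single one.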
# 4 MMI Models
# 4.1 Notation
In the response generation task, let S denote an input message sequence (source) S = {s_1, s_2, ..., s_{N_s}} where N_s denotes the number of words in S. Let T (target) denote a sequence in response to source sequence S, where T = {t_1, t_2, ..., t_{N_t}, EOS}, N_t is the length of the response (terminated by an EOS token) and t denotes a word token that is associated with a D-dimensional distinct word embedding e_t. V denotes vocabulary size. | 1510.03055#11 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 12 | are needed for all computations from Eqs. 4 through 6. Pseudocode in Algorithm 1 outlines how quantized back propagation is conducted.
Algorithm 1 Quantized Back Propagation (QBP). C is the cost function. binarize(W) and clip(W) stand for the binarize and clip methods. L is the number of layers. Require: a deep model with parameters W, b at each layer; input data x, its corresponding targets
y, and learning rate η. | 1510.03009#12 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 12 | # 4.2 MMI Criterion
The standard objective function for sequence-to-sequence models is the log-likelihood of target T given source S, which at test time yields the statistical decision problem:
T̂ = arg max_T log p(T|S)   (7)
As discussed in the introduction, we surmise that this formulation leads to generic responses being generated, since it only selects for targets given sources, not the converse. To remedy this, we replace it with Maximum Mutual Information (MMI) as the objective function. In MMI, parameters are chosen to maximize (pairwise) mutual information between the source S and the target T:
log [ p(S, T) / (p(S) p(T)) ]   (8) | 1510.03055#12 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 13 | y, and learning rate η.
1: procedure QBP(model, x, y, η)
2:   1. Forward propagation:
3:   for each layer i in range(1, L) do
4:     W_b ← binarize(W)
5:     Compute activation a_i according to its previous layer output a_{i-1}, W_b and b.
6:   2. Backward propagation:
7:   Initialize the output layer's error signal δ = ∂C/∂a_L.
8:   for each layer i in range(L, 1) do
9:     Compute ∆W and ∆b according to Eqs. 4 and 5.
10:    Update W: W ← clip(W - ∆W)
11:    Update b: b ← b - ∆b
12:    Compute ∂C/∂a_{k-1} by updating δ according to Eq. 6.
Like in the forward pass, most of the multiplications are used in the weight updates. Compared with standard back propagation, which would need at least 2MN + 3M multiplications, the amount of multiplications left is negligible in quantized back propagation. Our experiments in Section 5 show that this way of dramatically decreasing multiplications does not necessarily entail a loss in performance.
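As a rough, self-contained sketch (not the paper's Theano implementation) of the two ingredients used above, the snippet below stochastically binarizes a weight matrix for the forward pass and quantizes activations to powers of two for the backward pass, which is what turns the remaining multiplications into sign changes and binary shifts; the layer sizes, learning rate, and exponent bounds are illustrative assumptions.

```python
import numpy as np

def stochastic_binarize(W):
    # Sample W_b in {-1, +1} with P(+1) = (clip(W, -1, 1) + 1) / 2.
    p = (np.clip(W, -1.0, 1.0) + 1.0) / 2.0
    return np.where(np.random.rand(*W.shape) < p, 1.0, -1.0)

def quantize_pow2(x, min_exp=-4, max_exp=3):
    # Round each entry to the nearest power of two (keeping its sign),
    # clipping the exponent, so multiplying by it becomes a binary shift.
    sign = np.sign(x)
    exp = np.clip(np.round(np.log2(np.abs(x) + 1e-12)), min_exp, max_exp)
    return sign * 2.0 ** exp

# One illustrative layer update in the spirit of Algorithm 1:
rng = np.random.RandomState(0)
W = rng.randn(784, 1024) * 0.01          # full-resolution weights (kept for updates)
b = np.zeros(1024)
a_prev = rng.randn(200, 784)             # activations from the previous layer
W_b = stochastic_binarize(W)             # forward pass uses binary weights
a = np.maximum(a_prev @ W_b + b, 0.0)    # multiplications reduce to sign changes
delta = rng.randn(200, 1024)             # error signal from the layer above
dW = quantize_pow2(a_prev).T @ delta     # backward pass uses quantized activations
W = np.clip(W - 0.01 * dW, -1.0, 1.0)    # update and clip, as in Algorithm 1
```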
# 5 EXPERIMENTS | 1510.03009#13 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 13 | This avoids favoring responses that unconditionally enjoy high probability, and instead biases towards those responses that are specific to the given input. The MMI objective can be written as follows:3
T̂ = arg max_T { log p(T|S) - log p(T) }
We use a generalization of the MMI objective which introduces a hyperparameter λ that controls how much to penalize generic responses:
T̂ = arg max_T { log p(T|S) - λ log p(T) }   (9)
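For concreteness, a small sketch of how the objective in Equation 9 could be used to rescore candidate responses; the two log probabilities per candidate are hypothetical numbers standing in for scores from a trained SEQ2SEQ model and a language model.

```python
def mmi_antilm_score(log_p_t_given_s, log_p_t, lam=0.5):
    # Equation 9: log p(T|S) - lambda * log p(T)
    return log_p_t_given_s - lam * log_p_t

# Hypothetical candidates with (log p(T|S), log p(T)) pairs:
candidates = {
    "i don't know":        (-4.0, -2.0),   # likely given S, but also very generic
    "i went there sunday": (-5.0, -9.0),   # less likely given S, but specific
}
best = max(candidates, key=lambda t: mmi_antilm_score(*candidates[t]))
print(best)  # the specific response wins once p(T) is penalized
```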
An alternate formulation of the MMI objective uses Bayes' theorem:
log p(T) = log p(T|S) + log p(S) - log p(S|T)
which lets us rewrite Equation 9 as follows:
T̂ = arg max_T { (1 - λ) log p(T|S) + λ log p(S|T) - λ log p(S) }
  = arg max_T { (1 - λ) log p(T|S) + λ log p(S|T) }   (10)

This weighted MMI objective function can thus be viewed as representing a tradeoff between sources
3Note: log [ p(S, T) / (p(S) p(T)) ] = log [ p(T|S) / p(T) ] = log p(T|S) - log p(T) | 1510.03055#13 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 14 | # 5 EXPERIMENTS
We tried our approach on both fully connected networks and convolutional networks. Our implementation uses Theano (Bastien et al., 2012). We experimented with 3 datasets: MNIST, CIFAR10, and SVHN. In the following subsection we show the performance that these multiplier-light neural networks can achieve. In the subsequent subsections we study some of their properties, such as convergence and robustness, in more detail.
5.1 GENERAL PERFORMANCE
We tested different variations of our approach, and compare the results with Courbariaux et al. (2015) and full precision training (Table 1). All models are trained with stochastic gradient descent (SGD) without momentum. We use batch normalization for all the models to accelerate learning. At training time, binary (ternary) connect and quantized back propagation are used, while at test time, we use the learned full resolution weights for the forward propagation. For each dataset, all hyper-parameters are set to the same values for the different methods, except that the learning rate is adapted independently for each one.
Table 1: Performances across different datasets | 1510.03009#14 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 14 | given targets (i.e., p(S|T )) and targets given sources (i.e., p(T |S)).
Although the MMI optimization criterion has been comprehensively studied for other tasks, such as acoustic modeling in speech recognition (Huang et al., 2001), adapting MMI to SEQ2SEQ training is empirically nontrivial. Moreover, we would like to be able to adjust the value λ in Equation 9 without repeatedly training neural network models from scratch, which would otherwise be extremely time-consuming. Accordingly, we did not train a joint model (log p(T|S) - λ log p(T)), but instead trained maximum likelihood models, and used the MMI criterion only during testing.
# 4.3 Practical Considerations | 1510.03055#14 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 15 | Table 1: Performances across different datasets
| Method | MNIST | CIFAR10 | SVHN |
|---|---|---|---|
| Full precision | 1.33% | 15.64% | 2.85% |
| Binary connect | 1.23% | 12.04% | 2.47% |
| Binary connect + Quantized backprop | 1.29% | 12.08% | 2.48% |
| Ternary connect + Quantized backprop | 1.15% | 12.01% | 2.42% |
# 5.1.1 MNIST
The MNIST dataset (LeCun et al., 1998) has 50000 images for training and 10000 for testing. All images are grey-value images of size 28 × 28 pixels, falling into 10 classes corresponding to the 10 digits. The model we use is a fully connected network with 4 layers: 784-1024-1024-1024-10. At the last layer we use the hinge loss as the cost. The training set is separated into two parts, one of which is the training set with 40000 images and the other the validation set with 10000 images. Training is conducted in mini-batches, with a batch size of 200. | 1510.03009#15 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 15 | # 4.3 Practical Considerations
Responses can be generated either from Equation 9, i.e., log p(T|S) - λ log p(T), or from Equation 10, i.e., (1 - λ) log p(T|S) + λ log p(S|T). We will refer to these formulations as MMI-antiLM and MMI-bidi, respectively. However, these strategies are difficult to apply directly to decoding since they can lead to ungrammatical responses (with MMI-antiLM) or make decoding intractable (with MMI-bidi). In the rest of this section, we will discuss these issues and explain how we resolve them in practice.
# 4.3.1 MMI-antiLM | 1510.03055#15 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 16 | With ternary connect, quantized backprop, and batch normalization, we reach an error rate of 1.15%. This result is better than full precision training (also with batch normalization), which yields an error rate of 1.33%. Without batch normalization, the error rates rise to 1.48% and 1.67%, respectively. We also explored the performance if we sample those weights during test time. With ternary connect at test time, the same model (the one that reaches a 1.15% error rate) yields a 1.49% error rate, which is still fairly acceptable. Our experimental results show that despite removing most multiplications, our approach yields performance comparable to (in fact, even slightly better than) full precision training. The performance improvement is likely due to the regularization effect implied by the stochastic sampling. | 1510.03009#16 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 16 | # 4.3.1 MMI-antiLM
The second term of log p(T|S) - λ log p(T) functions as an anti-language model. It penalizes not only high-frequency, generic responses, but also fluent ones, and thus can lead to ungrammatical outputs. In theory, this issue should not arise when λ is less than 1, since ungrammatical sentences should always be more severely penalized by the first term of the equation, i.e., log p(T|S). In practice, however, we found that the model tends to select ungrammatical outputs that escaped being penalized by p(T|S).
Solution: Again, let N_t be the length of target T. p(T) in Equation 9 can be written as:
p(T) = ∏_{k=1}^{N_t} p(t_k | t_1, t_2, ..., t_{k-1})   (11)
We replace the language model p(T) with U(T), which adapts the standard language model by multiplying by a weight g(k) that is decremented monotonically as the index of the current token k increases:
# Nt | 1510.03055#16 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 17 | Taking this network as a concrete example, the actual amount of multiplications in each case can be estimated precisely. Multiplications in the forward pass are obvious, and for the backward pass Section 4 has already given an estimation. Now we estimate the amount of multiplications incurred by batch normalization. Suppose we have a pre-hidden representation h with mini-batch size B on a layer which has M output units (thus h has shape B × M); then batch normalization can be formalized as γ (h - mean(h)) / std(h) + β. One needs to compute mean(h) over a mini-batch, which takes M multiplications, and BM + 2M multiplications to compute the standard deviation std(h). The division takes BM divisions, which we count as the same amount of multiplications. Multiplying by the γ parameter adds another BM multiplications. So each batch normalization layer takes an extra 3BM + 3M multiplications in the forward pass. The backward pass takes roughly twice as many multiplications in addition, if we use SGD. This amount of multiplications is the same whether or not we use binarization. Bearing this in mind, the total amounts of multiplications invoked in a mini-batch update are shown in Table 2. The last column lists the ratio of multiplications left after applying ternary connect and quantized back propagation. | 1510.03009#17 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 17 | U(T) = ∏_{k=1}^{N_t} p(t_k | t_1, t_2, ..., t_{k-1}) · g(k)   (12)
The underlying intuition here is as follows. First, neural decoding combines the previously built representation with the word predicted at the current step. As decoding proceeds, the influence of the initial input on decoding (i.e., the source sentence representation) diminishes as additional previously-predicted words are encoded in the vector representations.4 In other words, the first words to be predicted significantly determine the remainder of the sentence. Penalizing words predicted early on by the language model contributes more to the diversity of the sentence than it does to words predicted later. Second, as the influence of the input on decoding declines, the influence of the language model comes to dominate. We have observed that ungrammatical segments tend to appear in the later parts of the sentences, especially in long sentences.
We adopt the most straightforward form of g(k) by setting a threshold (γ) and penalizing the first γ words, where5
g(k) = { 1 if k ≤ γ; 0 if k > γ }   (13) | 1510.03055#17 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 18 | Table 2: Estimated number of multiplications in MNIST net
| | Full precision | Ternary connect + Quantized backprop | ratio |
|---|---|---|---|
| without BN | 1.7480 × 10^9 | 1.8492 × 10^6 | 0.001058 |
| with BN | 1.7535 × 10^9 | 7.4245 × 10^6 | 0.004234 |
# 5.1.2 CIFAR10
CIFAR10 (Krizhevsky & Hinton, 2009) contains images of size 32 × 32 RGB pixels. Like for MNIST, we split the dataset into 40000, 10000, and 10000 training-, validation-, and test-cases, respectively. We apply our approach in a convolutional network for this dataset. The network has 6 convolution/pooling layers, 1 fully connected layer and 1 classification layer. We use the hinge loss for training, with a batch size of 100. We also tried using ternary connect at test time. On the model trained by ternary connect and quantized back propagation, this yields a 13.54% error rate. Similar to what we observed in the fully connected network, binary (ternary) connect and quantized back propagation yield slightly better performance than ordinary SGD.
# 5.1.3 SVHN | 1510.03009#18 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 18 | g(k) = { 1 if k ≤ γ; 0 if k > γ }   (13)
The objective in Equation 9 can thus be rewritten as:
log p(T|S) - λ log U(T)   (14)
where direct decoding is tractable.
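A sketch of the practical MMI-antiLM objective in Equations 12-14, assuming per-token log probabilities from the SEQ2SEQ model and from a language model are already available; the function names and the λ, γ values are illustrative, not taken from the paper.

```python
def log_u(lm_token_logps, gamma=5):
    # Equations 12-13: only the first gamma tokens incur the LM penalty (g(k) = 1).
    return sum(lp for k, lp in enumerate(lm_token_logps, start=1) if k <= gamma)

def mmi_antilm_objective(s2s_token_logps, lm_token_logps, lam=0.5, gamma=5):
    # Equation 14: log p(T|S) - lambda * log U(T)
    return sum(s2s_token_logps) - lam * log_u(lm_token_logps, gamma)

# Example: a 7-token candidate; only its first 5 tokens are penalized by the LM.
s2s = [-0.5, -0.7, -1.0, -0.6, -0.9, -0.4, -0.3]
lm  = [-0.2, -0.3, -0.5, -0.8, -1.1, -1.5, -2.0]
print(mmi_antilm_objective(s2s, lm))
```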
# 4.3.2 MMI-bidi
Direct decoding from (1 - λ) log p(T|S) + λ log p(S|T) is intractable, as the second part (i.e., p(S|T)) requires completion of target generation before p(S|T) can be effectively computed. Due to the enormous search space for target T, exploring all possibilities is infeasible. For practical reasons, then, we turn to an approximation approach that involves first generating
4Attention models (Xu et al., 2015) may offer some promise of addressing this issue.
5We experimented with a smooth decay in g(k) rather than a stepwise function, but this did not yield better performance. | 1510.03055#18 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 19 | # 5.1.3 SVHN
The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) contains RGB images of house numbers. It contains more than 600,000 images in its extended training set, and roughly 26,000 images in its test set. We remove 6,000 images from the training set for validation. We use 7 layers of convolution/pooling, 1 fully connected layer, and 1 classification layer. Batch size is also
set to 100. The performance we get is consistent with our results on CIFAR10. Extending the ternary connect mechanism to test time yields a 2.99% error rate on this dataset. Again, using binary (ternary) connect and quantized back propagation improves over ordinary SGD.
# 5.2 CONVERGENCE | 1510.03009#19 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 19 | 5We experimented with a smooth decay in g(k) rather than a stepwise function, but this did not yield better performance.
N-best lists given the first part of the objective function, i.e., the standard SEQ2SEQ model p(T|S). Then we rerank the N-best lists using the second term of the objective function. Since N-best lists produced by SEQ2SEQ models are generally grammatical, the final selected options are likely to be well-formed. Model reranking has obvious drawbacks. It results in non-globally-optimal solutions by first emphasizing standard SEQ2SEQ objectives. Moreover, it relies heavily on the system's success in generating a sufficiently diverse N-best set, requiring that a long list of N-best candidates be generated for each message.
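A minimal sketch of this reranking step, assuming an N-best list from the standard SEQ2SEQ model together with scores from a separately trained p(S|T) model; the candidate strings, scores, and the λ, γ weights are hypothetical (in practice the weights are tuned with MERT).

```python
def rerank_bidi(nbest, lam=0.5, gamma=0.1):
    """nbest: list of (response, log_p_t_given_s, log_p_s_given_t, length N_t).
    Reranks with (1 - lambda) * log p(T|S) + lambda * log p(S|T) + gamma * N_t."""
    def score(item):
        _, lp_ts, lp_st, n_t = item
        return (1.0 - lam) * lp_ts + lam * lp_st + gamma * n_t
    return sorted(nbest, key=score, reverse=True)

nbest = [("i don't know", -3.5, -12.0, 4),
         ("i am not sure what you mean", -4.0, -9.0, 7),
         ("try the pasta at luigi's", -5.5, -4.5, 6)]
print(rerank_bidi(nbest)[0][0])  # the response that best explains the source wins
```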
Nonetheless, these two variants of the MMI criterion work well in practice, significantly improving both interestingness and diversity.
# 4.4 Training | 1510.03055#19 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 20 | # 5.2 CONVERGENCE
Taking the convolutional networks on CIFAR10 as a test-bed, we now study the learning behaviour in more detail. Figure 1 shows the performance of the model in terms of test set error during training. The figure shows that binarization makes the network converge more slowly than ordinary SGD, but yields a better optimum after the algorithm converges. Compared with binary connect (red line), adding quantization in the error propagation (yellow line) doesn't hurt the model accuracy at all. Moreover, ternary connect combined with quantized back propagation (green line) surpasses the other three approaches.
Figure 1: Test set error rate at each epoch for ordinary back propagation, binary connect, binary connect with quantized back propagation, and ternary connect with quantized back propagation. The vertical axis is on a logarithmic scale.
5.3 THE EFFECT OF BIT CLIPPING | 1510.03009#20 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 20 | Nonetheless, these two variants of the MMI criterion work well in practice, significantly improving both interestingness and diversity.
# 4.4 Training
Recent research has shown that deep LSTMs work better than single-layer LSTMs for SEQ2SEQ tasks (Vinyals et al., 2015; Sutskever et al., 2014). We adopt a deep structure with four LSTM layers for encoding and four LSTM layers for decoding, each of which consists of a different set of parameters. Each LSTM layer consists of 1,000 hidden neurons, and the dimensionality of word embeddings is set to 1,000. Other training details are given below, broadly aligned with Sutskever et al. (2014).
⢠LSTM parameters and embeddings are initial- ized from a uniform distribution in [â0.08, 0.08].
⢠Stochastic gradient decent is implemented us- ing a ï¬xed learning rate of 0.1.
• Batch size is set to 256.
• Gradient clipping is adopted by scaling gradients when the norm exceeds a threshold of 1.
Our implementation on a single GPU processes at a speed of approximately 600-1200 tokens per second on a Tesla K40. | 1510.03055#20 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 21 | 5.3 THE EFFECT OF BIT CLIPPING
In Section 4 we mentioned that quantization will be limited by the number of bits we use. The maximum number of bits to shift determines the amount of memory needed, but it also determines in what range a single weight update can vary. Figure 2 shows the model performance as a function of the maximum allowed bit shifts. These experiments are conducted on the MNIST dataset, with the aforementioned fully connected model. For each case of bit clipping, we repeat the experiment 10 times with different initial random instantiations.
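A sketch of the operation being stress-tested here: quantizing a value to the nearest power of two while capping how many bits the resulting shift may span; the epsilon and the symmetric default bound are illustrative choices, not taken from the paper.

```python
import numpy as np

def quantize_with_clipped_shift(x, max_shift=3):
    # Round |x| to the nearest power of two, but clip the exponent so the
    # corresponding binary shift never exceeds `max_shift` bits either way.
    sign = np.sign(x)
    exp = np.round(np.log2(np.abs(x) + 1e-12))
    exp = np.clip(exp, -max_shift, max_shift)
    return sign * 2.0 ** exp

x = np.array([0.003, -0.2, 1.7, -25.0])
print(quantize_with_clipped_shift(x, max_shift=3))  # -> [0.125, -0.25, 2.0, -8.0]
```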
The figure shows that the approach is not very sensitive to the number of bits used. The maximum allowed shift in the figure varies from 2 bits to 10 bits, and the performance remains roughly the same. Even by restricting bit shifts to 2, the model can still learn successfully. The fact that the performance is not very sensitive to the maximum allowed bit shifts suggests that we do not need to redefine the number of bits used for quantizing x for different tasks, which would be an important practical advantage. | 1510.03009#21 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 21 | The p(S|T ) model described in Section 4.3.1 was trained using the same model as that of p(T |S), with messages (S) and responses (T ) interchanged.
# 4.5 Decoding
# 4.5.1 MMI-antiLM
As described in Section 4.3.1, decoding using log p(T|S) - λU(T) can be readily implemented by predicting tokens at each time-step. In addition, we found in our experiments that it is also important to take into account the length of responses in decoding. We thus linearly combine the loss function with length penalization, leading to an ultimate score for a given target T as follows:
Score(T) = p(T|S) - λU(T) + γN_t   (15) | 1510.03055#21 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 22 | The x to be quantized is not necessarily distributed symmetrically around 2. For example, Figure 3 shows the distribution of x at each layer in the middle of training. The maximum amount of shift to the left does not need to be the same as that to the right. A more efficient way is to use different values for the maximum left shift and the maximum right shift. Bearing that in mind, we set it to 3 bits maximum to the right and 4 bits to the left.
Figure 2: Model performance as a function of the maximum bit shifts allowed in quantized back propagation. The dark blue line indicates mean error rate over 10 independent runs, while light blue lines indicate their corresponding maximum and minimum error rates.
Figure 3: Histogram of representations at each layer while training a fully connected network for MNIST. The figure represents a snapshot in the middle of training. Each subfigure, from bottom up, represents the histogram of hidden states from the first layer to the last layer. The horizontal axes stand for the exponent of the layers' representations, i.e., log2 x.
# 6 CONCLUSION AND FUTURE WORK | 1510.03009#22 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 22 | Score(T) = p(T|S) - λU(T) + γN_t   (15)
where N_t denotes the length of the target and γ denotes the associated weight. We optimize γ and λ using MERT (Och, 2003) on N-best lists of response candidates. The N-best lists are generated using the decoder with beam size B = 200. We set a maximum length of 20 for generated candidates. At each time step of decoding, we are presented with B × B candidates. We first add all hypotheses with an EOS token generated at the current time step to the N-best list. Next we preserve the top B unfinished hypotheses and move to the next time step. We therefore keep the beam size constant at 200 as some hypotheses are completed and taken down, by adding in more unfinished hypotheses. This makes the size of the final N-best list for each input much larger than the beam size.
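A schematic of the N-best bookkeeping just described (not the authors' decoder): hypotheses that emit EOS are moved to the N-best list, and the beam is refilled with the best unfinished ones, so the final list can grow beyond the beam size; the `expand` function standing in for one decoder step is hypothetical.

```python
def beam_with_nbest(expand, eos_id, beam_size=200, max_len=20):
    """expand(hyp) -> list of (tokens, score) continuations of `hyp`, where a
    hypothesis is a (tokens, score) pair and `score` is its decoding score."""
    beam, nbest = [((), 0.0)], []
    for _ in range(max_len):
        candidates = [c for hyp in beam for c in expand(hyp)]
        candidates.sort(key=lambda c: c[1], reverse=True)
        nbest.extend(c for c in candidates if c[0][-1] == eos_id)         # finished
        beam = [c for c in candidates if c[0][-1] != eos_id][:beam_size]  # refill
        if not beam:
            break
    return nbest
```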
# 4.5.2 MMI-bidi
We generate N-best lists based on P (T |S) and then rerank the list by linearly combining p(T |S), λp(S|T ), and γNt. We use MERT to tune the weights λ and γ on the development set.6 | 1510.03055#22 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 23 | # 6 CONCLUSION AND FUTURE WORK
We proposed a way to eliminate most of the floating point multiplications used during training of a feedforward neural network. This could make it possible to dramatically accelerate the training of neural networks by using dedicated hardware implementations.
A somewhat surprising fact is that instead of damaging prediction accuracy, the approach tends to improve it, which is probably due to several factors. First is the regularization effect that the stochastic sampling process entails. Noise injection brought about by sampling the weight values can be viewed as a regularizer, and that improves model generalization. The second factor is low precision weight values. Basically, the generalization error bounds for neural nets depend on the weights' precision. Low precision prevents the optimizer from finding solutions that require a lot of precision, which correspond to very thin (high curvature) critical points, and these minima are more likely to correspond to overfitted solutions than broad minima (there are more functions that are compatible with such solutions, corresponding to a smaller description length and thus better generalization). Similarly,
Neelakantan et al. (2015) add noise to gradients, which makes the optimizer prefer large-basin areas and forces it to find broad minima. It also lowers the training loss and improves generalization. | 1510.03009#23 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 23 | # 5 Experiments
# 5.1 Datasets
Twitter Conversation Triple Dataset We used an extension of the dataset described in Sordoni et al. (2015), which consists of 23 million conversational snippets randomly selected from a collection of 129M context-message-response triples extracted from the Twitter Firehose over the 3-month period from June through August 2012. For the purposes of our experiments, we limited context to the turn in the conversation immediately preceding the message. In our LSTM models, we used a simple input
6As with MMI-antiLM, we could have used grid search instead of MERT, since there are only 3 features and 2 free parameters. In either case, the search attempts to find the best tradeoff between p(T|S) and p(S|T) according to BLEU (which tends to weight the two models relatively equally) and ensures that generated responses are of reasonable length. | 1510.03055#23 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 24 | Directions for future work include exploring actual implementations of this approach (for example, using FPGAs), seeking more efficient ways of binarization, and extending the approach to recurrent neural networks.
# ACKNOWLEDGMENTS
The authors would like to thank the developers of Theano (Bastien et al., 2012). We acknowledge the support of the following agencies for research funding and computing support: Samsung, NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR.
# REFERENCES
Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian J., Bergeron, Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012. | 1510.03009#24 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 24 |
| Model | # of training instances | BLEU | distinct-1 | distinct-2 |
|---|---|---|---|---|
| SEQ2SEQ (baseline) | 23M | 4.31 | .023 | .107 |
| SEQ2SEQ (greedy) | 23M | 4.51 | .032 | .148 |
| MMI-antiLM: log p(T|S) - λU(T) | 23M | 4.86 | .033 | .175 |
| MMI-bidi: (1 - λ) log p(T|S) + λ log p(S|T) | 23M | 5.22 | .051 | .270 |
| SMT (Ritter et al., 2011) | 50M | 3.60 | .098 | .351 |
| SMT+neural reranking (Sordoni et al., 2015) | 50M | 4.44 | .101 | .358 |
Table 2: Performance on the Twitter dataset of 4-layer SEQ2SEQ models and MMI models. distinct-1 and distinct-2 are respectively the number of distinct unigrams and bigrams divided by total number of generated words.
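For reference, a small sketch of how the diversity metrics in the table can be computed over a set of tokenized system outputs (the example outputs are made up).

```python
def distinct_n(responses, n):
    """Number of distinct n-grams divided by the total number of generated words."""
    ngrams, total_words = set(), 0
    for tokens in responses:
        total_words += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total_words, 1)

outputs = [["i", "don't", "know"], ["i", "don't", "know"], ["see", "you", "sunday"]]
print(distinct_n(outputs, 1), distinct_n(outputs, 2))  # distinct-1 and distinct-2
```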
model in which contexts and messages are concatenated to form the source input. | 1510.03055#24 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 25 | Burge, Peter S., van Daalen, Max R., Rising, Barry J. P., and Shawe-Taylor, John S. Stochastic bitstream neural networks. In Maass, Wolfgang and Bishop, Christopher M. (eds.), Pulsed Neural Networks, pp. 337-352. MIT Press, Cambridge, MA, USA, 1999. ISBN 0-626-13350-4. URL http://dl.acm.org/citation.cfm?id=296533.296552.
Cheng, Zhiyong, Soudry, Daniel, Mao, Zexi, and Lan, Zhenzhong. Training binary multilayer neural networks for image classification using expectation backpropagation. arXiv preprint arXiv:1503.03562, 2015.
Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. BinaryConnect: Training deep neural networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, 2015. | 1510.03009#25 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 25 | model in which contexts and messages are concatenated to form the source input.
For tuning and evaluation, we used the development dataset (2118 conversations) and the test dataset (2114 examples), augmented using information retrieval methods to create a multi-reference set, as described by Sordoni et al. (2015). The selection criteria for these two datasets included a component of relevance/interestingness, with the result that dull responses will tend to be penalized in evaluation.
Model        BLEU            distinct-1        distinct-2
SEQ2SEQ      1.28            0.0056            0.0136
MMI-antiLM   1.74 (+35.9%)   0.0184 (+228%)    0.066 (+407%)
MMI-bidi     1.44 (+28.2%)   0.0103 (+83.9%)   0.0303 (+122%)
Table 3: Performance of the SEQ2SEQ baseline and two MMI models on the OpenSubtitles dataset.
with source and target length restricted to the range of [6,18]. | 1510.03055#25 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 26 | Gulcehre, Caglar, Firat, Orhan, Xu, Kelvin, Cho, Kyunghyun, Barrault, Loic, Lin, Huei-Chi, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535, 2015.
Jeavons, Peter, Cohen, David A., and Shawe-Taylor, John. Generating binary sequences for stochastic computing. Information Theory, IEEE Transactions on, 40(3):716–720, 1994.
Kim, Minje and Smaragdis, Paris. Bitwise neural networks. In Proceedings of The 31st International Conference on Machine Learning, pp. 0–0, 2015.
Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classiï¬cation with deep convo- lutional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012. | 1510.03009#26 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 26 | with source and target length restricted to the range of [6,18].
OpenSubtitles dataset In addition to unscripted Twitter conversations, we also used the OpenSubtitles (OSDb) dataset (Tiedemann, 2009), a large, noisy, open-domain dataset containing roughly 60M-70M scripted lines spoken by movie characters. This dataset does not specify which character speaks each subtitle line, which prevents us from inferring speaker turns. Following Vinyals et al. (2015), we make the simplifying assumption that each line of subtitle constitutes a full speaker turn. Our models are trained to predict the current turn given the preceding ones based on the assumption that consecutive turns belong to the same conversation. This introduces a degree of noise, since consecutive lines may not appear in the same conversation or scene, and may not even be spoken by the same character. This limitation potentially renders the OSDb dataset unreliable for evaluation purposes. For evaluation purposes, we therefore used data from the Internet Movie Script Database (IMSDB),7 which explicitly identifies which character speaks each line of the script. This allowed us to identify consecutive message-response pairs spoken by different characters. We randomly selected two subsets as development and test datasets, each containing 2k pairs,
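The turn-splitting assumption in the paragraph above (every subtitle line is a full speaker turn, and consecutive lines form a message-response pair) can be illustrated with a short sketch. The pairing of adjacent lines is the only logic shown; the helper name, whitespace tokenization, and the reuse of the [6,18] token-length range quoted in this record are assumptions:

```python
def subtitle_pairs(lines, min_len=6, max_len=18):
    """Pair each subtitle line with the next one as (message, response)."""
    pairs = []
    for msg, resp in zip(lines, lines[1:]):
        if (min_len <= len(msg.split()) <= max_len
                and min_len <= len(resp.split()) <= max_len):
            pairs.append((msg, resp))
    return pairs

turns = [
    "you programmed me to gather intelligence and that is all i have ever done",
    "i know but we would still have to talk to him about the report first",
    "he is a good guy and i know him as well as anyone here does now",
]
print(len(subtitle_pairs(turns)))  # 2 consecutive-line pairs survive the length filter
```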
# 5.2 Evaluation | 1510.03055#26 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 27 | Kwan, Hon Keung and Tang, CZ. Multiplierless multilayer feedforward neural network design suitable for continuous input-output mapping. Electronics Letters, 29(14):1259â1260, 1993.
In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8595–8598. IEEE, 2013.
LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Machado, Emerson Lopes, Miosso, Cristiano Jacques, von Borries, Ricardo, Coutinho, Murilo, Berger, Pedro de Azevedo, Marques, Thiago, and Jacobi, Ricardo Pezzuol. Computational cost reduction in learned transform classifications. arXiv preprint arXiv:1504.06779, 2015.
Marchesi, Michele, Orlandi, Gianni, Piazza, Francesco, and Uncini, Aurelio. Fast neural networks without multipliers. Neural Networks, IEEE Transactions on, 4(1):53–62, 1993.
8
Published as a conference paper at ICLR 2016 | 1510.03009#27 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 27 | # 5.2 Evaluation
For parameter tuning and final evaluation, we used BLEU (Papineni et al., 2002), which was shown to correlate reasonably well with human judgment on the response generation task (Galley et al., 2015). In the case of the Twitter models, we used multi-reference BLEU. As the IMSDB data is too limited to support extraction of multiple references, only single reference BLEU was used in training and evaluating the OSDb models.
We did not follow Vinyals and Le (2015) in using perplexity as evaluation metric. Perplexity is unlikely to be a useful metric in our scenario, since our proposed model is designed to steer away from the standard SEQ2SEQ model in order to diversify the outputs. We report degree of diversity by calculating the number of distinct unigrams and bigrams in generated responses. The value is scaled by total number of generated tokens to avoid favoring long sentences (shown as distinct-1 and distinct-2 in Tables 2 and 3).
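A minimal sketch of distinct-1 and distinct-2 as defined here, i.e., the number of distinct unigrams or bigrams divided by the total number of generated tokens; the function name and whitespace tokenization are assumptions rather than the authors' evaluation script:

```python
def distinct_n(responses, n):
    """Number of distinct n-grams divided by the total number of generated tokens."""
    ngrams, total_tokens = set(), 0
    for response in responses:
        tokens = response.split()  # whitespace tokenization (assumption)
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / total_tokens if total_tokens else 0.0

outputs = ["i don t know", "i don t know what you mean", "long time no see"]
print(distinct_n(outputs, 1), distinct_n(outputs, 2))  # distinct-1, distinct-2
```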
# 5.3 Results | 1510.03055#27 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03009 | 28 | 8
Published as a conference paper at ICLR 2016
Neelakantan, Arvind, Vilnis, Luke, Le, Quoc V, Sutskever, Ilya, Kaiser, Lukasz, Kurach, Karol, and Martens, James. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Read- ing digits in natural images with unsupervised feature learning. In NIPS workshop on deep learn- ing and unsupervised feature learning, pp. 5. Granada, Spain, 2011.
Simard, Patrice Y and Graf, Hans Peter. Backpropagation without multiplication. In Advances in Neural Information Processing Systems, pp. 232–239, 1994.
van Daalen, Max, Jeavons, Pete, Shawe-Taylor, John, and Cohen, Dave. Device for generating binary sequences for stochastic computing. Electronics Letters, 29(1):80–81, 1993.
9 | 1510.03009#28 | Neural Networks with Few Multiplications | For most deep learning algorithms training is notoriously time consuming.
Since most of the computation in training neural networks is typically spent on
floating point multiplications, we investigate an approach to training that
eliminates the need for most of these. Our method consists of two parts: First
we stochastically binarize weights to convert multiplications involved in
computing hidden states to sign changes. Second, while back-propagating error
derivatives, in addition to binarizing the weights, we quantize the
representations at each layer to convert the remaining multiplications into
binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10,
SVHN) show that this approach not only does not hurt classification performance
but can result in even better performance than standard stochastic gradient
descent training, paving the way to fast, hardware-friendly training of neural
networks. | http://arxiv.org/pdf/1510.03009 | Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio | cs.LG, cs.NE | Published as a conference paper at ICLR 2016. 9 pages, 3 figures | null | cs.LG | 20151011 | 20160226 | [
{
"id": "1503.03535"
},
{
"id": "1504.06779"
},
{
"id": "1503.03562"
},
{
"id": "1511.00363"
},
{
"id": "1511.06807"
}
] |
1510.03055 | 28 | 7IMSDB (http://www.imsdb.com/) is a relatively small database of around 0.4 million sentences and thus not suit- able for open domain dialogue training.
Twitter Dataset We first report performance on Twitter datasets in Table 2, along with results for different models (i.e., Machine Translation and MT+neural reranking) reprinted from Sordoni et al.
(2015) on the same dataset. The baseline is the SEQ2SEQ model with its standard likelihood objective and a beam size of 200. We compare this baseline against greedy-search SEQ2SEQ (Vinyals and Le, 2015), which can help achieve higher diversity by increasing search errors.8
Machine Translation is the phrase-based MT system described in (Ritter et al., 2011). MT features include commonly used ones in Moses (Koehn et al., 2007), e.g., forward and backward maximum likelihood "translation" probabilities, word and phrase penalties, linear distortion, etc. For more details, refer to Sordoni et al. (2015). | 1510.03055#28 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 29 | MT+neural reranking is the phrase-based MT sys- tem, reranked using neural models. N-best lists are ï¬rst generated from the MT system. Recurrent neu- ral models generate scores for N-best list candidates given the input messages. These generated scores are re-incorporated to rerank all the candidates. Ad- ditional features to score [1-4]-gram matches be- tween context and response and between message and context (context and message match CMM fea- tures) are also employed, as in Sordoni et al. (2015). MT+neural reranking achieves a BLEU score of 4.44, which to the best of our knowledge repre- sents the previous state-of-the-art performance on this Twitter dataset. Note that Machine Translation and MT+neural reranking are trained on a much larger dataset of roughly 50 million examples. A sig- niï¬cant performance boost is observed from MMI- bidi over baseline SEQ2SEQ, both in terms of BLEU score and diversity. | 1510.03055#29 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 30 | The beam size of 200 used in our main experiments is quite conservative, and BLEU scores only slightly degrade when reducing beam size to 20. For MMI-bidi, BLEU scores for beam sizes of 200, 50, 20 are respectively 5.90, 5.86, 5.76. A beam size of 20 still produces relatively large N-best lists (173 elements on average) with responses of varying lengths, which offer enough diversity for the p(S|T) model to have a significant effect.
OpenSubtitles Dataset All models achieve significantly lower BLEU scores on this dataset than on
8Another method would have been to sample from the p(T|S) distribution to increase diversity. While these methods have merits, we think we ought to find a proper objective and optimize it exactly, rather than cope with an inadequate one and add noise to it.
Comparator                  Gain   95% CI
SMT (Ritter et al., 2011)   0.29   [0.25, 0.32]
SMT+neural reranking        0.28   [0.25, 0.32]
SEQ2SEQ (baseline)          0.11   [0.07, 0.14]
SEQ2SEQ (greedy)            0.08   [0.04, 0.11] | 1510.03055#30 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 31 | Table 6: MMI-bidi gains over comparator systems, based on pairwise human judgments.
the Twitter dataset, primarily because the IMSDB data provides only single references for evaluation. We note, however, that baseline SEQ2SEQ models yield lower levels of unigram diversity (distinct-1) on the OpenSubtitles dataset than on the Twitter data (0.0056 vs 0.017), which suggests that other fac- tors may be in play. It is likely that movie dialogs are much more concise and information-rich than typical conversations on Twitter, making it harder to match gold-standard responses and causing the learned models to strongly favor safe, conservative responses. | 1510.03055#31 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 32 | Table 3 shows that the MMI-antiLM model yields a significant performance boost, with a BLEU score increase of up to 36% and a more than 200% jump in unigram diversity. Our interpretation of this huge performance improvement is that the diversity and complexity of input messages lead standard SEQ2SEQ models to generate very conservative responses,9 which fail to match the more interesting reference strings typical of this dataset. This interpretation is also supported by the fact that the MMI-bidi model does not produce as significant a performance boost as MMI-antiLM. In the case of MMI-bidi, N-best lists generated using standard SEQ2SEQ models remain conservative and uninteresting, attenuating the impact of later reranking. An important potential limitation of MMI-bidi model is thus that its performance hinges on the initial generation of a highly diverse, informative N-best list.
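The MMI-bidi reranking discussed here scores each hypothesis of a SEQ2SEQ N-best list with (1 − λ) log p(T|S) + λ log p(S|T), the objective listed in Table 2. The sketch below shows only that reranking step; the two scoring callables stand in for the trained forward and backward models, and the toy scorers are purely illustrative:

```python
def mmi_bidi_rerank(source, nbest, log_p_t_given_s, log_p_s_given_t, lam=0.5):
    """Sort an N-best list by (1 - lam) * log p(T|S) + lam * log p(S|T)."""
    def score(target):
        return ((1.0 - lam) * log_p_t_given_s(target, source)
                + lam * log_p_s_given_t(source, target))
    return sorted(nbest, key=score, reverse=True)

# Toy stand-in scorers; a real system would query the trained forward and
# backward SEQ2SEQ models for these log-probabilities.
def fake_forward(target, source):   # log p(T|S): favors short, generic targets
    return -0.5 * len(target.split())

def fake_backward(source, target):  # log p(S|T): rewards overlap with the source
    return float(len(set(source.split()) & set(target.split())))

nbest = ["i don t know", "i have been looking all over for you", "what time are you going"]
print(mmi_bidi_rerank("what time you going", nbest, fake_forward, fake_backward))
```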
Qualitative Evaluation We employed crowd-sourced judges to provide evaluations for a random sample of 1000 items in the Twitter test dataset. Table 6 shows the results of human evaluations between paired systems. Each output pair was ranked | 1510.03055#32 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 34 | message I did not get the report from an mi-6 agent. You havenât been given an assignment in this case. Iâm losing my grip. MMI P (T |S) â λP (T ) You did the right thing, did you? SEQ2SEQ I donât think thatâs a good idea. Iâve been looking all over for you. I donât know what you are talking about. I donât know what you are talking about. Come on, come on You know that, donât you? Iâm the only one in the world. I have something we need to talk about. You do have fun, donât you? I am ready to help. You programmed me to gather intelli- gence. Thatâs all Iâve ever done. I mean, weâd have to talk to him. This has nothing to do with him. I mean, heâs a good guy. I know him as much. I mean, I donât know. You donât know anything about him, do you? I donât know I donât know what you are talking about Yes, he is, isnât he? How come you never say it? He is the one behind this scheme to | 1510.03055#34 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 36 | Table 4: Responses from the SEQ2SEQ baseline and MMI-antiLM models on the OpenSubtitles dataset.
MMI-bidi Like 2 weeks ago!! SEQ2SEQ I don't even know. message Ohh I've never seen that! How long does it take you guys to learn the drill? You'll know when we sign Heskey in 2 weeks I'm waiting!! I have even got biscuits waiting for you?? My parents actually had touch too much played at their wedding. Say anything you want... love me and I love you. I am off all week next week How are you doing? I'll see what I can do I sure hope so.. I have no idea what you're talking about I'll send you a pic when I get home xx I have no idea what you're talking about How old are they?? I know you I love you so much. I love you too, baby. What time you going? I'm good, you? What time you going? I'm good, thanks.
Table 5: Responses from the SEQ2SEQ baseline and MMI-bidi models on the Twitter dataset. | 1510.03055#36 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 37 | Table 5: Responses from the SEQ2SEQ baseline and MMI-bidi models on the Twitter dataset.
Input: What are you doing?
1. I've been looking for you. 2. I want to talk to you. 3. Just making sure you're OK. 6. I'm looking for a doctor.
Input: What is your name?
1. Blue! 2. Peter. 3. Tyler.
Input: How old are you?
1. Twenty-eight. 2. Twenty-four. 3. Long.
Table 7: Examples generated by the MMI-antiLM model on the OpenSubtitles dataset.
by 5 judges, who were asked to decide which of the two outputs was better. They were instructed to pre- fer outputs that were more speciï¬c (relevant) to the message and preceding context, as opposed to those that were more generic. Ties were permitted. Iden- tical strings were algorithmically assigned the same | 1510.03055#37 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 38 | score. The mean of differences between outputs is shown as the gain for MMI-bidi over the competing system. At a signiï¬cance level of α = 0.05, we ï¬nd that MMI-bidi outperforms both baseline and greedy SEQ2SEQ systems, as well as the weaker SMT and SMT+RNN baselines. MMI-bidi outperforms SMT in human evaluations despite the greater lexical di- versity of MT output.
Separately, judges were also asked to rate overall quality of MMI-bidi output over the same 1000-item sample in isolation, each output being evaluated by 7 judges in context using a 5-point scale. The mean rating was 3.84 (median: 3.85, 1st Qu: 3.57, 3rd Qu: 4.14), suggesting that overall MMI-bidi output does appear reasonably acceptable to human judges.10
10In the human evaluations, we asked the annotators to prefer responses that were more speciï¬c to the context only when do- ing the pairwise evaluations of systems. The absolute evaluation was conducted separately (on different days) on the best system, | 1510.03055#38 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 39 | Table 7 presents the N-best candidates generated using the MMI-bidi model for the inputs of Table 1. We see that MMI generates signiï¬cantly more inter- esting outputs than SEQ2SEQ.
In Tables 4 and 5, we present responses generated by different models. All examples were randomly sampled (without cherry picking). We see that the baseline SEQ2SEQ model tends to generate reasonable responses to simple messages such as How are you doing? or I love you. As the complexity of the message increases, however, the outputs switch to more conservative, duller forms, such as I don't know or I don't know what you are talking about. An occasional answer of this kind might go unnoticed in a natural conversation, but a dialog agent that always produces such responses risks being perceived as uncooperative. MMI-bidi models, on the other hand, produce far more diverse and interesting responses.
# 6 Conclusions | 1510.03055#39 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 40 | # 6 Conclusions
We investigated an issue encountered when applying SEQ2SEQ models to conversational response generation. These models tend to generate safe, commonplace responses (e.g., I don't know) regardless of the input. Our analysis suggests that the issue is at least in part attributable to the use of unidirectional likelihood of output (responses) given input (messages). To remedy this, we have proposed using Maximum Mutual Information (MMI) as the objective function. Our results demonstrate that the proposed MMI models produce more diverse and interesting responses, while improving quality as measured by BLEU and human evaluation.
To the best of our knowledge, this paper represents the first work to address the issue of output diversity in the neural generation framework. We have focused on the algorithmic dimensions of the problem. Unquestionably numerous other factors such as grounding, persona (of both user and agent), and intent also play a role in generating diverse, conversationally interesting outputs. These must be left for future investigation. Since the challenge of producing interesting outputs also arises in other neural generation tasks, including image-description gener
and annotators were asked to evaluate the overall quality of the response, speciï¬cally Provide your impression of overall qual- ity of the response in this particular conversation. | 1510.03055#40 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 41 | and annotators were asked to evaluate the overall quality of the response, speciï¬cally Provide your impression of overall qual- ity of the response in this particular conversation.
ation, question answering, and potentially any task where mutual correspondences must be modeled, the implications of this work extend well beyond conversational response generation.
# Acknowledgments
We thank the anonymous reviewers, as well as Dan Jurafsky, Alan Ritter, Stephanie Lukin, George Spithourakis, Alessandro Sordoni, Chris Quirk, Meg Mitchell, Jacob Devlin, Oriol Vinyals, and Dhruv Batra for their comments and suggestions.
# References
David Ameixa, Luisa Coheur, Pedro Fialho, and Paulo Quaresma. 2014. Luke, I am your father: dealing with out-of-domain requests by using movies subtitles. In Intelligent Virtual Agents, pages 13â21. Springer. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of the International Conference on Learning Representations (ICLR). | 1510.03055#41 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 42 | L. Bahl, P. Brown, P. de Souza, and R. Mercer. 1986. Maximum mutual information estimation of hidden Markov model parameters for speech recognition. Acoustics, Speech, and Signal Processing, IEEE Inter- national Conference on ICASSP â86., pages 49â52.
IRIS: a chat-oriented dialogue system based on the vector space model. In Proc. of the ACL 2012 System Demonstrations, pages 37–42.
Peter F. Brown. 1987. The Acoustic-modeling Problem in Automatic Speech Recognition. Ph.D. thesis, Carnegie Mellon University.
Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Research and Development in Information Retrieval, pages 335–336.
Yun-Nung Chen, Wei Yu Wang, and Alexander Rudnicky. 2013. An empirical investigation of sparse log-linear models for improved dialogue act classiï¬cation. In Proc. of ICASSP, pages 8317â8321. | 1510.03055#42 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 43 | Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. âBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proc. of ACL- IJCNLP, pages 445â450, Beijing, China, July.
Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In Proc. of ACL, pages 699–709.
Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1100–1111.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
Xuedong Huang, Alex Acero, Hsiao-Wuen Hon, and Raj Foreword By-Reddy. 2001. Spoken language process- ing: A guide to theory, algorithm, and system develop- ment. Prentice Hall. | 1510.03055#43 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 44 | Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of the 45th Annual Meeting of the Association for Computational Linguistics, pages 177â180, Prague, Czech Republic, June. Association for Computational Linguistics.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23.
Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proc. of ACL-IJCNLP, pages 11–19, Beijing, China.
Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. 2015. Deep captioning with multimodal recurrent neural networks (m-RNN). ICLR. | 1510.03055#44 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 45 | Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, Mirna Adriani, and Satoshi Nakamura. 2014. Developing non-goal dialog system based on exam- ples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355â361. Springer.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Association for Computational Linguistics.
Alice H Oh and Alexander I Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In Proc. of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3, pages 27–32.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL.
Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. 2009. Are we
there yet? systems. Springer. research in commercial spoken dialog In Text, Speech and Dialogue, pages 3â13. | 1510.03055#45 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 46 | there yet? systems. Springer. research in commercial spoken dialog In Text, Speech and Dialogue, pages 3â13.
Adwait Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435–455.
Alan Ritter, Colin Cherry, and William Dolan. 2011. Data-driven response generation in social media. In Proc. of EMNLP, pages 583–593.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of AAAI, February.
Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neu- ral responding machine for short-text conversation. In Proc. of ACL-IJCNLP, pages 1577â1586. | 1510.03055#46 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 47 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversa- tional responses. In Proc. of NAACL-HLT, MayâJune. Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Se- quence to sequence learning with neural networks. In Proc. of NIPS, pages 3104â3112.
Jörg Tiedemann. 2009. News from OPUS – a collection of multilingual parallel corpora with tools and interfaces. In Recent advances in natural language processing, volume 5, pages 237–248.
Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proc. of ICML Deep Learning Workshop.
Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Proc. of NIPS.
Marilyn A Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH. | 1510.03055#47 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 48 | Marilyn A Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH.
William Yang Wang, Ron Artstein, Anton Leuski, and David Traum. 2011. Improving spoken dialogue understanding using phonetic mixture models. In FLAIRS.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proc. of EMNLP, pages 1711–1721, Lisbon, Portugal, September.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In
Proc. of ICML, pages 2048â2057. JMLR Workshop and Conference Proceedings. | 1510.03055#48 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.03055 | 49 | Proc. of ICML, pages 2048â2057. JMLR Workshop and Conference Proceedings.
Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversa- tion model. In NIPS workshop on Machine Learning for Spoken Language Understanding and Interaction. Steve Young, Milica GaËsi´c, Simon Keizer, Franc¸ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dia- logue management. Computer Speech & Language, 24(2):150â174. | 1510.03055#49 | A Diversity-Promoting Objective Function for Neural Conversation Models | Sequence-to-sequence neural network models for generation of conversational
responses tend to generate safe, commonplace responses (e.g., "I don't know")
regardless of the input. We suggest that the traditional objective function,
i.e., the likelihood of output (response) given input (message) is unsuited to
response generation tasks. Instead we propose using Maximum Mutual Information
(MMI) as the objective function in neural models. Experimental results
demonstrate that the proposed MMI models produce more diverse, interesting, and
appropriate responses, yielding substantive gains in BLEU scores on two
conversational datasets and in human evaluations. | http://arxiv.org/pdf/1510.03055 | Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan | cs.CL | In. Proc of NAACL 2016 | null | cs.CL | 20151011 | 20160610 | [] |
1510.02675 | 1 | # Adriaan M. J. Schakel NNLP [email protected]
Benjamin Wilson Adriaan M. J. Schakel Lateral GmbH NNLP [email protected] [email protected]
February 15, 2022
# Abstract
An experimental approach to studying the properties of word embeddings is proposed. Controlled experiments, achieved through modifications of the training corpus, permit the demonstration of direct relations between word properties and word vector direction and length. The approach is demonstrated using the word2vec CBOW model with experiments that independently vary word frequency and word co-occurrence noise. The experiments reveal that word vector length depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. The coefficients of linearity depend upon the word. The special point in feature space, defined by the (artificial) word with pure noise in its co-occurrence distribution, is found to be small but non-zero.
# 1 Introduction | 1510.02675#1 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 2 | # 1 Introduction
Word embeddings, or distributed representations of words, have been the subject of much recent re- search in the natural language processing and machine learning communities, demonstrating state-of- the-art performance on word similarity and word analogy tasks, amongst others. Word embeddings represent words from the vocabulary as dense, real-valued vectors. Instead of one-hot vectors that merely indicate the location of a word in the vocabulary, dense vectors of dimension much smaller than the vocabulary size are constructed such that they carry syntactic and semantic information. Irrespec- tive of the technique chosen, word embeddings are typically derived from word co-occurrences. More speciï¬cally, in a machine-learning setting, word embeddings are typically trained by scanning a short window over all the text in a corpus. This process can be seen as sampling word co-occurrence distribu- tions, where it is recalled that the co-occurrence distribution of a target word w denotes the conditional probability P(wâ²|w) that a word wâ² occurs in its context, i.e., given that w occurred. Most applications of word embeddings explore not the word vectors themselves, but relations between them to solve, for example, similarity and word relation tasks [2]. For these tasks, it was found that using normalised word vectors improves performance. Word vector length is therefore typically ignored. | 1510.02675#2 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 3 | In a previous paper [9], we proposed the use of word vector length as measure of word significance. Using a domain-specific corpus of scientific abstracts, we observed that words that appear only in similar contexts tend to have longer vectors than words of the same frequency that appear in a wide variety of contexts. For a given frequency band, we found meaningless function words clearly separated from proper nouns, each of which typically carries the meaning of a distinctive context in this corpus. In other words, the longer its vector, the more significant a word is. We also observed that word significance is not the only factor determining the length of a word vector, also the frequency with which a word occurs plays an important role.
In this paper, we wish to study in detail to what extent these two factors determine word vectors. For a given corpus, both term frequency and co-occurrence are, of course, fixed and it is not obvious how to unravel these dependencies in an unambiguous, objective manner. In particular, it is difficult to establish the distinctiveness of the contexts in which a word is used. To overcome these problems, we propose to modify the training corpus in a controlled fashion. To this end, we insert new tokens into the corpus with varying frequencies and varying levels of noise in their co-occurrence distributions. By modeling the frequency and co-occurrence distributions of these tokens, or pseudowords1, on existing words in the corpus, we are able to study their effect on word vectors independently of one another. We can thus study a family of pseudowords that all appear in the same context, but with different frequencies, or study a family of pseudowords that all have the same frequency, but appear in a different number of contexts.
Starting from the limited number of contexts in which a word appears in the original corpus, we can increase this number by interspersing the word in arbitrary contexts at random. The word thus loses its significance in a controlled way. Although we present our approach using the word2vec CBOW model, these and related experiments could equally well be carried out for other word embedding methods such as the word2vec skip-gram model [7, 6], GloVe [8], and SENNA [3].
We show that the length of the word vectors generated by the CBOW model depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. In both cases, the coefficient of linearity depends upon the word. If the co-occurrence distribution is fixed, then word vector length increases with word frequency. If, on the other hand, word frequency is held constant, then word vector length decreases as the level of noise in the co-occurrence distribution of the word is increased. In addition, we show that the direction of a word vector varies smoothly with word frequency and the level of co-occurrence noise. When noise is added to the co-occurrence distribution of a word, the corresponding vector smoothly interpolates between the original word vector and a small vector perpendicular to it that represents a word with pure noise in its co-occurrence distribution. Surprisingly, the special point in feature space, obtained by interspersing a pseudoword uniformly at random throughout the corpus with a frequency sufficiently large to sample all contexts, is non-zero.
This paper is structured as follows. Section 2 draws connections to related work, while Section 3 describes the corpus and the CBOW model used in our experiments. Section 4 describes a controlled experiment for varying word frequency while holding the co-occurrence distribution fixed. Section 5, in a complementary fashion, describes a controlled experiment for varying the level of noise in the co-occurrence distribution of a word while holding the word frequency fixed. The final section, Section 6, considers further questions and possible future directions.
# 2 Related work
Our experimental finding that word vector length decreases with co-occurrence noise is related to earlier work by Vecchi, Baroni, and Zamparelli [11], where a relation between vector length and the "semantic deviance" of an adjective-noun composite was studied empirically. In that paper, which is also based on word co-occurrence statistics, the authors study adjective-noun composites. They built a vocabulary from the 8k most frequent nouns and 4k most frequent adjectives in a large general language corpus and added 22k adjective-noun composites. For each item in the vocabulary, they recorded the co-occurrences with the top 10k most frequent content words (nouns, adjectives or verbs), and constructed word embeddings via singular value decomposition of the co-occurrence matrix [5]. The authors considered several models for constructing vectors of unattested adjective-noun composites, the two simplest being adding and component-wise multiplying the adjective and noun vectors. They hypothesized that the length of the vectors thus constructed can be used to distinguish acceptable and semantically deviant adjective-noun composites. Using a few hundred adjective-noun composites selected by humans for evaluation, they found that deviant composites have a shorter vector than acceptable ones, in accordance with their expectation. In contrast to their work, our approach does not require human annotation.
1 We refer to these tokens as pseudowords, since their properties are modeled upon words in the lexicon and because our corpus modification approach is reminiscent of the pseudoword approach for generating labeled data for word sense disambiguation tasks in [4].
Recent theoretical work [1] has approached the problem of explaining the so-called "compositionality" property exhibited by some word embeddings. In that work, unnormalised vectors are used in their model of the word relation task. It is hoped that experimental approaches such as those described here might enable theoretical investigations to describe the role of the word vector length in the word relation tasks.
# 3 Corpus and model
Our training data is built from the Wikipedia data dump from October 2013. To remove the bulk of robot-generated pages from the training data, only pages with at least 20 monthly page views are retained.2 Stubs and disambiguation pages are also removed, leaving 463 thousand pages with a total of 482 million words. Punctuation marks and numbers were removed from the pages and all words were lower-cased. Word frequencies are summarised in Table 1. This base corpus is then modified as described in Sections 4 and 5. For recognisability, the pseudowords inserted into the corpus are upper-cased.
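The normalisation step amounts to a simple filtering pass over the retained pages. The following is a minimal sketch of that step, written by us for illustration only; `load_filtered_pages` is a hypothetical helper standing in for the page-view filtering described above, not part of the paper's released code.

```python
import re

def normalise(text):
    """Lower-case the text and strip punctuation and numbers, keeping only word tokens."""
    text = text.lower()
    # Replace anything that is not a letter or whitespace (punctuation, digits) by a space.
    text = re.sub(r"[^a-z\s]+", " ", text)
    return text.split()

def corpus_sentences(pages):
    """Yield one token list per page, in the format expected by word2vec-style trainers."""
    for page in pages:
        tokens = normalise(page)
        if tokens:
            yield tokens

# Usage with a hypothetical loader:
# sentences = list(corpus_sentences(load_filtered_pages("wikipedia-2013-10")))
```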
# 3.1 Word2vec
Word2vec, a feed-forward neural network with a single hidden layer, learns word vectors from word co-occurrences in an unsupervised manner. Word2vec comes in two versions. In the continuous bag-of-words (CBOW) model, the words appearing around a target word serve as input. That input is projected linearly onto the hidden layer and the network then attempts to predict the target word on output. Training is achieved through back-propagation. The word vectors are encoded in the weights of the first synaptic layer, "syn0". The weights of the second synaptic layer ("syn1neg", in the case of negative sampling) are typically discarded. In the other model, called skip-gram, target and context words swap places, so that the target word now serves as input, while the network attempts to predict the context words on output.
For simplicity, only the word2vec CBOW word embedding with a single set of hyperparameters is considered. Specifically, a CBOW model with a hidden layer of size 100 is trained using negative sampling with 5 negative samples, a window size of 10, a minimum frequency of 128, and 10 passes through the corpus. Sub-sampling was not used so that the influence of word frequency could be more clearly discerned. Similar experimental results were obtained using hierarchical softmax, but these are omitted for succinctness. The relatively high low-frequency cut-off is chosen to ensure that word vectors, in all but degenerate cases, receive a sufficient number of gradient updates to be meaningful. This frequency cut-off results in a vocabulary of 81117 words (only unigrams were considered).
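The experiments used the original word2vec C tool (see the footnotes in Section 3.2). For readers who want to approximate the setup, a roughly equivalent configuration in gensim (version 4 or later assumed) might look as follows; this is our own sketch, not the authors' tooling.

```python
from gensim.models import Word2Vec

def train_cbow(sentences):
    """Train a CBOW model with the hyperparameters reported above.

    `sentences` is an iterable of token lists built from the (modified) corpus.
    """
    model = Word2Vec(
        sentences,
        vector_size=100,   # hidden layer of size 100
        sg=0,              # CBOW (sg=1 would select skip-gram)
        negative=5,        # negative sampling with 5 negative samples
        hs=0,
        window=10,         # symmetric window of 10 words
        min_count=128,     # low-frequency cut-off of 128
        sample=0,          # sub-sampling disabled
        epochs=10,         # 10 passes through the corpus
    )
    # syn0 (the word vectors) and syn1neg (the usually discarded output weights).
    return model.wv.vectors, model.syn1neg
```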
The most recent revision of word2vec was used.3 The source code for performing the experiments is made available on GitHub.4
# 3.2 Replacement procedure
In the experiments detailed below, we modify the corpus in a controlled manner by introducing pseudowords into the corpus via a replacement procedure. For the frequency experiment, the procedure is as follows. Consider a word, say cat. For each occurrence of this word, a sample i, 1 ≤ i ≤ n, is drawn from a truncated geometric distribution, and that occurrence of the word cat is replaced with the pseudoword CAT_i. In this way, the word cat is replaced throughout the corpus by a family of pseudowords with varying frequencies but approximately the same co-occurrence distribution as cat. That is, all these pseudowords are used in roughly the same contexts as the original word.
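The procedure only touches occurrences of the chosen word; everything else in the corpus is left untouched. A minimal sketch of the replacement step (ours, for illustration) is given below; `sample_index` is a generic stand-in for a draw from the truncated geometric distribution of Eq. (1) below.

```python
def replace_with_pseudowords(tokens, word, sample_index):
    """Replace each occurrence of `word` by a pseudoword WORD_i.

    `sample_index` is a zero-argument function returning an index i >= 1,
    e.g. a draw from the truncated geometric distribution of Eq. (1).
    """
    out = []
    for token in tokens:
        if token == word:
            i = sample_index()
            out.append("%s_%d" % (word.upper(), i))
        else:
            out.append(token)
    return out

# Example:
# replace_with_pseudowords("the cat sat on the mat".split(), "cat", lambda: 2)
# -> ['the', 'CAT_2', 'sat', 'on', 'the', 'mat']
```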
2 For further justification and to obtain the dataset, see https://blog.lateral.io/2015/06/the-unknown-perils-of-mining-wikipedia/
3 SVN revision 42, see http://word2vec.googlecode.com/svn/trunk/
4 https://github.com/benjaminwilson/word2vec-norm-experiments
| frequency band | # words | example words |
|---|---|---|
| 2^0 – 2^1 | 979187 | isa220, zhangzhongzhu, yewell, gxgr |
| 2^1 – 2^2 | 416549 | wz132, prabhanjna, fesh, rudick |
| 2^2 – 2^3 | 220573 | gustafsdotter, summerfields, autodata, nagassarium |
| 2^3 – 2^4 | 134870 | futu, abertillery, shikaras, yuppy |
| 2^4 – 2^5 | 90755 | chuva, waffling, wws, andujar |
| 2^5 – 2^6 | 62581 | nagini, sultanah, charrette, wndy |
| 2^6 – 2^7 | 41359 | shew, dl, kidjo, strangeways |
| 2^7 – 2^8 | 27480 | smartly, sydow, beek, falsify |
| 2^8 – 2^9 | 17817 | legionaries, mbius, mannerism, cathars |
| 2^9 – 2^10 | 12291 | bedtime, disabling, jockeys, brougham |
| 2^10 – 2^11 | 8215 | frederic, monmouth, constituting, grabbing |
| 2^11 – 2^12 | 5509 | questionable, bosnian, pigment, coaster |
| 2^12 – 2^13 | 3809 | dismissal, torpedo, coordinates, stays |
| 2^13 – 2^14 | 2474 | liberty, hebrew, survival, muscles |
| 2^14 – 2^15 | 1579 | destruction, trophy, patrick, seats |
| 2^15 – 2^16 | 943 | draft, wood, ireland, reason |
| 2^16 – 2^17 | 495 | brought, move, sometimes, away |
| 2^17 – 2^18 | 221 | february, children, college, see |
| 2^18 – 2^19 | 83 | music, life, following, game |
| 2^19 – 2^20 | 29 | during, time, other, she |
| 2^20 – 2^21 | 17 | has, its, but, an |
| 2^21 – 2^22 | 10 | by, on, it, his |
| 2^22 – 2^23 | 4 | was, is, as, for |
| 2^23 – 2^24 | 3 | in, and, to |
| 2^24 – 2^25 | 1 | of |
| 2^25 – 2^26 | 1 | the |

Table 1: Number of words, by frequency band, as observed in the unmodified corpus.
The geometric distribution is truncated to limit the number of pseudowords inserted into the corpus. For any choice 0 < p < 1 and maximum value n > 0, the truncated geometric distribution is given by the probability density function
P_{p,n}(i) = p^{i−1} (1 − p) / (1 − p^n),   1 ≤ i ≤ n.   (1)
The factor in the denominator, which tends to unity in the limit n → ∞, assures proper normalisation. We have chosen this distribution because the probabilities decay exponentially base p as a function of i. Of course, other distributions might equally well have been chosen for the experiments.
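For concreteness, the probabilities in Eq. (1) can be computed and sampled from directly. The following short sketch is ours (standard library only) and is not taken from the paper's code.

```python
import random

def truncated_geometric_pmf(p, n):
    """Return [P_{p,n}(1), ..., P_{p,n}(n)] as defined in Eq. (1)."""
    return [p ** (i - 1) * (1 - p) / (1 - p ** n) for i in range(1, n + 1)]

def sample_truncated_geometric(p, n, rng=random):
    """Draw an index i in {1, ..., n} with probability P_{p,n}(i)."""
    weights = truncated_geometric_pmf(p, n)
    return rng.choices(range(1, n + 1), weights=weights, k=1)[0]

pmf = truncated_geometric_pmf(0.5, 20)
assert abs(sum(pmf) - 1.0) < 1e-12   # properly normalised
print(pmf[:3])                       # approximately [0.5, 0.25, 0.125] for p = 1/2
```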
For the noise experiment we take, instead of a geometric distribution, the distribution
P_n(i) = 2(n − i) / (n(n − 1)),   1 ≤ i ≤ n.   (2)
We have chosen this distribution for the noise experiment, because it leads to evenly spaced proportions of co-occurrence noise that cover the entire interval [0, 1].
# 4 Varying word frequency
In this first experiment, we investigate the effect of word frequency on the word embedding. Using the replacement procedure, we introduce a small number of families of pseudowords into the corpus. The pseudowords in each family vary in frequency but, replacing a single word, all share a common co-occurrence distribution. This allows us to study the role of word frequency in isolation, everything else being kept equal. We consider two types of pseudowords.
# 4.1 Pseudowords derived from existing words
We choose uniformly at random a small number of words from the unmodified vocabulary for our experiment. In order that the inserted pseudowords do not have too low a frequency, only words which occur at least 10 thousand times are chosen. We also include the high-frequency stopword the for comparison. Table 2 lists the words chosen for this experiment along with their frequencies.
The replacement procedure of Section 3.2 is then performed for each of these words, using a geometric decay rate of p = 1/2 and maximum value n = 20, so that the 1st pseudoword is inserted with a probability of about 0.5, the 2nd with a probability of about 0.25, and so on. This value of p is one of a range of values that ensure that, for each word, multiple pseudowords will be inserted with a frequency sufficient to survive the low-frequency cut-off of 128. A maximum value n = 20 suffices for this choice of p, since 2^(20 + log2 128) exceeds the maximum frequency of any word in the corpus. Figure 1 illustrates the effect of these modifications on a sample text, with a family of pseudowords CAT_i, derived from the word cat. Notice that all occurrences of the word cat have been replaced with the pseudowords CAT_i.
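To see why p = 1/2 and n = 20 are sufficient, one can compute the expected frequency of each pseudoword for a word of given corpus frequency and count how many survive the cut-off of 128. The small check below is our own back-of-the-envelope illustration, not code from the paper.

```python
def expected_pseudoword_frequencies(word_frequency, p=0.5, n=20):
    """Expected count of WORD_i for i = 1..n under the truncated geometric replacement."""
    norm = 1 - p ** n
    return [word_frequency * p ** (i - 1) * (1 - p) / norm for i in range(1, n + 1)]

freqs = expected_pseudoword_frequencies(13404)     # e.g. the word 'protestant' (Table 2)
survivors = [f for f in freqs if f >= 128]
print(len(survivors))   # 6 pseudowords remain above the cut-off of 128
```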
# 4.2 Pseudowords derived from an artificial, meaningless word
Whereas the pseudowords introduced above all replace an existing word that carries a meaning, we now include for comparison a high-frequency, meaningless word. We choose to introduce an artificial, entirely meaningless word VOID into the corpus, rather than choose an existing (stop)word whose meaninglessness is only supposed. To achieve this, we intersperse the word uniformly at random throughout the corpus so that its relative frequency is 0.005. The co-occurrence distribution of VOID thus coincides with the unconditional word distribution. The replacement procedure is then performed for this word, using the same values for p and n as above. Figure 2 shows the effect of these modifications on a sample text, where a higher relative frequency of 0.05 is used instead for illustrative purposes.
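Interspersing a token uniformly at random at a target relative frequency can be done in a single pass over the corpus. A sketch of one way to do it (ours, under the assumption that inserting after each original token with a fixed probability is acceptable) is:

```python
import random

def intersperse_uniformly(tokens, token="VOID", relative_frequency=0.005, rng=random):
    """Insert `token` at random positions so that it makes up roughly
    `relative_frequency` of the resulting text."""
    out = []
    # If we insert with probability q after each of the N original tokens, the inserted
    # tokens form a fraction q / (1 + q) of the output; solve q / (1 + q) = r for q.
    p_insert = relative_frequency / (1 - relative_frequency)
    for t in tokens:
        out.append(t)
        if rng.random() < p_insert:
            out.append(token)
    return out
```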
| word | frequency |
|---|---|
| lawsuit | 11565 |
| mercury | 13059 |
| protestant | 13404 |
| hidden | 15736 |
| squad | 24872 |
| kong | 32674 |
| awarded | 55528 |
| response | 69511 |
| the | 38012326 |

Table 2: Words chosen for the word frequency experiment, along with their frequency in the unmodified corpus.
the domestic CAT_2 was first classified as felis catus the semiferal CAT_1 a mostly outdoor CAT_1 is not owned by any one individual a pedigreed CAT_1 is one whose ancestry is recorded by a CAT_2 fancier organization a purebred CAT_2 is one whose ancestry contains only individuals of the same breed the CAT_4 skull is unusual among mammals in having very large eye sockets another unusual feature is that the CAT_1 cannot produce taurine within groups one CAT_1 is usually dominant over the others
Figure 1: Example sentences modified in the word frequency experiment as per Section 4.1, where the word cat is replaced with pseudowords CAT_i using the truncated geometric distribution (1) with p = 1/2.
VOID_1 the domestic cat was first classified as felis catus the semiferal cat VOID_3 a mostly outdoor cat is not VOID_2 owned by VOID_1 any one individual a pedigreed cat is one whose ancestry is recorded by a cat fancier organization a purebred cat is one whose ancestry contains only individuals of the same breed the cat skull is unusual among VOID_1 mammals in having very large eye sockets another unusual feature is that the cat cannot produce taurine within groups one cat is usually dominant over the others
Figure 2: The same example sentences as in Figure 1 where instead of the word cat now the meaningless word VOID is replaced with pseudowords VOID_i. For illustrative purposes, the meaningless word VOID was here interspersed with a relative frequency of 0.05.
# 4.3 Experimental results
We next present the results of the word frequency experiment. We consider the effect of word frequency on the direction and on the length of word vectors separately.
# 4.3.1 Word frequency and vector direction
Figure 3 shows the cosine similarity of pairs of vectors representing some of the pseudowords used in this experiment. Recall that the cosine similarity measures the extent to which two vectors have the same direction, taking a maximum value of 1 and a minimum value of −1. The number of different pseudowords associated with an experiment word is the number of times that its frequency can be halved and remain above the low-frequency cut-off of 128.
Consider first the vectors for the pseudowords associated to the word the. Notice that the cosine similarity of the vectors for THE_1 and THE_i decreases monotonically with i, while the cosine similarity of the vectors for THE_i and THE_18 increases monotonically with i. Indeed the direction of the vector THE_i changes systematically, interpolating between the directions of the vectors of the highest-frequency pseudoword THE_1 and the lowest-frequency pseudoword THE_18. The same trend is apparent (though over shorter frequency ranges) for all the families of pseudowords other than that for VOID.
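The heatmap in Figure 3 is simply the matrix of pairwise cosine similarities between pseudoword vectors. A sketch of how such a matrix can be computed from any trained model is given below; this is our own illustration, and `wv` is assumed to be a mapping from token to vector (for example, a gensim KeyedVectors object).

```python
import numpy as np

def cosine_similarity_matrix(wv, tokens):
    """Pairwise cosine similarities between the vectors of the given tokens."""
    vectors = np.array([wv[t] for t in tokens])
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    unit = vectors / norms          # normalise each row to unit length
    return unit @ unit.T            # entry (a, b) is cos(angle between token a and b)

# Example: cosine_similarity_matrix(wv, ["THE_1", "THE_2", "THE_18"])
```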
Consider now the vectors for pseudowords derived from the meaningless word VOID. The vectors for VOID_7, . . . , VOID_13 are approximately orthogonal to one another, just as would be expected from randomly drawn vectors in a high dimensional space. As the pseudoword VOID occurs by construction in every context, a much higher number of samples is required to capture its co-occurrence distribution, and thereby to learn its vector (the same is true, but to a lesser extent, for the stopword the). We conclude that the vectors corresponding to the lower frequency pseudowords VOID_7, . . . , VOID_13 have not been trained on a sufficient number of samples to establish their proper direction. These vectors are excluded from further analysis. The vectors for VOID_1, . . . , VOID_6, on the other hand, exhibit the smooth change in vector direction with word frequency described in the previous paragraph.
In recent work on the evaluation of word embeddings, Schnabel et al. [10] trained logistic regression models to predict whether a word was rare or frequent given only the direction of its word vector. For various word embedding methods, the prediction accuracy was measured as a function of the threshold for word rarity. It was found in the case of word2vec CBOW that word vector direction could be used to distinguish very rare words from all other words. Figure 3 is consistent with this finding, as it is apparent that word vector direction does change gradually with frequency. Schnabel et al. claim further that word vector direction must encode word frequency directly, and not indirectly via semantic information. Figure 3, considered for any particular experiment word in isolation (e.g. SQUAD), demonstrates that the variance of word vector direction with word frequency is indeed independent of co-occurrence (semantic) information, and thereby provides further evidence for this claim.
# 4.3.2 Word frequency and vector length
We next consider the effect of frequency on word vector length. Throughout, we measure vector length using the Euclidean norm. Figure 4 shows this relation for individual words, both for the word vectors, represented by the weights of the first synaptic layer, syn0, in the word2vec neural network, and for the vectors represented by the weights of the second synaptic layer, syn1neg. We include the latter, which are typically ignored, for completeness. Each line corresponds to a single word, and the points on each line indicate the frequency and vector length of the pseudowords derived from that word. For example, the six points on the line corresponding to the word protestant are labeled, from right to left, by the pseudowords PROTESTANT_1, PROTESTANT_2, . . . , PROTESTANT_6. Again, the number of points on the line is determined by the frequency of the original word. For example, the frequency of the word protestant can be halved at most 6 times so that the frequency of the last pseudoword is still above the low-frequency cut-off. Because all the points on a line share the same co-occurrence distribution, the left panel in Figure 4 demonstrates conclusively that length does indeed depend on frequency directly.
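The quantity plotted in Figure 4 is just the Euclidean norm of each pseudoword's syn0 (or syn1neg) vector against its corpus frequency. A minimal sketch of how those points can be collected is shown below; it is ours, and `wv` and `frequency` are assumed lookups from token to vector and from token to corpus count.

```python
import numpy as np

def length_vs_frequency(wv, frequency, word, n):
    """Return (frequency, vector length) pairs for the pseudowords WORD_1..WORD_n."""
    points = []
    for i in range(1, n + 1):
        token = "%s_%d" % (word.upper(), i)
        if token in wv and token in frequency:
            points.append((frequency[token], float(np.linalg.norm(wv[token]))))
    return points
```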
Figure 3: Heatmap of the cosine similarity of the vectors representing some of the pseudowords used in the word frequency experiment. The words other than the and VOID were chosen randomly.
Moreover, this relation is seen to be approximately linear for each word considered. Notice also that the relative positions of the lengths of the word vectors associated with the experiment words are roughly independent of the frequency band, i.e., the plotted lines rarely cross.
Observe that the lengths of the vectors representing the meaningless pseudowords VOID_i are approximately constant (about 2.5). Since we already found the direction to be also constant, it is sensible to speak of the word vector of VOID irrespective of its frequency. In particular, the vector of the pseudoword VOID_1 may be taken as an approximation.
# 5 Varying co-occurrence noise
This second experiment is complementary to the first. Whereas in the first experiment we studied the effect of word frequency on word vectors for fixed co-occurrence, we here study the effect of co-occurrence noise when the frequency is fixed. As before, we do so in a controlled manner.
# 5.1 Generating noise
We take the noise distribution to be the (observed) unconditional word distribution. Noise can then be added to the co-occurrence distribution of a word by simply interspersing occurrences of that word uniformly at random throughout the corpus. A word that is consistently used in a distinctive context in the unmodified corpus thus appears in the modified corpus also in completely unrelated contexts. As in Section 4, we choose a small number of words from the unmodified corpus for this experiment. Table 3 lists the words chosen, along with their frequencies in the corpus.
Figure 4: Vector length vs. frequency for pseudowords derived from a few words chosen at random. For each word, pseudowords of varying frequency but with the co-occurrence distribution of that word were inserted into the corpus, as described in Section 4. The vectors are obtained from the first synaptic layer, syn0, of the word2vec neural network. The vectors obtained from the second layer, syn1neg, are included for completeness. Legend entries are ordered by vector length of the left-most data point in the syn0 plot, descending.
| word | frequency |
|---|---|
| dying | 10693 |
| bridges | 12193 |
| appointment | 12546 |
| aids | 13487 |
| boss | 14105 |
| removal | 15505 |
| jobs | 21065 |
| community | 115802 |

Table 3: Words chosen for the co-occurrence noise experiment, along with the word frequencies in the unmodified corpus.
For each of these words, the replacement procedure of Section 3.2 is performed using the distribution (2) with n = 7. For every replacement pseudoword (e.g. CAT_i), additional occurrences of this pseudoword are interspersed uniformly at random throughout the corpus, such that the final frequency of the replacement pseudoword is 2/n times that of the original word cat. For example, if the original word cat occurred 1000 times, then after the replacement procedure, CAT_2 occurs approximately 238 times, so a further (approximately) 2/7 × 1000 − 238 ≈ 48 random occurrences of CAT_2 are interspersed throughout the corpus. In this way, the word cat is removed from the corpus and replaced with a family of pseudowords CAT_i, 1 ≤ i ≤ 7. These pseudowords all have the same frequency, but their co-occurrence distributions, while based on that of cat, have an increasing amount of noise. Specifically, the proportion of noise for the ith pseudoword is
1 − (n/2) P_n(i) = (i − 1)/(n − 1),   or   0, 1/(n − 1), 2/(n − 1), . . . , 1   for i = 1, 2, . . . , n,
which is evenly distributed. The first pseudoword contains no noise at all, while the last pseudoword stands for pure noise. The particular choice of n assures a reasonable coverage of the interval [0, 1]. Other parameter values (or indeed other distributions) could, of course, have been used equally well.
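For n = 7 these proportions are exactly 0, 1/6, 2/6, . . . , 1. A short check of this bookkeeping (ours, for illustration):

```python
n = 7
P = [2 * (n - i) / (n * (n - 1)) for i in range(1, n + 1)]    # Eq. (2)
noise = [1 - n / 2 * P[i - 1] for i in range(1, n + 1)]       # proportion of noise per pseudoword
print(noise)   # [0.0, 0.1666..., 0.3333..., 0.5, 0.6666..., 0.8333..., 1.0]
```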
Figure 5 illustrates the effect of this modification in the case where the only word chosen is cat. The original text in this case concerned both cats and dogs. Notice that the word cat has been replaced entirely in the cats section by CAT_i and, moreover, that these same pseudowords appear also in the dogs section. These occurrences (and additionally, with probability, some occurrences from the cats section) constitute noise.
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 33 | # 5.2 Experimental results
Figure 6 shows the cosine similarity of pairs of vectors representing some of the pseudowords used in this experiment. Remember that the first pseudoword (i = 1) in a family is without noise in its co-occurrence distribution, while the last one (i = n, with n = 7) stands for pure noise and has therefore no relation anymore with the word it derives from. The figure demonstrates that the vectors within a family only moderately deviate from the original direction defined by the first pseudoword (i = 1) when noise is added to the co-occurrence distribution. For 1 < i < 7, the deviation typically increases with the proportion of noise. The vector of the last pseudoword (i = n), associated with pure noise, is seen within each of the families to point in a completely different direction, more or less perpendicular to the original one. To understand this interpolating behavior, recall from Section 4.3 that the vector for the entirely meaningless word VOID is small but non-zero. Since the noise distribution coincides with the co-occurrence distribution of VOID, the vectors for the experiment words must tend to the word vector for VOID as the proportion of noise in their co-occurrence distributions approaches
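The quantities plotted in Figure 6 are plain cosine similarities, which can be recomputed from any trained embedding; in the sketch below, `vectors` is a stand-in mapping from token to numpy array (for example, loaded from the trained word2vec model):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity of two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def family_similarities(vectors, word, n=7):
    """Similarity of each pseudoword WORD_i to the noise-free WORD_1."""
    base = vectors[f"{word.upper()}_1"]
    return [cosine(base, vectors[f"{word.upper()}_{i}"]) for i in range(1, n + 1)]
```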
10 | 1510.02675#33 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 34 | the domestic CAT 2 was first classified as felis catus the semiferal CAT 3 a mostly outdoor CAT 4 is not CAT 2 owned by any one individual a pedigreed CAT 4 is one whose ancestry is recorded by a CAT 1 fancier organization CAT 6 a purebred CAT 3 is one whose ancestry contains only individuals of the same breed the CAT 1 skull is unusual among mammals in having very CAT 4 large eye sockets another unusual feature is that the CAT 4 cannot produce taurine within groups one CAT 2 is usually dominant over the others ... the domestic dog canis lupus familiaris is a domesticated canid which has been selectively CAT 5 bred dogs perform many roles for people such as hunting herding and pulling loads CAT 7 in domestic dogs sexual maturity begins to happen around age six to twelve months this is CAT 6 the time at CAT 3 which female dogs will have their first estrous cycle some dog breeds have acquired traits through selective breeding that interfere with reproduction | 1510.02675#34 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 35 | Figure 5: Example sentences modified for the co-occurrence noise experiment, where the word cat was chosen for replacement. The pseudowords were generated using the distribution (2) with n = 7.
1. This convergence to a common point is only indistinctly apparent in Figure 6, as the frequency of the experiment pseudowords is insufficient to sample the full variety of the contexts of VOID, i.e., all contexts (see Section 4.3.1).
The left panel in Figure 7 reveals that vector length varies more or less linearly with the proportion of noise in the co-occurrence distribution of the word. This figure motivates an interpretation of vector length, within a sufficiently narrow frequency band, as a measure of the absence of co-occurrence noise, or put differently, of the extent to which a word carries the meaning of a distinctive context.
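The linear trend reported here can be checked directly by regressing vector length on the noise proportion; again `vectors` is an assumed token-to-array lookup, and the least-squares fit is only a convenient summary of the plot:

```python
import numpy as np

def length_vs_noise(vectors, word, n=7):
    """Vector length of each pseudoword WORD_i against its noise proportion."""
    noise = np.array([(i - 1) / (n - 1) for i in range(1, n + 1)])
    length = np.array([np.linalg.norm(vectors[f"{word.upper()}_{i}"])
                       for i in range(1, n + 1)])
    slope, intercept = np.polyfit(noise, length, 1)   # rough linear summary
    return noise, length, slope, intercept
```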
# 6 Discussion | 1510.02675#35 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 36 | # 6 Discussion
Our principal contribution has been to demonstrate that controlled experiments can be used to gain insight into a word embedding. These experiments can be carried out for any word embedding (or indeed language model), for they are achieved via modification of the training corpus only. They do not require knowledge of the model implementation. It would naturally be of interest to perform these experiments for word embeddings other than word2vec CBOW, such as skipgrams and GloVe, as well as for different hyperparameter settings.
More elaborate experiments could be carried out. For instance, by introducing pseudowords into the corpus that mix, with varying proportions, the co-occurrence distributions of two words, the path between the word vectors in the feature space could be studied. The co-occurrence noise experiment described here would be a special case of such an experiment where one of the two words was VOID.
Questions pertaining to word2vec in particular arise naturally from the results of the experiments. Figures 4 and 7, for example, demonstrate that the word vectors obtained from the first synaptic layer, syn0, have very different properties from those that could be obtained from the second layer, syn1neg. These differences warrant further investigation.
11 | 1510.02675#36 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 37 | [Figure residue: heatmap titled "Cosine similarity of word vectors"; its axes list the pseudoword families JOBS_1-7, BOSS_1-7, BRIDGES_1-7 and DYING_1-7.] | 1510.02675#37 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 38 | Figure 6: Heatmap of the cosine similarity of the vectors representing some of the pseudowords used in the co-occurrence noise experiment (the words were chosen at random). The largely red blocks demonstrate that for i < 7 the direction of the vectors only moderately changes when noise is added to the co-occurrence distribution. The vector of the pseudowords associated with pure noise (i = 7) is seen to be almost perpendicular to the word vectors they derive from.
[Figure residue: colour-bar ticks from the Figure 6 heatmap, axis ticks from the two panels of Figure 7 (syn0 and syn1neg, x-axis "Proportion of occurrences from noise distribution"), the rotated y-axis label "vector length", and the legend entries appointment, jobs, community, removal, bridges, aids, boss, dying.] | 1510.02675#38 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 39 | [Figure 7 residue: rotated y-axis label "vector length" and legend entries appointment, jobs, community, removal, bridges, aids, boss, dying.]
Figure 7: Vector length vs. proportion of occurrences from the noise distribution for words chosen for this experiment. For each word, pseudowords of equal frequency but with increasing proportion of co-occurrence noise were inserted into the corpus, as described in Section 5. The word vectors are obtained from the first synaptic layer, syn0. The second layer, syn1neg, is included for completeness. Legend entries are ordered by vector length of the left-most data point in the syn0 plot, descending.
The co-occurrence distribution of VOID is the unconditional frequency distribution, and in this sense pure background noise. Thus the word vector of VOID is a special point in the feature space. Figure 4 shows that this point is not at the origin of the feature space, i.e., is not the zero vector. The origin, however, is implicitly the point of reference in word2vec word similarity tasks. This raises the question of whether improved performance on similarity tasks could be achieved by transforming the feature space or modifying the model such that the representation of pure noise, i.e., the vector for VOID, is at the origin of the transformed feature space.
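The re-centring suggested here is straightforward to prototype: subtract the vector learned for the pure-noise token before measuring similarity. The token name VOID follows the text; whether this actually improves similarity benchmarks is the open question being raised, not something the sketch settles:

```python
import numpy as np

def recentred_cosine(vectors, w1, w2, origin_token="VOID"):
    """Cosine similarity after shifting the space so the pure-noise vector is at the origin."""
    v0 = vectors[origin_token]
    u, v = vectors[w1] - v0, vectors[w2] - v0
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```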
# 7 Acknowledgments
The authors thank Tobias Schnabel for helpful discussions.
# References | 1510.02675#39 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 40 | # 7 Acknowledgments
The authors thank Tobias Schnabel for helpful discussions.
# References
[1] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Random walks on context spaces: Towards an explanation of the mysteries of semantic word embeddings. CoRR, abs/1502.03520, 2015.
[2] Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238–247, Baltimore, Maryland, June 2014. Association for Computational Linguistics.
[3] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November 2011.
[4] William A. Gale, Kenneth W. Church, and David Yarowsky. Work on statistical methods for word sense disambiguation. In Working Notes of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language, volume 54, page 60, 1992. | 1510.02675#40 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 41 | [5] Thomas K. Landauer and Susan T. Dumais. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211–240, 1997.
[6] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
[7] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013.
[8] Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12:1532–1543, 2014.
[9] Adriaan M. J. Schakel and Benjamin J. Wilson. Measuring word significance using distributed representations of words, 2015. | 1510.02675#41 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.02675 | 42 | [9] Adriaan M. J. Schakel and Benjamin J. Wilson. Measuring word significance using distributed representations of words, 2015.
[10] Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298–307, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
(Linear) Maps of the Impossible: Capturing Semantic Anomalies in Distributional Space. In Proceedings of the Workshop on Distributional Semantics and Compositionality, pages 1–9, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
15 | 1510.02675#42 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is
proposed. Controlled experiments, achieved through modifications of the
training corpus, permit the demonstration of direct relations between word
properties and word vector direction and length. The approach is demonstrated
using the word2vec CBOW model with experiments that independently vary word
frequency and word co-occurrence noise. The experiments reveal that word vector
length depends more or less linearly on both word frequency and the level of
noise in the co-occurrence distribution of the word. The coefficients of
linearity depend upon the word. The special point in feature space, defined by
the (artificial) word with pure noise in its co-occurrence distribution, is
found to be small but non-zero. | http://arxiv.org/pdf/1510.02675 | Benjamin J. Wilson, Adriaan M. J. Schakel | cs.CL, 68T50, I.2.7 | Chagelog: Rerun experiment with subsampling turned off;
re-interpreted results in light of Schnabel et al. (2015). 15 pages | null | cs.CL | 20151009 | 20151214 | [
{
"id": "1510.02675"
}
] |
1510.01378 | 1 | Ying Zhang Université de Montréal
Yoshua Bengio † Université de Montréal
# Abstract
Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feedforward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we show that applying batch normalization to the hidden-to-hidden transitions of our RNNs doesn't help the training procedure. We also show that when applied to the input-to-hidden transitions, batch normalization can lead to a faster convergence of the training criterion but doesn't seem to improve the generalization performance on both our language modelling and speech recognition tasks. All in all, applying batch normalization to RNNs turns out to be more challenging than applying it to feedforward networks, but certain variants of it can still be beneficial.
# 1 Introduction | 1510.01378#1 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks . In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial. | http://arxiv.org/pdf/1510.01378 | César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | stat.ML, cs.LG, cs.NE | null | null | stat.ML | 20151005 | 20151005 | [
{
"id": "1502.03167"
},
{
"id": "1502.00512"
},
{
"id": "1507.00210"
}
] |
1510.01378 | 2 | # 1 Introduction
Recurrent Neural Networks (RNNs) have received renewed interest due to their recent success in various domains, including speech recognition [2], machine translation [3, 4] and language modelling [5]. The so-called Long Short-Term Memory (LSTM) [6] type RNN has been particularly successful. Often, it seems beneficial to train deep architectures in which multiple RNNs are stacked on top of each other [2]. Unfortunately, the training cost for large datasets and deep architectures of stacked RNNs can be prohibitively high, often times an order of magnitude greater than simpler models like n-grams [7]. Because of this, recent work has explored methods for parallelizing RNNs across multiple graphics cards (GPUs). In [3], an LSTM type RNN was distributed layer-wise across multiple GPUs and in [8] a bidirectional RNN was distributed across time. However, due to the sequential nature of RNNs, it is difficult to achieve linear speed ups relative to the number of GPUs. | 1510.01378#2 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks . In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial. | http://arxiv.org/pdf/1510.01378 | César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | stat.ML, cs.LG, cs.NE | null | null | stat.ML | 20151005 | 20151005 | [
{
"id": "1502.03167"
},
{
"id": "1502.00512"
},
{
"id": "1507.00210"
}
] |
1510.01378 | 3 | Another way to reduce training times is through a better conditioned optimization procedure. Standardizing or whitening of input data has long been known to improve the convergence of gradient-based optimization methods [9]. Extending this idea to multi-layered networks suggests that normalizing or whitening intermediate representations can similarly improve convergence. However, applying these transforms would be extremely costly. In [1], batch normalization was used to standardize intermediate representations by approximating the population statistics using sample-based approximations obtained from small subsets of the data, often called mini-batches, that are also used to obtain gradient approximations for stochastic gradient descent, the most commonly used optimization method for neural network training. It has also been shown that convergence can be improved even more by whitening intermediate representations instead of simply standardizing
# ∗Equal contribution † CIFAR Senior Fellow
1
them [10]. These methods reduced the training time of Convolutional Neural Networks (CNNs) by an order of magnitude and additionally provided a regularization effect, leading to state-of-the-art results in object recognition on the ImageNet dataset [11]. In this paper, we explore how to leverage normalization in RNNs and show that training time can be reduced.
# 2 Batch Normalization | 1510.01378#3 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks . In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial. | http://arxiv.org/pdf/1510.01378 | César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | stat.ML, cs.LG, cs.NE | null | null | stat.ML | 20151005 | 20151005 | [
{
"id": "1502.03167"
},
{
"id": "1502.00512"
},
{
"id": "1507.00210"
}
] |
1510.01378 | 4 | # 2 Batch Normalization
In optimization, feature standardization or whitening is a common procedure that has been shown to reduce convergence rates [9]. Extending the idea to deep neural networks, one can think of an arbitrary layer as receiving samples from a distribution that is shaped by the layer below. This distribution changes during the course of training, making any layer but the first responsible not only for learning a good representation but also for adapting to a changing input distribution. This distribution variation is termed Internal Covariate Shift, and reducing it is hypothesized to help the training procedure [1].
To reduce this internal covariate shift, we could whiten each layer of the network. However, this often turns out to be too computationally demanding. Batch normalization [1] approximates the whitening by standardizing the intermediate representations using the statistics of the current mini-batch. Given a mini-batch x, we can calculate the sample mean and sample variance of each feature k along the mini-batch axis
x̄_k = (1/m) Σ_{i=1}^{m} x_{i,k},   (1)
σ²_k = (1/m) Σ_{i=1}^{m} (x_{i,k} − x̄_k)²,   (2)
where m is the size of the mini-batch. Using these statistics, we can standardize each feature as follows
x̂_k = (x_k − x̄_k) / √(σ²_k + ε),   (3)
where ε is a small positive constant to improve numerical stability. | 1510.01378#4 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks . In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial. | http://arxiv.org/pdf/1510.01378 | César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | stat.ML, cs.LG, cs.NE | null | null | stat.ML | 20151005 | 20151005 | [
{
"id": "1502.03167"
},
{
"id": "1502.00512"
},
{
"id": "1507.00210"
}
] |
1510.01378 | 5 | where m is the size of the mini-batch. Using these statistics, we can standardize each feature as follows
x̂_k = (x_k − x̄_k) / √(σ²_k + ε),   (3)
where ε is a small positive constant to improve numerical stability.
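A minimal numpy sketch of equations (1)–(3), standardizing each feature over the mini-batch axis (the array shape convention, batch first, is an assumption of the sketch):

```python
import numpy as np

def standardize(x, eps=1e-5):
    """x has shape (batch, features); returns the standardized features of eq. (3)."""
    mean = x.mean(axis=0)                   # eq. (1): per-feature sample mean
    var = x.var(axis=0)                     # eq. (2): per-feature sample variance
    return (x - mean) / np.sqrt(var + eps)  # eq. (3)
```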
However, standardizing the intermediate activations reduces the representational power of the layer. To account for this, batch normalization introduces additional learnable parameters γ and β, which respectively scale and shift the data, leading to a layer of the form
BN(x_k) = γ_k x̂_k + β_k.   (4)
By setting γ_k to σ_k and β_k to x̄_k, the network can recover the original layer representation. So, for a standard feedforward layer in a neural network
y = φ(Wx + b),   (5)
where W is the weights matrix, b is the bias vector, x is the input of the layer and φ is an arbitrary activation function, batch normalization is applied as follows
y = φ(BN(Wx)).   (6)
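Taken together, equations (4)–(6) amount to the following sketch of a batch-normalized feedforward layer (training-time statistics only; tanh is an arbitrary stand-in for the activation φ):

```python
import numpy as np

def bn_layer(x, W, gamma, beta, eps=1e-5, phi=np.tanh):
    """y = phi(BN(Wx)) for a mini-batch x of shape (batch, in_dim); no bias term."""
    a = x @ W                                # pre-activations, shape (batch, out_dim)
    mean, var = a.mean(axis=0), a.var(axis=0)
    a_hat = (a - mean) / np.sqrt(var + eps)  # standardize, eq. (3)
    return phi(gamma * a_hat + beta)         # scale and shift (4), activation (6)
```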
Note that the bias vector has been removed, since its effect is cancelled by the standardization. Since the normalization is now part of the network, the back propagation procedure needs to be adapted to propagate gradients through the mean and variance computations as well. | 1510.01378#5 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks . In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial. | http://arxiv.org/pdf/1510.01378 | César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | stat.ML, cs.LG, cs.NE | null | null | stat.ML | 20151005 | 20151005 | [
{
"id": "1502.03167"
},
{
"id": "1502.00512"
},
{
"id": "1507.00210"
}
] |
1510.01378 | 6 | At test time, we can't use the statistics of the mini-batch. Instead, we can estimate them by either forwarding several training mini-batches through the network and averaging their statistics, or by maintaining a running average calculated over each mini-batch seen during training.
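One common way to keep such estimates is an exponential running average updated after every mini-batch; the momentum value below is an assumption of the sketch, not taken from the paper:

```python
import numpy as np

class RunningStats:
    """Running mean/variance estimates for use at test time."""
    def __init__(self, num_features, momentum=0.9):
        self.mean = np.zeros(num_features)
        self.var = np.ones(num_features)
        self.momentum = momentum

    def update(self, batch):
        # Blend the statistics of the current mini-batch into the running estimates.
        m = self.momentum
        self.mean = m * self.mean + (1 - m) * batch.mean(axis=0)
        self.var = m * self.var + (1 - m) * batch.var(axis=0)
```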
# 3 Recurrent Neural Networks
Recurrent Neural Networks (RNNs) extend Neural Networks to sequential data. Given an input sequence of vectors (x1, . . . , xT ), they produce a sequence of hidden states (h1, . . . , hT ), which are computed at time step t as follows
h_t = φ(W_h h_{t−1} + W_x x_t),   (7)
where W_h is the recurrent weight matrix, W_x is the input-to-hidden weight matrix, and φ is an arbitrary activation function.
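Equation (7), unrolled over an input sequence, looks as follows in a small numpy sketch (tanh again stands in for the activation φ):

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, phi=np.tanh):
    """One step of the recurrence h_t = phi(W_h h_{t-1} + W_x x_t)."""
    return phi(W_h @ h_prev + W_x @ x_t)

def run_rnn(x_seq, W_h, W_x, h0):
    """Apply the recurrence along a sequence of input vectors."""
    h, states = h0, []
    for x_t in x_seq:
        h = rnn_step(h, x_t, W_h, W_x)
        states.append(h)
    return states
```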
If we have access to the whole input sequence, we can use information not only from the past time steps, but also from the future ones, allowing for bidirectional RNNs [12]:
→h_t = φ(→W_h →h_{t−1} + →W_x x_t),
←h_t = φ(←W_h ←h_{t+1} + ←W_x x_t),
h_t = [→h_t : ←h_t], | 1510.01378#6 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that
have the potential to learn long-term dependencies. However, they are
computationally expensive to train and difficult to parallelize. Recent work
has shown that normalizing intermediate representations of neural networks can
significantly improve convergence rates in feedforward neural networks . In
particular, batch normalization, which uses mini-batch statistics to
standardize features, was shown to significantly reduce training time. In this
paper, we show that applying batch normalization to the hidden-to-hidden
transitions of our RNNs doesn't help the training procedure. We also show that
when applied to the input-to-hidden transitions, batch normalization can lead
to a faster convergence of the training criterion but doesn't seem to improve
the generalization performance on both our language modelling and speech
recognition tasks. All in all, applying batch normalization to RNNs turns out
to be more challenging than applying it to feedforward networks, but certain
variants of it can still be beneficial. | http://arxiv.org/pdf/1510.01378 | César Laurent, Gabriel Pereyra, Philémon Brakel, Ying Zhang, Yoshua Bengio | stat.ML, cs.LG, cs.NE | null | null | stat.ML | 20151005 | 20151005 | [
{
"id": "1502.03167"
},
{
"id": "1502.00512"
},
{
"id": "1507.00210"
}
] |