id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
1511.02274#34 | Stacked Attention Networks for Image Question Answering | The visualization of the attention layers further illustrates the process by which the SAN focuses its attention on the relevant visual clues that lead to the answer of the question layer-by-layer. # References [1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. Vqa: Visual question answering. arXiv preprint arXiv:1505.00468, 2015. 1, 2, 5, 6, 7 [2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. 1 [3] J. Berant and P. Liang. | 1511.02274#33 | 1511.02274#35 | 1511.02274 | [
"1506.00333"
] |
1511.02274#35 | Stacked Attention Networks for Image Question Answering | Semantic parsing via paraphrasing. In Proceedings of ACL, volume 7, page 92, 2014. 1 [4] A. Bordes, S. Chopra, and J. Weston. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676, 2014. 1 [5] X. Chen and C. L. Zitnick. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654, 2014. 2 [6] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. Platt, et al. From captions to visual concepts and back. arXiv preprint arXiv:1411.4952, 2014. 2, 3, 5 [7] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are you talking to a machine? Dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612, 2015. 1, 2 [8] A. | 1511.02274#34 | 1511.02274#36 | 1511.02274 | [
"1506.00333"
] |
1511.02274#36 | Stacked Attention Networks for Image Question Answering | Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. 5 [9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. 1 [10] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. arXiv preprint arXiv:1412.2306, 2014. 2 [11] Y. Kim. | 1511.02274#35 | 1511.02274#37 | 1511.02274 | [
"1506.00333"
] |
1511.02274#37 | Stacked Attention Networks for Image Question Answering | Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014. 3 [12] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014. 2 [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. 2 [14] A. Kumar, O. Irsoy, J. Su, J. Bradbury, R. English, B. Pierce, P. Ondruska, I. Gulrajani, and R. Socher. | 1511.02274#36 | 1511.02274#38 | 1511.02274 | [
"1506.00333"
] |
1511.02274#38 | Stacked Attention Networks for Image Question Answering | Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015. 1 [15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 1 [16] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014, pages 740–755. Springer, 2014. 5 [17] L. Ma, Z. Lu, and H. Li. | 1511.02274#37 | 1511.02274#39 | 1511.02274 | [
"1506.00333"
] |
1511.02274#39 | Stacked Attention Networks for Image Question Answering | Learning to answer questions from image using convolutional neural network. arXiv preprint arXiv:1506.00333, 2015. 2, 5, 6 [18] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in Neural Information Processing Systems, pages 1682–1690, 2014. 1, 2, 4, 5, 6 [19] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. arXiv preprint arXiv:1505.01121, 2015. 1, 2, 5, 6 | 1511.02274#38 | 1511.02274#40 | 1511.02274 | [
"1506.00333"
] |
1511.02274#40 | Stacked Attention Networks for Image Question Answering | [20] J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014. 2 [21] M. Ren, R. Kiros, and R. Zemel. Exploring models and data for image question answering. arXiv preprint arXiv:1505.02074, 2015. 1, 2, 5, 6 [22] Y. Shen, X. He, J. Gao, L. Deng, and G. | 1511.02274#39 | 1511.02274#41 | 1511.02274 | [
"1506.00333"
] |
1511.02274#41 | Stacked Attention Networks for Image Question Answering | Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 101–110. ACM, 2014. 3 [23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 2 [24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfi | 1511.02274#40 | 1511.02274#42 | 1511.02274 | [
"1506.00333"
] |
1511.02274#42 | Stacked Attention Networks for Image Question Answering | tting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014. 5 [25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112, 2014. 3 [26] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 2 [27] O. Vinyals, A. Toshev, S. Bengio, and D. | 1511.02274#41 | 1511.02274#43 | 1511.02274 | [
"1506.00333"
] |
1511.02274#43 | Stacked Attention Networks for Image Question Answering | Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014. 2 [28] J. Weston, S. Chopra, and A. Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. 1 [29] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics, pages 133–138. Association for Computational Linguistics, 1994. 5 [30] K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. | 1511.02274#42 | 1511.02274#44 | 1511.02274 | [
"1506.00333"
] |
1511.02274#44 | Stacked Attention Networks for Image Question Answering | Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015. 1, 2 [31] W.-t. Yih, M.-W. Chang, X. He, and J. Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the Joint Conference of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing of the AFNLP, 2015. 1 [32] W.-t. Yih, X. He, and C. Meek. | 1511.02274#43 | 1511.02274#45 | 1511.02274 | [
"1506.00333"
] |
1511.02274#45 | Stacked Attention Networks for Image Question Answering | Semantic parsing for single-relation question answering. In Proceedings of ACL, 2014. 1 [Figure 7 examples, listed as question / ground-truth answer / model prediction:] What take the nap with a blanket? Answer: dogs Prediction: dogs. What is the color of the cake? Answer: brown Prediction: white. What stands between two blue lounge chairs on an empty beach? Answer: unbrella Prediction: unbrella. What is the color of the motorcycle? Answer: blue Prediction: blue. What is sitting in the luggage bag? Answer: cat Prediction: cat. What is the color of the design? Answer: red Prediction: red. What is the color of the trucks? Answer: green Prediction: green. What is in front of the clear sky? Answer: tower Prediction: tower. What is next to the desk with a computer and laptop? Answer: chair Prediction: chair. What is the color of the surface? Answer: white Prediction: white. What are flying against the cloudy sky? | 1511.02274#44 | 1511.02274#46 | 1511.02274 | [
"1506.00333"
] |
1511.02274#46 | Stacked Attention Networks for Image Question Answering | Answer: kites Prediction: kites. Where do the young adult make us standing? Answer: room Prediction: room. Figure 7: More examples | 1511.02274#45 | 1511.02274 | [
"1506.00333"
] |
|
1510.03009#0 | Neural Networks with Few Multiplications | arXiv:1510.03009v3 [cs.LG] 26 Feb 2016. Published as a conference paper at ICLR 2016 # NEURAL NETWORKS WITH FEW MULTIPLICATIONS Zhouhan Lin Université de Montréal Canada [email protected] Matthieu Courbariaux Université de Montréal Canada [email protected] Roland Memisevic Université de Montréal Canada [email protected] Yoshua Bengio Université de Montréal Canada # ABSTRACT For most deep learning algorithms training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: First we stochastically binarize weights to convert multiplications involved in computing hidden states to sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across 3 popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks. | 1510.03009#1 | 1510.03009 | [
"1503.03535"
] |
|
1510.03009#1 | Neural Networks with Few Multiplications | # INTRODUCTION Training deep neural networks has long been computationally demanding and time consuming. For some state-of-the-art architectures, it can take weeks to get models trained (Krizhevsky et al., 2012). Another problem is that the demand for memory can be huge. For example, many common models in speech recognition or machine translation need 12 Gigabytes or more of storage (Gulcehre et al., 2015). To deal with these issues it is common to train deep neural networks by resorting to GPU or CPU clusters and to well designed parallelization strategies (Le, 2013). Most of the computation performed in training a neural network is floating point multiplications. In this paper, we focus on eliminating most of these multiplications to reduce computation. Based on our previous work (Courbariaux et al., 2015), which eliminates multiplications in computing hidden representations by binarizing weights, our method deals with both hidden state computations and backward weight updates. Our approach has 2 components. In the forward pass, weights are stochastically binarized using an approach we call binary connect or ternary connect, and for back-propagation of errors, we propose a new approach which we call quantized back propagation that converts multiplications into bit-shifts. # 2 RELATED WORK Several approaches have been proposed in the past to simplify computations in neural networks. Some of them try to restrict weight values to be an integer power of two, thus to reduce all the multiplications to be binary shifts (Kwan & Tang, 1993; Marchesi et al., 1993). In this way, multiplications are eliminated in both training and testing time. The disadvantage is that model performance can be severely reduced, and convergence of training can no longer be guaranteed. | 1510.03009#0 | 1510.03009#2 | 1510.03009 | [
"1503.03535"
] |
1510.03009#2 | Neural Networks with Few Multiplications | 1The code for these approaches is available online at https://github.com/hantek/BinaryConnect Kim & Paris (2015) introduces a completely Boolean network, which simplifies the test time computation at an acceptable performance hit. The approach still requires a real-valued, full precision training phase, however, so the benefits of reducing computations do not apply to training. Similarly, Machado et al. (2015) manage to get acceptable accuracy on sparse representation classifi | 1510.03009#1 | 1510.03009#3 | 1510.03009 | [
"1503.03535"
] |
1510.03009#3 | Neural Networks with Few Multiplications | cation by replacing all floating-point multiplications by integer shifts. Bit-stream networks (Burge et al., 1999) also provides a way of binarizing neural network connections, by substituting weight connections with logical gates. Similar to that, Cheng et al. (2015) proves deep neural networks with binary weights can be trained to distinguish between multiple classes with expectation back propagation. There are some other techniques, which focus on reducing the training complexity. For instance, instead of reducing the precision of weights, Simard & Graf (1994) quantizes states, learning rates, and gradients to powers of two. This approach manages to eliminate multiplications with negligible performance reduction. | 1510.03009#2 | 1510.03009#4 | 1510.03009 | [
"1503.03535"
] |
1510.03009#4 | Neural Networks with Few Multiplications | # 3 BINARY AND TERNARY CONNECT 3.1 BINARY CONNECT REVISITED In Courbariaux et al. (2015), we introduced a weight binarization technique which removes multiplications in the forward pass. We summarize this approach in this subsection, and introduce an extension to it in the next. Consider a neural network layer with N input and M output units. The forward computation is y = h(Wx + b) where W and b are weights and biases, respectively, h is the activation function, and x and y are the layer's inputs and outputs. If we choose ReLU as h, there will be no multiplications in computing the activation function, thus all multiplications reside in the matrix product Wx. For each input vector x, NM floating point multiplications are needed. Binary connect eliminates these multiplications by stochastically sampling weights to be -1 or 1. Full precision weights w̄ are kept in memory as reference, and each time when y is needed, we sample a stochastic weight matrix W according to w̄. For each element of the sampled matrix W, the probability of getting a 1 is proportional to how "close" its corresponding entry in w̄ is to 1, i.e., | 1510.03009#3 | 1510.03009#5 | 1510.03009 | [
"1503.03535"
] |
1510.03009#5 | Neural Networks with Few Multiplications | $P(W_{ij} = 1) = \frac{\bar{w}_{ij} + 1}{2}; \quad P(W_{ij} = -1) = 1 - P(W_{ij} = 1)$ (1) It is necessary to add some edge constraints to w̄. To ensure that $P(W_{ij} = 1)$ lies in a reasonable range, values in w̄ are forced to be a real value in the interval [-1, 1]. If during the updates any of its values grows beyond that interval, we set it to be its corresponding edge value -1 or 1. | 1510.03009#4 | 1510.03009#6 | 1510.03009 | [
"1503.03535"
] |
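To make the sampling step of Eq. (1) concrete, here is a minimal NumPy sketch of stochastic binarization. The function name and the use of NumPy are our own illustration (the paper's implementation is in Theano), but the sampling rule follows Eq. (1): each entry is +1 with probability (w̄ + 1)/2, so the sampled matrix equals w̄ in expectation.

```python
import numpy as np

def binary_connect_sample(w_bar, rng=np.random):
    """Stochastically binarize a real-valued weight matrix w_bar in [-1, 1].

    Each sampled entry is +1 with probability (w_bar + 1) / 2 and -1 otherwise,
    so E[W] = w_bar.
    """
    p_one = (w_bar + 1.0) / 2.0                               # P(W_ij = +1), Eq. (1)
    return np.where(rng.random_sample(w_bar.shape) < p_one, 1.0, -1.0)

# Forward pass with binarized weights: the product W @ x now involves only
# sign changes and additions (no floating-point multiplications in principle).
w_bar = np.clip(0.5 * np.random.randn(4, 3), -1.0, 1.0)      # full-precision reference weights
x = np.random.randn(3)
W = binary_connect_sample(w_bar)
y = np.maximum(W @ x, 0.0)                                   # ReLU activation
```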
1510.03009#6 | Neural Networks with Few Multiplications | That way floating point multiplications become sign changes. A remaining question concerns the use of multiplications in the random number generator involved in the sampling process. Sampling an integer has to be faster than multiplication for the algorithm to be worth it. To be precise, in most cases we are doing mini-batch learning and the sampling process is performed only once for the whole mini-batch. Normally the batch size B varies up to several hundreds. So, as long as one sampling process is significantly faster than B multiplications, it is still worth it. Fortunately, efficiently generating random numbers has been studied in Jeavons et al. (1994); van Daalen et al. (1993). Also, it is possible to get random numbers according to real random processes, like CPU temperatures, etc. We are not going into the details of random number generation as this is not the focus of this paper. | 1510.03009#5 | 1510.03009#7 | 1510.03009 | [
"1503.03535"
] |
1510.03009#7 | Neural Networks with Few Multiplications | 3.2 TERNARY CONNECT The binary connect introduced in the former subsection allows weights to be -1 or 1. However, in a trained neural network, it is common to observe that many learned weights are zero or close to zero. Although the stochastic sampling process would allow the mean value of sampled weights to be zero, this suggests that it may be beneficial to explicitly allow weights to be zero. To allow weights to be zero, some adjustments are needed for Eq. 1. We split the interval of [-1, 1], within which the full precision weight value w̄_ij lies, into two sub-intervals: [-1, 0] and (0, 1]. If a | 1510.03009#6 | 1510.03009#8 | 1510.03009 | [
"1503.03535"
] |
1510.03009#8 | Neural Networks with Few Multiplications | weight value w̄_ij drops into one of them, we sample W_ij to be one of the two edge values of that interval, according to their distance from w̄_ij, i.e., if w̄_ij > 0: $P(W_{ij} = 1) = \bar{w}_{ij}; \quad P(W_{ij} = 0) = 1 - \bar{w}_{ij}$ (2) and if w̄_ij <= 0: $P(W_{ij} = -1) = -\bar{w}_{ij}; \quad P(W_{ij} = 0) = 1 + \bar{w}_{ij}$ (3) | 1510.03009#7 | 1510.03009#9 | 1510.03009 | [
"1503.03535"
] |
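A minimal sketch of the ternary sampling rule of Eqs. (2)-(3), in the same illustrative NumPy style as above (again, not the authors' Theano code): an entry is pushed to sign(w̄) with probability |w̄| and to 0 otherwise, so the sampled weight still equals w̄ in expectation.

```python
import numpy as np

def ternary_connect_sample(w_bar, rng=np.random):
    """Stochastically ternarize weights in [-1, 1] to {-1, 0, +1}, Eqs. (2)-(3).

    For w_bar > 0:  P(W = 1) = w_bar,   P(W = 0) = 1 - w_bar.
    For w_bar <= 0: P(W = -1) = -w_bar, P(W = 0) = 1 + w_bar.
    Either way, E[W] = w_bar, and exact zeros stay zero.
    """
    u = rng.random_sample(w_bar.shape)
    return np.where(u < np.abs(w_bar), np.sign(w_bar), 0.0)
```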
1510.03009#9 | Neural Networks with Few Multiplications | Like binary connect, ternary connect also eliminates all multiplications in the forward pass. # 4 QUANTIZED BACK PROPAGATION In the former section we described how multiplications can be eliminated from the forward pass. In this section, we propose a way to eliminate multiplications from the backward pass. Suppose the i-th layer of the network has N input and M output units, and consider an error signal δ propagating downward from its output. The updates for weights and biases would be the outer product of the layer's input and the error signal: $\Delta W = \eta \left[ \delta \circ h'(Wx + b) \right] x^{\top}$ (4) $\Delta b = \eta \left[ \delta \circ h'(Wx + b) \right]$ (5) | 1510.03009#8 | 1510.03009#10 | 1510.03009 | [
"1503.03535"
] |
1510.03009#10 | Neural Networks with Few Multiplications | where η is the learning rate, and x the input to the layer. The operator ∘ stands for element-wise multiplication. While propagating through the layers, the error signal δ needs to be updated, too. Its update taking into account the next layer below takes the form: $\delta = \left[ W^{\top} \delta \right] \circ h'(Wx + b)$ (6) There are 3 terms that appear repeatedly in Eqs. 4 to 6: δ, h'(Wx + b) and x. The latter two terms introduce matrix outer products. To eliminate multiplications, we can quantize one of them to be an integer power of 2, so that multiplications involving that term become binary shifts. The expression δ ∘ h'(Wx + b) contains downflowing gradients, which are largely determined by the cost function and network parameters, thus it is hard to bound its values. However, bounding the values is essential for quantization because we need to supply a fixed number of bits for each sampled value, and if that value varies too much, we will need too many bits for the exponent. This, in turn, will result in the need for more bits to store the sampled value and unnecessarily increase the required amount of computation. While the gradient term is not a good choice for quantization, x is a better choice, because it is the hidden representation at each layer, and we know roughly the distribution of each layer' | 1510.03009#9 | 1510.03009#11 | 1510.03009 | [
"1503.03535"
] |
1510.03009#11 | Neural Networks with Few Multiplications | s activation. Our approach is therefore to eliminate multiplications in Eq. 4 by quantizing each entry in x to an integer power of 2. That way the outer product in Eq. 4 becomes a series of bit shifts. Experimentally, we find that allowing a maximum of 3 to 4 bits of shift is sufficient to make the network work well. This means that 3 bits are already enough to quantize x. As the float32 format has 24 bits of mantissa, shifting (to the left or right) by 3 to 4 bits is completely tolerable. | 1510.03009#10 | 1510.03009#12 | 1510.03009 | [
"1503.03535"
] |
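A sketch of the quantization step described here, again as illustrative NumPy rather than the paper's code: each activation is rounded to a signed power of two with the exponent clipped to a small range, so multiplying by it can in principle be realized as a bit shift plus a sign change. How "left" and "right" shifts map onto the sign of the exponent is our reading and an assumption, as the text does not spell it out.

```python
import numpy as np

def quantize_power_of_two(x, max_shift_left=4, max_shift_right=3, eps=1e-12):
    """Round each entry of x to sign(x) * 2**k, with k clipped to a small range.

    With the exponent bounded, a multiplication by the quantized value needs at
    most `max_shift_left` or `max_shift_right` bit shifts; zeros stay zero.
    """
    sign = np.sign(x)
    exponent = np.round(np.log2(np.abs(x) + eps))
    exponent = np.clip(exponent, -max_shift_right, max_shift_left)
    return sign * 2.0 ** exponent
```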
1510.03009#12 | Neural Networks with Few Multiplications | We refer to this approach of back propagation as "quantized back propagation." If we choose ReLU as the activation function, and since we are reusing the (Wx + b) that was computed during the forward pass, computing the term h'(Wx + b) involves no additional sampling or multiplications. In addition, quantized back propagation eliminates the multiplications in the outer product in Eq. 4. The only places where multiplications remain are the element-wise products. In Eq. 5, multiplying by η and δ requires 2 × M multiplications, while in Eq. 4 we can reuse the result of Eq. 5. To update δ would need another M multiplications, thus 3 × M multiplications | 1510.03009#11 | 1510.03009#13 | 1510.03009 | [
"1503.03535"
] |
1510.03009#13 | Neural Networks with Few Multiplications | are needed for all computations from Eqs. 4 through 6. Pseudo code in Algorithm 1 outlines how quantized back propagation is conducted. Algorithm 1 Quantized Back Propagation (QBP). C is the cost function. binarize(W) and clip(W) stand for the binarize and clip methods. L is the number of layers. Require: a deep model with parameters W, b at each layer; input data x, its corresponding targets y, and learning rate η.
1: procedure QBP(model, x, y, η)
2:   1. Forward propagation:
3:   for each layer i in range(1, L) do
4:     Wb ← binarize(W)
5:     Compute activation ai according to its previous layer output ai−1, Wb and b.
6:   2. Backward propagation:
7:   Initialize output layer's error signal δ = ∂C/∂aL
8:   for each layer i in range(L, 1) do
9:     Compute ΔW and Δb according to Eqs. 4 and 5.
10:    Update W: W ← clip(W − ΔW)
11:    Update b: b ← b − Δb
12:    Compute ∂C/∂a_{k−1} | 1510.03009#12 | 1510.03009#14 | 1510.03009 | [
"1503.03535"
] |
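To tie Algorithm 1 to Eqs. (4)-(6), here is a minimal NumPy sketch of one layer's update. It is an illustration only: it reuses the hypothetical helpers `binary_connect_sample` and `quantize_power_of_two` sketched above, and it simply multiplies by the quantized values rather than emulating actual shift hardware.

```python
import numpy as np

def qbp_layer_update(W_bar, b, x, delta, lr=0.01):
    """One quantized-backprop update for a layer y = relu(W x + b).

    `delta` is the error signal at this layer's output. Returns the updated
    (clipped) full-precision weights, the updated biases, and the error signal
    to pass to the previous layer.
    """
    Wb = binary_connect_sample(W_bar)             # forward uses binarized weights
    pre = Wb @ x + b
    relu_grad = (pre > 0).astype(pre.dtype)       # h'(Wx + b) for ReLU

    grad_pre = delta * relu_grad                  # delta ∘ h'(Wx + b)
    xq = quantize_power_of_two(x)                 # powers of two -> shifts in hardware
    dW = lr * np.outer(grad_pre, xq)              # Eq. (4): outer product over quantized x
    db = lr * grad_pre                            # Eq. (5)

    W_new = np.clip(W_bar - dW, -1.0, 1.0)        # keep full-precision weights in [-1, 1]
    b_new = b - db
    delta_prev = Wb.T @ grad_pre                  # Eq. (6): the previous layer applies its own h'
    return W_new, b_new, delta_prev
```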
1510.03009#14 | Neural Networks with Few Multiplications | by updating δ according to Eq. 6. Like in the forward pass, most of the multiplications are used in the weight updates. Compared with standard back propagation, which would need 2MN + 3M multiplications at least, the amount of multiplications left is negligible in quantized back propagation. Our experiments in Section 5 show that this way of dramatically decreasing multiplications does not necessarily entail a loss in performance. # 5 EXPERIMENTS We tried our approach on both fully connected networks and convolutional networks. Our implementation uses Theano (Bastien et al., 2012). We experimented with 3 datasets: MNIST, CIFAR10, and SVHN. In the following subsection we show the performance that these multiplier-light neural networks can achieve. In the subsequent subsections we study some of their properties, such as convergence and robustness, in more detail. 5.1 GENERAL PERFORMANCE We tested different variations of our approach, and compare the results with Courbariaux et al. (2015) and full precision training (Table 1). All models are trained with stochastic gradient descent (SGD) without momentum. We use batch normalization for all the models to accelerate learning. At training time, binary (ternary) connect and quantized back propagation are used, while at test time, we use the learned full resolution weights for the forward propagation. For each dataset, all hyper-parameters are set to the same values for the different methods, except that the learning rate is adapted independently for each one. | 1510.03009#13 | 1510.03009#15 | 1510.03009 | [
"1503.03535"
] |
1510.03009#15 | Neural Networks with Few Multiplications | Table 1: Performances across different datasets
                                       MNIST    CIFAR10   SVHN
Full precision                         1.33%    15.64%    2.85%
Binary connect                         1.23%    12.04%    2.47%
Binary connect + Quantized backprop    1.29%    12.08%    2.48%
Ternary connect + Quantized backprop   1.15%    12.01%    2.42%
# 5.1.1 MNIST The MNIST dataset (LeCun et al., 1998) has 50000 images for training and 10000 for testing. All images are grey value images of size 28 × 28 pixels, falling into 10 classes corresponding to the 10 digits. The model we use is a fully connected network with 4 layers: 784-1024-1024-1024-10. At the last layer we use the hinge loss as the cost. The training set is separated into two parts, one of which is the training set with 40000 images and the other the validation set with 10000 images. Training is conducted in a mini-batch way, with a batch size of 200. With ternary connect, quantized backprop, and batch normalization, we reach an error rate of 1.15%. This result is better than full precision training (also with batch normalization), which yields an error rate of 1.33%. Without batch normalization, the error rates rise to 1.48% and 1.67%, respectively. We also explored the performance if we sample those weights during test time. With ternary connect at test time, the same model (the one that reaches 1.15% error rate) yields 1.49% error rate, which is still fairly acceptable. Our experimental results show that despite removing most multiplications, our approach yields a comparable (in fact, even slightly higher) performance than full precision training. The performance improvement is likely due to the regularization effect implied by the stochastic sampling. Taking this network as a concrete example, the actual amount of multiplications in each case can be estimated precisely. The number of multiplications in the forward pass is obvious, and for the backward pass Section 4 has already given an estimation. Now we estimate the amount of multiplications incurred by batch normalization. | 1510.03009#14 | 1510.03009#16 | 1510.03009 | [
"1503.03535"
] |
1510.03009#16 | Neural Networks with Few Multiplications | Suppose we have a pre-hidden representation h with mini-batch size B on a layer which has M output units (thus h should have shape B × M), then batch normalization can be formalized as γ (h − mean(h)) / std(h) + β. One needs to compute mean(h) over a mini-batch, which takes M multiplications, and BM + 2M multiplications to compute the standard deviation std(h). The fraction takes BM divisions, which should be counted as the same amount of multiplications. Multiplying that by the γ parameter adds another BM multiplications. So each batch normalization layer takes an extra 3BM + 3M multiplications in the forward pass. The backward pass takes roughly twice as many multiplications in addition, if we use SGD. This amount of multiplications is the same whether we use binarization or not. Bearing this in mind, the total amount of multiplications invoked in a mini-batch update is shown in Table 2. The last column lists the ratio of multiplications left, after applying ternary connect and quantized back propagation.
Table 2: Estimated number of multiplications in MNIST net
              Full precision    Ternary connect + Quantized backprop    ratio
without BN    1.7480 × 10^9     1.8492 × 10^6                           0.001058
with BN       1.7535 × 10^9     7.4245 × 10^6                           0.004234 | 1510.03009#15 | 1510.03009#17 | 1510.03009 | [
"1503.03535"
] |
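As a back-of-envelope check, the counts in Table 2 can be reproduced from the layer sizes and batch size given above. The grouping of terms below (three weight-matrix passes plus the element-wise terms of Eqs. 4-6, and batch normalization on every non-input layer) is our reconstruction of the bookkeeping, not an accounting spelled out in the paper, but it lands on the same figures.

```python
# Fully connected MNIST net and mini-batch size from Section 5.1.1.
layers = [784, 1024, 1024, 1024, 10]
B = 200

S = sum(n * m for n, m in zip(layers[:-1], layers[1:]))   # sum of N*M over all layers
M = sum(layers[1:])                                       # sum of output sizes

full_no_bn = 3 * B * S + 3 * B * M        # matrix products (forward + backward) + element-wise terms
qbp_no_bn = 3 * B * M                     # only the element-wise products remain
bn_extra = 3 * (3 * B * M + 3 * M)        # batch norm forward cost, tripled to include backward

print(f"full precision, no BN : {full_no_bn:.4e}")             # ~1.7480e9
print(f"full precision, BN    : {full_no_bn + bn_extra:.4e}")  # ~1.7535e9
print(f"ternary + QBP, no BN  : {qbp_no_bn:.4e}")              # ~1.8492e6
print(f"ternary + QBP, BN     : {qbp_no_bn + bn_extra:.4e}")   # ~7.4245e6
```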
1510.03009#17 | Neural Networks with Few Multiplications | # 5.1.2 CIFAR10 CIFAR10 (Krizhevsky & Hinton, 2009) contains images of size 32 × 32 RGB pixels. Like for MNIST, we split the dataset into 40000, 10000, and 10000 training-, validation-, and test-cases, respectively. We apply our approach in a convolutional network for this dataset. The network has 6 convolution/pooling layers, 1 fully connected layer and 1 classification layer. | 1510.03009#16 | 1510.03009#18 | 1510.03009 | [
"1503.03535"
] |
1510.03009#18 | Neural Networks with Few Multiplications | We use the hinge loss for training, with a batch size of 100. We also tried using ternary connect at test time. On the model trained by ternary connect and quantized back propagation, it yields 13.54% error rate. Similar to what we observed in the fully connected network, binary (ternary) connect and quantized back propagation yield a slightly higher performance than ordinary SGD. # 5.1.3 SVHN The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) contains RGB images of house numbers. | 1510.03009#17 | 1510.03009#19 | 1510.03009 | [
"1503.03535"
] |
1510.03009#19 | Neural Networks with Few Multiplications | It contains more than 600,000 images in its extended training set, and roughly 26,000 images in its test set. We remove 6,000 images from the training set for validation. We use 7 layers of convolution/pooling, 1 fully connected layer, and 1 classification layer. Batch size is also | 1510.03009#18 | 1510.03009#20 | 1510.03009 | [
"1503.03535"
] |
1510.03009#20 | Neural Networks with Few Multiplications | set to be 100. The performance we get is consistent with our results on CIFAR10. Extending the ternary connect mechanism to its test time yields a 2.99% error rate on this dataset. Again, it improves over ordinary SGD by using binary (ternary) connect and quantized back propagation. # 5.2 CONVERGENCE Taking the convolutional networks on CIFAR10 as a test-bed, we now study the learning behaviour in more detail. Figure 1 shows the performance of the model in terms of test set errors during training. | 1510.03009#19 | 1510.03009#21 | 1510.03009 | [
"1503.03535"
] |
1510.03009#21 | Neural Networks with Few Multiplications | The figure shows that binarization makes the network converge slower than ordinary SGD, but yields a better optimum after the algorithm converges. Compared with binary connect (red line), adding quantization in the error propagation (yellow line) doesn't hurt the model accuracy at all. Moreover, having ternary connect combined with quantized back propagation (green line) surpasses all the other three approaches. [Figure 1 legend: Full Resolution; Binary Connect; Binary Connect + Quantized BP; Ternary Connect + Quantized BP. Axes: error rate vs. epochs.] Figure 1: Test set error rate at each epoch for ordinary back propagation, binary connect, binary connect with quantized back propagation, and ternary connect with quantized back propagation. Vertical axis is represented in logarithmic scale. | 1510.03009#20 | 1510.03009#22 | 1510.03009 | [
"1503.03535"
] |
1510.03009#22 | Neural Networks with Few Multiplications | 5.3 THE EFFECT OF BIT CLIPPING In Section 4 we mentioned that quantization will be limited by the number of bits we use. The maximum number of bits to shift determines the amount of memory needed, but it also determines in what range a single weight update can vary. Figure 2 shows the model performance as a function of the maximum allowed bit shifts. These experiments are conducted on the MNIST dataset, with the aforementioned fully connected model. For each case of bit clipping, we repeat the experiment 10 times with different initial random instantiations. | 1510.03009#21 | 1510.03009#23 | 1510.03009 | [
"1503.03535"
] |
1510.03009#23 | Neural Networks with Few Multiplications | The figure shows that the approach is not very sensitive to the number of bits used. The maximum allowed shift in the figure varies from 2 bits to 10 bits, and the performance remains roughly the same. Even by restricting bit shifts to 2, the model can still learn successfully. The fact that the performance is not very sensitive to the maximum of allowed bit shifts suggests that we do not need to redefine the number of bits used for quantizing x for different tasks, which would be an important practical advantage. The x to be quantized is not necessarily distributed symmetrically around 2. For example, Figure 3 shows the distribution of x at each layer in the middle of training. The maximum amount of shift to the left does not need to be the same as that on the right. A more efficient way is to use different values for the maximum left shift and the maximum right shift. Bearing that in mind, we set it to 3 bits maximum to the right and 4 bits to the left. [Figure 2 axes: error rate (%) vs. maximum allowed shifts.] Figure 2: Model performance as a function of the maximum bit shifts allowed in quantized back propagation. The dark blue line indicates mean error rate over 10 independent runs, while light blue lines indicate their corresponding maximum and minimum error rates. [Figure 3 axes: counts vs. log2 x, one panel per layer.] Figure 3: Histogram of representations at each layer while training a fully connected network for MNIST. The figure represents a snap-shot in the middle of training. Each subfigure, from bottom up, represents the histogram of hidden states from the first layer to the last layer. The horizontal axes stand for the exponent of the layers' representations, i.e., log2 x. | 1510.03009#22 | 1510.03009#24 | 1510.03009 | [
"1503.03535"
] |
1510.03009#24 | Neural Networks with Few Multiplications | # 6 CONCLUSION AND FUTURE WORK We proposed a way to eliminate most of the floating point multiplications used during training a feedforward neural network. This could make it possible to dramatically accelerate the training of neural networks by using dedicated hardware implementations. A somewhat surprising fact is that instead of damaging prediction accuracy the approach tends to improve it, which is probably due to several facts. First is the regularization effect that the stochastic sampling process entails. Noise injection brought by sampling the weight values can be viewed as a regularizer, and that improves the model generalization. The second fact is low precision weight values. Basically, the generalization error bounds for neural nets depend on the weights precision. Low precision prevents the optimizer from finding solutions that require a lot of precision, which correspond to very thin (high curvature) critical points, and these minima are more likely to correspond to overfitted solutions than broad minima (there are more functions that are compatible with such solutions, corresponding to a smaller description length and thus better generalization). Similarly, | 1510.03009#23 | 1510.03009#25 | 1510.03009 | [
"1503.03535"
] |
1510.03009#25 | Neural Networks with Few Multiplications | Neelakantan et al. (2015) adds noise into gradients, which makes the optimizer prefer large-basin areas and forces it to find broad minima. It also lowers the training loss and improves generalization. Directions for future work include exploring actual implementations of this approach (for example, using FPGA), seeking more efficient ways of binarization, and the extension to recurrent neural networks. # ACKNOWLEDGMENTS The authors would like to thank the developers of Theano (Bastien et al., 2012). We acknowledge the support of the following agencies for research funding and computing support: Samsung, NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. # REFERENCES Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Bergstra, James, Goodfellow, Ian J., Bergeron, Arnaud, Bouchard, Nicolas, and Bengio, Yoshua. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012. | 1510.03009#24 | 1510.03009#26 | 1510.03009 | [
"1503.03535"
] |
1510.03009#26 | Neural Networks with Few Multiplications | Burge, Peter S., van Daalen, Max R., Rising, Barry J. P., and Shawe-Taylor, John S. Stochastic bit-stream neural networks. In Maass, Wolfgang and Bishop, Christopher M. (eds.), Pulsed Neural Networks, pp. 337–352. MIT Press, Cambridge, MA, USA, 1999. ISBN 0-626-13350-4. URL http://dl.acm.org/citation.cfm?id=296533.296552. | 1510.03009#25 | 1510.03009#27 | 1510.03009 | [
"1503.03535"
] |
1510.03009#27 | Neural Networks with Few Multiplications | Cheng, Zhiyong, Soudry, Daniel, Mao, Zexi, and Lan, Zhenzhong. Training binary multilayer neural networks for image classification using expectation backpropagation. arXiv preprint arXiv:1503.03562, 2015. Courbariaux, Matthieu, Bengio, Yoshua, and David, Jean-Pierre. Binaryconnect: Training deep neural networks with binary weights during propagations. arXiv preprint arXiv:1511.00363, 2015. | 1510.03009#26 | 1510.03009#28 | 1510.03009 | [
"1503.03535"
] |
1510.03009#28 | Neural Networks with Few Multiplications | Gulcehre, Caglar, Firat, Orhan, Xu, Kelvin, Cho, Kyunghyun, Barrault, Loic, Lin, Huei-Chi, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535, 2015. Jeavons, Peter, Cohen, David A., and Shawe-Taylor, John. Generating binary sequences for stochastic computing. Information Theory, IEEE Transactions on, 40(3):716–720, 1994. | 1510.03009#27 | 1510.03009#29 | 1510.03009 | [
"1503.03535"
] |
1510.03009#29 | Neural Networks with Few Multiplications | Kim, Minje and Paris, Smaragdis. Bitwise neural networks. In Proceedings of The 31st International Conference on Machine Learning, pp. 0–0, 2015. Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. | 1510.03009#28 | 1510.03009#30 | 1510.03009 | [
"1503.03535"
] |
1510.03009#30 | Neural Networks with Few Multiplications | Kwan, Hon Keung and Tang, CZ. Multiplierless multilayer feedforward neural network design suitable for continuous input-output mapping. Electronics Letters, 29(14):1259–1260, 1993. Le, Quoc V. Building high-level features using large scale unsupervised learning. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8595–8598. IEEE, 2013. LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. | 1510.03009#29 | 1510.03009#31 | 1510.03009 | [
"1503.03535"
] |
1510.03009#31 | Neural Networks with Few Multiplications | Machado, Emerson Lopes, Miosso, Cristiano Jacques, von Borries, Ricardo, Coutinho, Murilo, Berger, Pedro de Azevedo, Marques, Thiago, and Jacobi, Ricardo Pezzuol. Computational cost reduction in learned transform classifications. arXiv preprint arXiv:1504.06779, 2015. Marchesi, Michele, Orlandi, Gianni, Piazza, Francesco, and Uncini, Aurelio. | 1510.03009#30 | 1510.03009#32 | 1510.03009 | [
"1503.03535"
] |
1510.03009#32 | Neural Networks with Few Multiplications | Fast neural networks without multipliers. Neural Networks, IEEE Transactions on, 4(1):53–62, 1993. Neelakantan, Arvind, Vilnis, Luke, Le, Quoc V, Sutskever, Ilya, Kaiser, Lukasz, Kurach, Karol, and Martens, James. Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807, 2015. Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, pp. 5. | 1510.03009#31 | 1510.03009#33 | 1510.03009 | [
"1503.03535"
] |
1510.03009#33 | Neural Networks with Few Multiplications | Granada, Spain, 2011. Simard, Patrice Y and Graf, Hans Peter. Backpropagation without multiplication. In Advances in Neural Information Processing Systems, pp. 232–239, 1994. van Daalen, Max, Jeavons, Pete, Shawe-Taylor, John, and Cohen, Dave. Device for generating binary sequences for stochastic computing. Electronics Letters, 29(1):80–81, 1993. | 1510.03009#32 | 1510.03009 | [
"1503.03535"
] |
|
1510.02675#0 | Controlled Experiments for Word Embeddings | arXiv:1510.02675v2 [cs.CL] 14 Dec 2015 # Controlled Experiments for Word Embeddings # Benjamin Wilson Lateral GmbH [email protected] Adriaan M. J. Schakel NNLP [email protected] February 15, 2022 # Abstract | 1510.02675#1 | 1510.02675 | [
"1510.02675"
] |
|
1510.02675#1 | Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is proposed. Controlled experiments, achieved through modiï¬ cations of the training corpus, permit the demonstration of direct relations between word properties and word vector direc- tion and length. The approach is demonstrated using the word2vec CBOW model with experiments that independently vary word frequency and word co-occurrence noise. The experiments reveal that word vector length depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. The coefï¬ cients of linearity depend upon the word. The special point in feature space, deï¬ ned by the (artiï¬ cial) word with pure noise in its co-occurrence distribution, is found to be small but non-zero. # 1 Introduction Word embeddings, or distributed representations of words, have been the subject of much recent re- search in the natural language processing and machine learning communities, demonstrating state-of- the-art performance on word similarity and word analogy tasks, amongst others. Word embeddings represent words from the vocabulary as dense, real-valued vectors. Instead of one-hot vectors that merely indicate the location of a word in the vocabulary, dense vectors of dimension much smaller than the vocabulary size are constructed such that they carry syntactic and semantic information. Irrespec- tive of the technique chosen, word embeddings are typically derived from word co-occurrences. | 1510.02675#0 | 1510.02675#2 | 1510.02675 | [
"1510.02675"
] |
1510.02675#2 | Controlled Experiments for Word Embeddings | More specifically, in a machine-learning setting, word embeddings are typically trained by scanning a short window over all the text in a corpus. This process can be seen as sampling word co-occurrence distributions, where it is recalled that the co-occurrence distribution of a target word w denotes the conditional probability P(w′|w) that a word w′ occurs in its context, i.e., given that w occurred. Most applications of word embeddings explore not the word vectors themselves, but relations between them to solve, for example, similarity and word relation tasks [2]. For these tasks, it was found that using normalised word vectors improves performance. Word vector length is therefore typically ignored. In a previous paper [9], we proposed the use of word vector length as a measure of word significance. Using a domain-specific corpus of scientific abstracts, we observed that words that appear only in similar contexts tend to have longer vectors than words of the same frequency that appear in a wide variety of contexts. For a given frequency band, we found meaningless function words clearly separated from proper nouns, each of which typically carries the meaning of a distinctive context in this corpus. In other words, the longer its vector, the more significant a word is. We also observed that word significance is not the only factor determining the length of a word vector, also the frequency with which a word occurs plays an important role. | 1510.02675#1 | 1510.02675#3 | 1510.02675 | [
"1510.02675"
] |
1510.02675#3 | Controlled Experiments for Word Embeddings | In this paper, we wish to study in detail to what extent these two factors determine word vectors. For a given corpus, both term frequency and co-occurrence are, of course, fixed and it is not obvious how to unravel these dependencies in an unambiguous, objective manner. In particular, it is difficult to establish the distinctiveness of the contexts in which a word is used. To overcome these problems, we propose to modify the training corpus in a controlled fashion. To this end, we insert new tokens into the corpus with varying frequencies and varying levels of noise in their co-occurrence distributions. By modeling the frequency and co-occurrence distributions of these tokens, or pseudowords1, on existing words in the corpus, we are able to study their effect on word vectors independently of one another. We can thus study a family of pseudowords that all appear in the same context, but with different frequencies, or study a family of pseudowords that all have the same frequency, but appear in a different number of contexts. Starting from the limited number of contexts in which a word appears in the original corpus, we can increase this number by interspersing the word in arbitrary contexts at random. The word thus loses its significance in a controlled way. | 1510.02675#2 | 1510.02675#4 | 1510.02675 | [
"1510.02675"
] |
1510.02675#4 | Controlled Experiments for Word Embeddings | 1 In this paper, we wish to study in detail to what extent these two factors determine word vectors. For a given corpus, both term frequency and co-occurrence are, of course, ï¬ xed and it is not obvious how to unravel these dependencies in an unambiguous, objective manner. In particular, it is difï¬ cult to establish the distinctiveness of the contexts in which a word is used. To overcome these problems, we propose to modify the training corpus in a controlled fashion. To this end, we insert new tokens into the corpus with varying frequencies and varying levels of noise in their co-occurrence distributions. By modeling the frequency and co-occurrence distributions of these tokens, or pseudowords1, on existing words in the corpus, we are able to study their effect on word vectors independently of one another. We can thus study a family of pseudowords that all appear in the same context, but with different frequencies, or study a family of pseudowords that all have the same frequency, but appear in a different number of contexts. Starting from the limited number of contexts in which a word appears in the original corpus, we can increase this number by interspersing the word in arbitrary contexts at random. The word thus looses its signiï¬ cance in a controlled way. | 1510.02675#3 | 1510.02675#5 | 1510.02675 | [
"1510.02675"
] |
1510.02675#5 | Controlled Experiments for Word Embeddings | Although we present our approach using the word2vec CBOW model, these and related experiments could equally well be carried out for other word embedding methods such as the word2vec skip-gram model [7, 6], GloVe [8], and SENNA [3]. We show that the length of the word vectors generated by the CBOW model depends more or less linearly on both word frequency and level of noise in the co-occurrence distribution of the word. In both cases, the coefï¬ cient of linearity depends upon the word. If the co-occurrence distribution is ï¬ xed, then word vector length increases with word frequency. If, on the other hand, word frequency is held constant, then word vector length decreases as the level of noise in the co-occurrence distribution of the word is increased. In addition, we show that the direction of a word vector varies smoothly with word frequency and the level of co-occurrence noise. When noise is added to the co-occurrence distribu- tion of a word, the corresponding vector smoothly interpolates between the original word vector and a small vector perpendicular to it that represents a word with pure noise in its co-occurrence distribution. Surprisingly, the special point in feature space, obtained by interspersing a pseudoword uniformly at random throughout the corpus with a frequency sufï¬ ciently large to sample all contexts, is non-zero. This paper is structured as follows. Section 2 draws connections to related work, while Section 3 describes the corpus and the CBOW model used in our experiments. Section 4 describes a controlled experiment for varying word frequency while holding the co-occurrence distribution ï¬ | 1510.02675#4 | 1510.02675#6 | 1510.02675 | [
"1510.02675"
] |
1510.02675#6 | Controlled Experiments for Word Embeddings | xed. Section 5, in a complementary fashion, describes a controlled experiment for varying the level of noise in the co- occurrence distribution of a word while holding the word frequency ï¬ xed. The ï¬ nal section, Section 6, considers further questions and possible future directions. # 2 Related work Our experimental ï¬ nding that word vector length decreases with co-occurrence noise is related to earlier work by Vecchi, Baroni, and Zamparelli [11], where a relation between vector length and the â semantic devianceâ of an adjective-noun composite was studied empirically. In that paper, which is also based on word co-occurrence statistics, the authors study adjective-noun composites. They built a vocabulary from the 8k most frequent nouns and 4k most frequent adjectives in a large general language corpus and added 22k adjective-noun composites. For each item in the vocabulary, they recorded the co-occurrences with the top 10k most frequent content words (nouns, adjectives or verbs), and constructed word embed- dings via singular value decomposition of the co-occurrence matrix [5]. The authors considered several models for constructing vectors of unattested adjective-noun composites, the two simplest being adding and component-wise multiplying the adjective and noun vectors. They hypothesized that the length of the vectors thus constructed can be used to distinguish acceptable and semantically deviant adjective- noun composites. Using a few hundred adjective-noun composites selected by humans for evaluation, they found that deviant composites have a shorter vector than acceptable ones, in accordance with their expectation. In contrast to their work, our approach does not require human annotation. 1We refer to these tokens as pseudowords, since their properties are modeled upon words in the lexicon and because our corpus modiï¬ cation approach is reminiscent of the pseudoword approach for generating labeled data for word sense disambiguation tasks in [4]. | 1510.02675#5 | 1510.02675#7 | 1510.02675 | [
"1510.02675"
] |
1510.02675#7 | Controlled Experiments for Word Embeddings | 2 Recent theoretical work [1] has approached the problem of explaining the so-called â compositionalityâ property exhibited by some word embeddings. In that work, unnormalised vectors are used in their model of the word relation task. It is hoped that experimental approaches such as those described here might enable theoretical investigations to describe the role of the word vector length in the word relation tasks. # 3 Corpus and model Our training data is built from the Wikipedia data dump from October 2013. To remove the bulk of robot-generated pages from the training data, only pages with at least 20 monthly page views are retained.2 Stubs and disambiguation pages are also removed, leaving 463 thousand pages with a total of 482 million words. Punctuation marks and numbers were removed from the pages and all words were lower-cased. Word frequencies are summarised in Table 1. This base corpus is then modiï¬ ed as described in Sections 4 and 5. For recognisability, the pseudowords inserted into the corpus are upper-cased. # 3.1 Word2vec Word2vec, a feed-forward neural network with a single hidden layer, learns word vectors from word co-occurrences in an unsupervised manner. Word2vec comes in two versions. In the continuous bag- of-words (CBOW) model, the words appearing around a target word serve as input. That input is projected linearly onto the hidden layer and the network then attempts to predict the target word on output. Training is achieved through back-propagation. The word vectors are encoded in the weights of the ï¬ rst synaptic layer, â syn0â . The weights of the second synaptic layer (â syn1negâ , in the case of negative sampling) are typically discarded. In the other model, called skip-gram, target and context words swap places, so that the target word now serves as input, while the network attempts to predict the context words on output. For simplicity only the word2vec CBOW word embedding with a single set of hyperparameters is considered. | 1510.02675#6 | 1510.02675#8 | 1510.02675 | [
"1510.02675"
] |
1510.02675#8 | Controlled Experiments for Word Embeddings | Specifically, a CBOW model with a hidden layer of size 100 is trained using negative sampling with 5 negative samples, a window size of 10, a minimum frequency of 128, and 10 passes through the corpus. Sub-sampling was not used so that the influence of word frequency could be more clearly discerned. Similar experimental results were obtained using hierarchical softmax, but these are omitted for succinctness. The relatively high low-frequency cut-off is chosen to ensure that word vectors, in all but degenerate cases, receive a sufficient number of gradient updates to be meaningful. This frequency cut-off results in a vocabulary of 81117 words (only unigrams were considered). The most recent revision of word2vec was used.3 The source code for performing the experiments is made available on GitHub.4 | 1510.02675#7 | 1510.02675#9 | 1510.02675 | [
"1510.02675"
] |
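The experiments described here use the original C word2vec tool. Purely as an illustration of the hyperparameter settings just listed, roughly equivalent settings in gensim's reimplementation (assuming gensim >= 4.0; the parameter names below are gensim's, not the C tool's flags) would look like this:

```python
from gensim.models import Word2Vec

# corpus: an iterable of tokenized sentences, e.g. [["the", "domestic", "cat", ...], ...]
def train_cbow(corpus):
    return Word2Vec(
        sentences=corpus,
        sg=0,             # CBOW rather than skip-gram
        vector_size=100,  # hidden layer of size 100
        window=10,        # window size of 10
        negative=5,       # negative sampling with 5 negative samples
        hs=0,             # no hierarchical softmax
        min_count=128,    # low-frequency cut-off of 128
        sample=0,         # sub-sampling disabled
        epochs=10,        # 10 passes through the corpus
    )
```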
1510.02675#9 | Controlled Experiments for Word Embeddings | # 3.2 Replacement procedure In the experiments detailed below, we modify the corpus in a controlled manner by introducing pseudowords into the corpus via a replacement procedure. For the frequency experiment, the procedure is as follows. Consider a word, say cat. For each occurrence of this word, a sample i, 1 ≤ i ≤ n, is drawn from a truncated geometric distribution, and that occurrence of the word cat is replaced with the pseudoword CAT i. In this way, the word cat is replaced throughout the corpus by a family of pseudowords with varying frequencies but approximately the same co-occurrence distribution as cat. That is, all these pseudowords are used in roughly the same contexts as the original word. | 1510.02675#8 | 1510.02675#10 | 1510.02675 | [
"1510.02675"
] |
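A minimal sketch of this replacement procedure in Python. It is our own illustration, not the authors' released code: the exact pseudoword token format (here an underscore) is a choice, and it assumes a `sample_truncated_geometric` helper such as the one sketched after Eq. (1) below.

```python
def replace_with_pseudowords(tokens, target="cat", p=0.5, n=20, rng=None):
    """Replace every occurrence of `target` by a pseudoword TARGET_i, where i is
    drawn independently per occurrence from the truncated geometric distribution.

    The pseudowords inherit (approximately) the co-occurrence distribution of
    `target`, while their frequencies decay geometrically with i.
    """
    import random
    rng = rng or random
    out = []
    for tok in tokens:
        if tok == target:
            i = sample_truncated_geometric(p, n, rng)   # hypothetical sampler, see Eq. (1)
            out.append(f"{target.upper()}_{i}")
        else:
            out.append(tok)
    return out
```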
1510.02675#10 | Controlled Experiments for Word Embeddings | 2For further justification and to obtain the dataset, see https://blog.lateral.io/2015/06/the-unknown-perils-of-mining-wikipedia/ 3SVN revision 42, see http://word2vec.googlecode.com/svn/trunk/ 4https://github.com/benjaminwilson/word2vec-norm-experiments
frequency band   # words   example words
2^0 - 2^1        979187    isa220, zhangzhongzhu, yewell, gxgr
2^1 - 2^2        416549    wz132, prabhanjna, fesh, rudick
2^2 - 2^3        220573    gustafsdotter, summerfields, autodata, nagassarium
2^3 - 2^4        134870    futu, abertillery, shikaras, yuppy
2^4 - 2^5        90755     chuva, waffling, wws, andujar
2^5 - 2^6        62581     nagini, sultanah, charrette, wndy
2^6 - 2^7        41359     shew, dl, kidjo, strangeways
2^7 - 2^8        27480     smartly, sydow, beek, falsify
2^8 - 2^9        17817     legionaries, mbius, mannerism, cathars
2^9 - 2^10       12291     bedtime, disabling, jockeys, brougham
2^10 - 2^11      8215      frederic, monmouth, constituting, grabbing
2^11 - 2^12      5509      questionable, bosnian, pigment, coaster
2^12 - 2^13      3809      dismissal, torpedo, coordinates, stays
2^13 - 2^14      2474      liberty, hebrew, survival, muscles
2^14 - 2^15      1579      destruction, trophy, patrick, seats
2^15 - 2^16      943       draft, wood, ireland, reason
2^16 - 2^17      495       brought, move, sometimes, away
2^17 - 2^18      221       february, children, college, see
2^18 - 2^19      83        music, life, following, game
2^19 - 2^20      29        during, time, other, she
2^20 - 2^21      17        has, its, but, an
2^21 - 2^22      10        by, on, it, his
2^22 - 2^23      4         was, is, as, for
2^23 - 2^24      3         in, and, to
2^24 - 2^25      1         of
2^25 - 2^26      1         the
Table 1: | 1510.02675#9 | 1510.02675#11 | 1510.02675 | [
"1510.02675"
] |
1510.02675#11 | Controlled Experiments for Word Embeddings | Number of words, by frequency band, as observed in the unmodified corpus. The geometric distribution is truncated to limit the number of pseudowords inserted into the corpus. For any choice 0 < p < 1 and maximum value n > 0, the truncated geometric distribution is given by the probability density function $P_{p,n}(i) = \frac{p^{i-1}(1-p)}{1-p^n}, \quad 1 \le i \le n.$ (1) The factor in the denominator, which tends to unity in the limit n → ∞, assures proper normalisation. | 1510.02675#10 | 1510.02675#12 | 1510.02675 | [
"1510.02675"
] |
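As an illustration of Eq. (1), a small inverse-CDF sampler (the function name and the use of Python's `random` module are our own choices, not part of the paper or its code release):

```python
import random

def sample_truncated_geometric(p=0.5, n=20, rng=random):
    """Draw i in {1, ..., n} with probability proportional to p**(i-1), i.e. Eq. (1)."""
    u = rng.random()                        # uniform draw for inverse-CDF sampling
    norm = (1.0 - p ** n) / (1.0 - p)       # sum of p**(i-1) for i = 1..n
    cdf = 0.0
    for i in range(1, n + 1):
        cdf += p ** (i - 1) / norm
        if u <= cdf:
            return i
    return n                                # numerical safety net
```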
1510.02675#12 | Controlled Experiments for Word Embeddings | We have chosen this distribution because the probabilities decay exponentially base p as a function of i. Of course, other distributions might equally well have been chosen for the experiments. For the noise experiment, we take, instead of a geometric distribution, the distribution $P_n(i) = \frac{2(n-i)}{n(n-1)}, \quad 1 \le i \le n.$ (2) We have chosen this distribution for the noise experiment, because it leads to evenly spaced proportions of co-occurrence noise that cover the entire interval [0, 1]. # 4 Varying word frequency In this first experiment, we investigate the effect of word frequency on the word embedding. Using the replacement procedure, we introduce a small number of families of pseudowords into the corpus. The pseudowords in each family vary in frequency but, replacing a single word, all share a common co-occurrence distribution. This allows us to study the role of word frequency in isolation, everything else being kept equal. We consider two types of pseudowords. # 4.1 Pseudowords derived from existing words We choose uniformly at random a small number of words from the unmodified vocabulary for our experiment. In order that the inserted pseudowords do not have too low a frequency, only words which occur at least 10 thousand times are chosen. We also include the high-frequency stopword the for comparison. Table 2 lists the words chosen for this experiment along with their frequencies. The replacement procedure of Section 3.2 is then performed for each of these words, using a geometric decay rate of p = 1/2, and maximum value n = 20, so that the 1st pseudoword is inserted with a probability of about 0.5, the 2nd with a probability of about 0.25, and so on. This value of p is one of a range of values that ensure that, for each word, multiple pseudowords will be inserted with a frequency sufficient to survive the low-frequency cut-off of 128. A maximum value n = 20 suffices for this choice of p, since 2^{20 + log2 128} exceeds the maximum frequency of any word in the corpus. Figure 1 illustrates the effect of these modifications on a sample text, with a family of pseudowords CAT i, derived from the word cat. | 1510.02675#11 | 1510.02675#13 | 1510.02675 | [
"1510.02675"
] |
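The replacement step itself can be sketched as below; this is only an illustration of the procedure described above (decay rate p = 1/2, maximum n = 20), not the authors' implementation, and the token-level representation of the corpus is an assumption.

```python
import numpy as np

def replace_with_pseudowords(tokens, target, p=0.5, n=20, seed=0):
    """Replace every occurrence of `target` by TARGET_i, with i drawn from
    the truncated geometric distribution (1) (decay rate p, maximum n)."""
    rng = np.random.default_rng(seed)
    indices = np.arange(1, n + 1)
    pmf = p ** (indices - 1) * (1 - p) / (1 - p ** n)
    return [f"{target.upper()}_{rng.choice(indices, p=pmf)}" if tok == target else tok
            for tok in tokens]

print(replace_with_pseudowords("the cat sat on the cat".split(), "cat"))
```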
1510.02675#13 | Controlled Experiments for Word Embeddings | Notice that all occurrences of the word cat have been replaced with the pseudowords CAT i. # 4.2 Pseudowords derived from an artificial, meaningless word Whereas the pseudowords introduced above all replace an existing word that carries a meaning, we now include for comparison a high-frequency, meaningless word. We choose to introduce an artificial, entirely meaningless word VOID into the corpus, rather than choose an existing (stop)word whose meaninglessness is only supposed. To achieve this, we intersperse the word uniformly at random throughout the corpus so that its relative frequency is 0.005. The co-occurrence distribution of VOID thus coincides with the unconditional word distribution. The replacement procedure is then performed for this word, using the same values for p and n as above. Figure 2 shows the effect of these modifications on a sample text, where a higher relative frequency of 0.05 is used instead for illustrative purposes. | 1510.02675#12 | 1510.02675#14 | 1510.02675 | [
"1510.02675"
] |
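A possible way to intersperse the meaningless word VOID at a target relative frequency is sketched below; again this is an illustrative sketch under the assumption of a tokenized corpus, not the authors' code.

```python
import numpy as np

def intersperse_void(tokens, rel_freq=0.005, seed=0):
    """Insert the token VOID uniformly at random so that its relative
    frequency in the modified corpus is approximately rel_freq."""
    rng = np.random.default_rng(seed)
    # k insertions into N tokens gives relative frequency k / (N + k)
    n_insert = int(round(rel_freq * len(tokens) / (1.0 - rel_freq)))
    positions = sorted(rng.integers(0, len(tokens) + 1, size=n_insert), reverse=True)
    out = list(tokens)
    for pos in positions:
        out.insert(pos, "VOID")
    return out
```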
1510.02675#14 | Controlled Experiments for Word Embeddings | word        frequency
lawsuit         11565
mercury         13059
protestant      13404
hidden          15736
squad           24872
kong            32674
awarded         55528
response        69511
the          38012326
Table 2: Words chosen for the word frequency experiment, along with their frequency in the unmodified corpus.
the domestic CAT 2 was first classified as felis catus the semiferal CAT 1 a mostly outdoor CAT 1 is not owned by any one individual a pedigreed CAT 1 is one whose ancestry is recorded by a CAT 2 fancier organization a purebred CAT 2 is one whose ancestry contains only individuals of the same breed the CAT 4 skull is unusual among mammals in having very large eye sockets another unusual feature is that the CAT 1 cannot produce taurine within groups one CAT 1 is usually dominant over the others
Figure 1: Example sentences modified in the word frequency experiment as per Section 4.1, where the word cat is replaced with pseudowords CAT i using the truncated geometric distribution (1) with p = 1/2.
VOID 1 the domestic cat was first classified as felis catus the semiferal cat VOID 3 a mostly outdoor cat is not VOID 2 owned by VOID 1 any one individual a pedigreed cat is one whose ancestry is recorded by a cat fancier organization a purebred cat is one whose ancestry contains only individuals of the same breed the cat skull is unusual among VOID 1 mammals in having very large eye sockets another unusual feature is that the cat cannot produce taurine within groups one cat is usually dominant over the others
Figure 2: The same example sentences as in Figure 1 where instead of the word cat now the meaningless word VOID is replaced with pseudowords VOID i. For illustrative purposes, the meaningless word VOID was here interspersed with a relative frequency of 0.05. | 1510.02675#13 | 1510.02675#15 | 1510.02675 | [
"1510.02675"
] |
1510.02675#15 | Controlled Experiments for Word Embeddings | # 4.3 Experimental results We next present the results of the word frequency experiment. We consider the effect of word frequency on the direction and on the length of word vectors separately. # 4.3.1 Word frequency and vector direction Figure 3 shows the cosine similarity of pairs of vectors representing some of the pseudowords used in this experiment. Recall that the cosine similarity measures the extent to which two vectors have the same direction, taking a maximum value of 1 and a minimum value of − | 1510.02675#14 | 1510.02675#16 | 1510.02675 | [
"1510.02675"
] |
1510.02675#16 | Controlled Experiments for Word Embeddings | 1. The number of different pseudowords associated with an experiment word is the number of times that its frequency can be halved and remain above the low-frequency cut-off of 128. Consider first the vectors for the pseudowords associated to the word the. Notice that the cosine similarity of the vectors for THE 1 and THE i decreases monotonically with i, while the cosine similarity of the vectors for THE i and THE 18 increases monotonically with i. Indeed the direction of the vector THE i changes systematically, interpolating between the directions of the vectors of the highest-frequency pseudoword THE 1 and the lowest-frequency pseudoword THE 18. The same trend is apparent (though over shorter frequency ranges) for all the families of pseudowords other than that for VOID. Consider now the vectors for pseudowords derived from the meaningless word VOID. The vectors for VOID 7, . . . , VOID 13 are approximately orthogonal to one another, just as would be expected from randomly drawn vectors in a high dimensional space. As the pseudoword VOID occurs by construction in every context, a much higher number of samples is required to capture its co-occurrence distribution, and thereby to learn its vector (the same is true, but to a lesser extent, for the stopword the). We conclude that the vectors corresponding to the lower frequency pseudowords VOID 7, . . . , VOID 13 have not been trained on a sufficient number of samples to establish their proper direction. These vectors are excluded from further analysis. The vectors for VOID 1, . . . , VOID 6, on the other hand, exhibit the smooth change in vector direction with word frequency described in the previous paragraph. In recent work on the evaluation of word embeddings, Schnabel et al. [10] trained logistic regression models to predict whether a word was rare or frequent given only the direction of its word vector. For various word embedding methods, the prediction accuracy was measured as a function of the threshold for word rarity. It was found in the case of word2vec CBOW that word vector direction could be used to distinguish very rare words from all other words. | 1510.02675#15 | 1510.02675#17 | 1510.02675 | [
"1510.02675"
] |
1510.02675#17 | Controlled Experiments for Word Embeddings | Figure 3 is consistent with this finding, as it is apparent that word vector direction does change gradually with frequency. Schnabel et al. claim further that word vector direction must encode word frequency directly, and not indirectly via semantic information. Figure 3, considered for any particular experiment word in isolation (e.g. SQUAD), demonstrates that the variance of word vector direction with word frequency is indeed independent of co-occurrence (semantic) information, and thereby provides further evidence for this claim. # 4.3.2 Word frequency and vector length We next consider the effect of frequency on word vector length. Throughout, we measure vector length using the Euclidean norm. Figure 4 shows this relation for individual words, both for the word vectors, represented by the weights of the first synaptic layer, syn0, in the word2vec neural network, and for the vectors represented by the weights of the second synaptic layer, syn1neg. We include the latter, which are typically ignored, for completeness. Each line corresponds to a single word, and the points on each line indicate the frequency and vector length of the pseudowords derived from that word. For example, the six points on the line corresponding to the word protestant are labeled, from right to left, by the pseudowords PROTESTANT 1, PROTESTANT 2, . . . , PROTESTANT 6. Again, the number of points on the line is determined by the frequency of the original word. For example, the frequency of the word protestant can be halved at most 6 times so that the frequency of the last pseudoword is still above the low-frequency cut-off. Because all the points on a line share the same co-occurrence distribution, the left panel in Figure 4 demonstrates conclusively that length does indeed depend on frequency directly. | 1510.02675#16 | 1510.02675#18 | 1510.02675 | [
"1510.02675"
] |
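The two quantities analysed in this section, cosine similarity of directions and Euclidean vector length, can be computed as in the sketch below. The random matrix merely stands in for rows of syn0 looked up for a pseudoword family; the names and shapes are assumptions, not part of the paper.

```python
import numpy as np

def cosine_similarity_matrix(vectors):
    """Pairwise cosine similarities; rows are word vectors."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return unit @ unit.T

def vector_lengths(vectors):
    """Euclidean norm of each word vector, as plotted against frequency."""
    return np.linalg.norm(vectors, axis=1)

# Stand-in for syn0 rows of a pseudoword family, e.g. PROTESTANT_1..PROTESTANT_6.
V = np.random.default_rng(0).normal(size=(6, 100))
print(cosine_similarity_matrix(V).round(2))
print(vector_lengths(V).round(2))
```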
1510.02675#18 | Controlled Experiments for Word Embeddings | [Figure 3 heatmap, "Cosine similarity of word vectors": both axes list the pseudowords SQUAD_1-SQUAD_7, HIDDEN_1-HIDDEN_6, PROTESTANT_1-PROTESTANT_6, KONG_1-KONG_7, THE_1-THE_18 and VOID_1-VOID_13, with a colour scale from 1.0 to -1.0.]
Figure 3: Heatmap of the cosine similarity of the vectors representing some of the pseudowords used in the word frequency experiment. The words other than the and VOID were chosen randomly.
Moreover, this relation is seen to be approximately linear for each word considered. | 1510.02675#17 | 1510.02675#19 | 1510.02675 | [
"1510.02675"
] |
1510.02675#19 | Controlled Experiments for Word Embeddings | Notice also that the relative positions of the lengths of the word vectors associated with the experiment words are roughly independent of the frequency band, i.e., the plotted lines rarely cross. Observe that the lengths of the vectors representing the meaningless pseudowords VOID i are approximately constant (about 2.5). Since we already found the direction to be also constant, it is sensible to speak of the word vector of VOID irrespective of its frequency. In particular, the vector of the pseudoword VOID 1 may be taken as an approximation. # 5 Varying co-occurrence noise This second experiment is complementary to the first. Whereas in the first experiment we studied the effect of word frequency on word vectors for fixed co-occurrence, we here study the effect of co-occurrence noise when the frequency is fixed. As before, we do so in a controlled manner. # 5.1 Generating noise We take the noise distribution to be the (observed) unconditional word distribution. Noise can then be added to the co-occurrence distribution of a word by simply interspersing occurrences of that word | 1510.02675#18 | 1510.02675#20 | 1510.02675 | [
"1510.02675"
] |
1510.02675#20 | Controlled Experiments for Word Embeddings | [Figure 4 plot: two panels (syn0 and syn1neg) showing vector length against frequency on a logarithmic frequency axis, with one line per experiment word (kong, awarded, lawsuit, protestant, squad, mercury, response, hidden, the, VOID).]
Figure 4: Vector length vs. frequency for pseudowords derived from a few words chosen at random. For each word, pseudowords of varying frequency but with the co-occurrence distribution of that word were inserted into the corpus, as described in Section 4. The vectors are obtained from the first synaptic layer, syn0, of the word2vec neural network. The vectors obtained from the second layer, syn1neg, are included for completeness. Legend entries are ordered by vector length of the left-most data point in the syn0 plot, descending. | 1510.02675#19 | 1510.02675#21 | 1510.02675 | [
"1510.02675"
] |
1510.02675#21 | Controlled Experiments for Word Embeddings | word         frequency
dying            10693
bridges          12193
appointment      12546
aids             13487
boss             14105
removal          15505
jobs             21065
community       115802
Table 3: Words chosen for the co-occurrence noise experiment, along with the word frequencies in the unmodified corpus.
uniformly at random throughout the corpus. A word that is consistently used in a distinctive context in the unmodified corpus thus appears in the modified corpus also in completely unrelated contexts. As in Section 4, we choose a small number of words from the unmodified corpus for this experiment. Table 3 lists the words chosen, along with their frequencies in the corpus. For each of these words, the replacement procedure of Section 3.2 is performed using the distribution (2) with n = 7. For every replacement pseudoword (e.g. CAT i), additional occurrences of this pseudoword are interspersed uniformly at random throughout the corpus, such that the final frequency of the replacement pseudoword is 2/n times that of the original word cat. For example, if the original word cat occurred 1000 times, then after the replacement procedure, CAT 2 occurs approximately 238 times, so a further (approximately) 2/7 × 1000 − 238 ≈ 48 random occurrences of CAT 2 are interspersed throughout the corpus. In this way, the word cat is removed from the corpus and replaced with a family of pseudowords CAT i, 1 ≤ i ≤ 7. These pseudowords all have the same frequency, but their co-occurrence distributions, while based on that of cat, have an increasing amount of noise. | 1510.02675#20 | 1510.02675#22 | 1510.02675 | [
"1510.02675"
] |
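A hedged sketch of the noise procedure just described: replacement indices are drawn from the distribution (2), and extra random occurrences then top each pseudoword up to a frequency of 2/n times that of the original word. The tokenized-corpus representation and helper name are assumptions for illustration only.

```python
import numpy as np

def add_cooccurrence_noise(tokens, target, n=7, seed=0):
    """Replace `target` by TARGET_1..TARGET_n with probabilities
    P_n(i) = 2(n - i) / (n(n - 1)), Eq. (2); note P_n(n) = 0, so TARGET_n
    receives only interspersed (pure-noise) occurrences afterwards."""
    rng = np.random.default_rng(seed)
    indices = np.arange(1, n + 1)
    pmf = 2.0 * (n - indices) / (n * (n - 1))
    orig_count = tokens.count(target)
    out = [f"{target.upper()}_{rng.choice(indices, p=pmf)}" if t == target else t
           for t in tokens]
    # intersperse extra occurrences until each pseudoword has ~2/n of the original frequency
    wanted = int(round(2.0 / n * orig_count))
    for k in indices:
        name = f"{target.upper()}_{k}"
        for _ in range(max(wanted - out.count(name), 0)):
            out.insert(rng.integers(0, len(out) + 1), name)
    return out
```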
1510.02675#22 | Controlled Experiments for Word Embeddings | Specifically, the proportion of noise for the ith pseudoword is 1 − (n/2) P_n(i) = (i − 1)/(n − 1), or 0, 1/(n − 1), 2/(n − 1), . . . , 1 for i = 1, 2, . . . , n, which is evenly distributed. The first pseudoword contains no noise at all, while the last pseudoword stands for pure noise. The particular choice of n assures a reasonable coverage of the interval [0, 1]. Other parameter values (or indeed other distributions) could, of course, have been used equally well. Figure 5 illustrates the effect of this modification in the case where the only word chosen is cat. The original text in this case concerned both cats and dogs. Notice that the word cat has been replaced entirely in the cats section by CAT i and, moreover, that these same pseudowords appear also in the dogs section. These occurrences (and additionally, with some probability, some occurrences from the cats section) constitute noise. # 5.2 Experimental results Figure 6 shows the cosine similarity of pairs of vectors representing some of the pseudowords used in this experiment. Remember that the first pseudoword (i = 1) in a family is without noise in its co-occurrence distribution, while the last one (i = n, with n = 7) stands for pure noise and has therefore no relation anymore with the word it derives from. The figure demonstrates that the vectors within a family only moderately deviate from the original direction defined by the first pseudoword (i = 1) when noise is added to the co-occurrence distribution. For 1 < i < 7, the deviation typically increases with the proportion of noise. The vector of the last pseudoword (i = n), associated with pure noise, is seen within each of the families to point in a completely different direction, more or less perpendicular to the original one. To understand this interpolating behavior, recall from Section 4.3 that the vector for the entirely meaningless word VOID is small but non-zero. Since the noise distribution coincides with the co-occurrence distribution of VOID, the vectors for the experiment words must tend to the word vector for VOID as the proportion of noise in their co-occurrence distributions approaches | 1510.02675#21 | 1510.02675#23 | 1510.02675 | [
"1510.02675"
] |
1510.02675#23 | Controlled Experiments for Word Embeddings | the domestic CAT 2 was first classified as felis catus the semiferal CAT 3 a mostly outdoor CAT 4 is not CAT 2 owned by any one individual a pedigreed CAT 4 is one whose ancestry is recorded by a CAT 1 fancier organization CAT 6 a purebred CAT 3 is one whose ancestry contains only individuals of the same breed the CAT 1 skull is unusual among mammals in having very CAT 4 large eye sockets another unusual feature is that the CAT 4 cannot produce taurine within groups one CAT 2 is usually dominant over the others ... the domestic dog canis lupus familiaris is a domesticated canid which has been selectively CAT 5 bred dogs perform many roles for people such as hunting herding and pulling loads CAT 7 in domestic dogs sexual maturity begins to happen around age six to twelve months this is CAT 6 the time at CAT 3 which female dogs will have their first estrous cycle some dog breeds have acquired traits through selective breeding that interfere with reproduction Figure 5: Example sentences modified for the co-occurrence noise experiment, where the word cat was chosen for replacement. The pseudowords were generated using the distribution (2) with n = 7. | 1510.02675#22 | 1510.02675#24 | 1510.02675 | [
"1510.02675"
] |
1510.02675#24 | Controlled Experiments for Word Embeddings | 1. This convergence to a common point is only indistinctly apparent in Figure 6, as the frequency of the experiment pseudowords is insufficient to sample the full variety of the contexts of VOID, i.e., all contexts (see Section 4.3.1). The left panel in Figure 7 reveals that vector length varies more or less linearly with the proportion of noise in the co-occurrence distribution of the word. This figure motivates an interpretation of vector length, within a sufficiently narrow frequency band, as a measure of the absence of co-occurrence noise, or put differently, of the extent to which a word carries the meaning of a distinctive context. | 1510.02675#23 | 1510.02675#25 | 1510.02675 | [
"1510.02675"
] |
1510.02675#25 | Controlled Experiments for Word Embeddings | # 6 Discussion Our principal contribution has been to demonstrate that controlled experiments can be used to gain insight into a word embedding. These experiments can be carried out for any word embedding (or indeed language model), for they are achieved via modification of the training corpus only. They do not require knowledge of the model implementation. It would naturally be of interest to perform these experiments for word embeddings other than word2vec CBOW, such as skipgrams and GloVe, as well as for different hyperparameter settings. More elaborate experiments could be carried out. For instance, by introducing pseudowords into the corpus that mix, with varying proportions, the co-occurrence distributions of two words, the path between the word vectors in the feature space could be studied. The co-occurrence noise experiment described here would be a special case of such an experiment where one of the two words was VOID. Questions pertaining to word2vec in particular arise naturally from the results of the experiments. Figures 4 and 7, for example, demonstrate that the word vectors obtained from the first synaptic layer, syn0, have very different properties from those that could be obtained from the second layer, syn1neg. These differences warrant further investigation. | 1510.02675#24 | 1510.02675#26 | 1510.02675 | [
"1510.02675"
] |
1510.02675#26 | Controlled Experiments for Word Embeddings | [Figure 6 heatmap, "Cosine similarity of word vectors": both axes list the pseudowords DYING_1-DYING_7, BRIDGES_1-BRIDGES_7, BOSS_1-BOSS_7 and JOBS_1-JOBS_7.]
Figure 6: Heatmap of the cosine similarity of the vectors representing some of the pseudowords used in the co-occurrence noise experiment (the words were chosen at random). The largely red blocks demonstrate that for i < 7 the direction of the vectors only moderately changes when noise is added to the co-occurrence distribution. The vector of the pseudowords associated with pure noise (i = 7) is seen to be almost perpendicular to the word vectors they derive from. | 1510.02675#25 | 1510.02675#27 | 1510.02675 | [
"1510.02675"
] |
1510.02675#27 | Controlled Experiments for Word Embeddings | [Colour scale for Figure 6 running from 1.0 to -1.0. Figure 7 plot: two panels (syn0 and syn1neg) showing vector length against the proportion of occurrences from the noise distribution, which ranges from 0.0 to 1.0.] | 1510.02675#26 | 1510.02675#28 | 1510.02675 | [
"1510.02675"
] |
1510.02675#28 | Controlled Experiments for Word Embeddings | [Figure 7 legend: appointment, jobs, community, removal, bridges, aids, boss, dying.]
Figure 7: Vector length vs. proportion of occurrences from the noise distribution for words chosen for this experiment. For each word, pseudowords of equal frequency but with increasing proportion of co-occurrence noise were inserted into the corpus, as described in Section 5. The word vectors are obtained from the first synaptic layer, syn0. The second layer, syn1neg, is included for completeness. Legend entries are ordered by vector length of the left-most data point in the syn0 plot, descending.
The co-occurrence distribution of VOID is the unconditional frequency distribution, and in this sense pure background noise. Thus the word vector of VOID is a special point in the feature space. Figure 4 shows that this point is not at the origin of the feature space, i.e., is not the zero vector. The origin, however, is implicitly the point of reference in word2vec word similarity tasks. This raises the question of whether improved performance on similarity tasks could be achieved by transforming the feature space or modifying the model such that the representation of pure noise, i.e., the vector for VOID, is at the origin of the transformed feature space. | 1510.02675#27 | 1510.02675#29 | 1510.02675 | [
"1510.02675"
] |
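The closing suggestion, re-centring the feature space so that the vector for pure noise sits at the origin, could be prototyped as in the sketch below. This is only an illustration of the idea; the paper does not report such an experiment, and the function names are my own.

```python
import numpy as np

def recentre_on_void(vectors, void_vector):
    """Shift all word vectors so that the vector representing pure noise
    (the VOID vector) sits at the origin of the transformed space."""
    return vectors - void_vector

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Similarities would then be computed between re-centred vectors,
# e.g. cosine(recentred[i], recentred[j]) instead of cosine(vectors[i], vectors[j]).
```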
1510.02675#29 | Controlled Experiments for Word Embeddings | # 7 Acknowledgments The authors thank Tobias Schnabel for helpful discussions. # References [1] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. Random walks on context spaces: Towards an explanation of the mysteries of semantic word embeddings. CoRR, abs/1502.03520, 2015. [2] Marco Baroni, Georgiana Dinu, and Germán Kruszewski. Don' | 1510.02675#28 | 1510.02675#30 | 1510.02675 | [
"1510.02675"
] |
1510.02675#30 | Controlled Experiments for Word Embeddings | t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247, Baltimore, Maryland, June 2014. Association for Computational Linguistics. [3] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. | 1510.02675#29 | 1510.02675#31 | 1510.02675 | [
"1510.02675"
] |
1510.02675#31 | Controlled Experiments for Word Embeddings | J. Mach. Learn. Res., 12:2493-2537, November 2011. [4] William A Gale, Kenneth W Church, and David Yarowsky. Work on statistical methods for word sense disambiguation. In Working Notes of the AAAI Fall Symposium on Probabilistic Approaches to Natural Language, volume 54, page 60, 1992. [5] Thomas K Landauer and Susan T. Dumais. | 1510.02675#30 | 1510.02675#32 | 1510.02675 | [
"1510.02675"
] |
1510.02675#32 | Controlled Experiments for Word Embeddings | A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240, 1997. [6] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013. [7] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546, 2013. [8] Jeffrey Pennington, Richard Socher, and Christopher D Manning. | 1510.02675#31 | 1510.02675#33 | 1510.02675 | [
"1510.02675"
] |
1510.02675#33 | Controlled Experiments for Word Embeddings | Glove: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12:1532-1543, 2014. [9] Adriaan M. J. Schakel and Benjamin J. Wilson. Measuring word significance using distributed representations of words, 2015. [10] Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lisbon, Portugal, September 2015. Association for Computational Linguistics. | 1510.02675#32 | 1510.02675#34 | 1510.02675 | [
"1510.02675"
] |
1510.02675#34 | Controlled Experiments for Word Embeddings | (Linear) Maps of the Impossible: Capturing Semantic Anomalies in Distributional Space. In Proceedings of the Workshop on Distributional Semantics and Compositionality, pages 1-9, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. | 1510.02675#33 | 1510.02675 | [
"1510.02675"
] |
|
1510.01378#0 | Batch Normalized Recurrent Neural Networks | arXiv:1510.01378v1 [stat.ML] 5 Oct 2015 # Batch Normalized Recurrent Neural Networks # César Laurent * Université de Montréal # Gabriel Pereyra * University of Southern California Philémon Brakel Université de Montréal Ying Zhang Université de Montréal Yoshua Bengio † Université de Montréal # Abstract | 1510.01378#1 | 1510.01378 | [
"1502.03167"
] |
|
1510.01378#1 | Batch Normalized Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feedforward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we show that applying batch normalization to the hidden-to-hidden transitions of our RNNs doesn't help the training procedure. We also show that when applied to the input-to-hidden transitions, batch normalization can lead to a faster convergence of the training criterion but doesn't seem to improve the generalization performance on both our language modelling and speech recognition tasks. All in all, applying batch normalization to RNNs turns out to be more challenging than applying it to feedforward networks, but certain variants of it can still be benefi | 1510.01378#0 | 1510.01378#2 | 1510.01378 | [
"1502.03167"
] |
1510.01378#2 | Batch Normalized Recurrent Neural Networks | cial. # 1 Introduction Recurrent Neural Networks (RNNs) have received renewed interest due to their recent success in various domains, including speech recognition [2], machine translation [3, 4] and language modelling [5]. The so-called Long Short-Term Memory (LSTM) [6] type RNN has been particularly successful. Often, it seems beneficial to train deep architectures in which multiple RNNs are stacked on top of each other [2]. Unfortunately, the training cost for large datasets and deep architectures of stacked RNNs can be prohibitively high, often times an order of magnitude greater than simpler models like n-grams [7]. Because of this, recent work has explored methods for parallelizing RNNs across multiple graphics cards (GPUs). In [3], an LSTM type RNN was distributed layer-wise across multiple GPUs and in [8] a bidirectional RNN was distributed across time. However, due to the sequential nature of RNNs, it is difficult to achieve linear speed ups relative to the number of GPUs. Another way to reduce training times is through a better conditioned optimization procedure. Standardizing or whitening of input data has long been known to improve the convergence of gradient-based optimization methods [9]. Extending this idea to multi-layered networks suggests that normalizing or whitening intermediate representations can similarly improve convergence. However, applying these transforms would be extremely costly. In [1], batch normalization was used to standardize intermediate representations by approximating the population statistics using sample-based approximations obtained from small subsets of the data, often called mini-batches, that are also used to obtain gradient approximations for stochastic gradient descent, the most commonly used optimization method for neural network training. It has also been shown that convergence can be improved even more by whitening intermediate representations instead of simply standardizing | 1510.01378#1 | 1510.01378#3 | 1510.01378 | [
"1502.03167"
] |
1510.01378#3 | Batch Normalized Recurrent Neural Networks | # * Equal contribution † CIFAR Senior Fellow them [10]. These methods reduced the training time of Convolutional Neural Networks (CNNs) by an order of magnitude and additionally provided a regularization effect, leading to state-of-the-art results in object recognition on the ImageNet dataset [11]. In this paper, we explore how to leverage normalization in RNNs and show that training time can be reduced. # 2 Batch Normalization In optimization, feature standardization or whitening is a common procedure that has been shown to improve convergence rates [9]. Extending the idea to deep neural networks, one can think of an arbitrary layer as receiving samples from a distribution that is shaped by the layer below. This distribution changes during the course of training, making any layer but the first responsible not only for learning a good representation but also for adapting to a changing input distribution. This distribution variation is termed Internal Covariate Shift, and reducing it is hypothesized to help the training procedure [1]. To reduce this internal covariate shift, we could whiten each layer of the network. However, this often turns out to be too computationally demanding. Batch normalization [1] approximates the whitening by standardizing the intermediate representations using the statistics of the current mini-batch. Given a mini-batch x, we can calculate the sample mean and sample variance of each feature k along the mini-batch axis | 1510.01378#2 | 1510.01378#4 | 1510.01378 | [
"1502.03167"
] |
1510.01378#4 | Batch Normalized Recurrent Neural Networks | x̄_k = (1/m) Σ_{i=1}^{m} x_{i,k}, (1)
σ²_k = (1/m) Σ_{i=1}^{m} (x_{i,k} - x̄_k)², (2)
where m is the size of the mini-batch. Using these statistics, we can standardize each feature as follows
x̂_k = (x_k - x̄_k) / √(σ²_k + ε), (3)
where ε is a small positive constant to improve numerical stability. However, standardizing the intermediate activations reduces the representational power of the layer. To account for this, batch normalization introduces additional learnable parameters γ and β, which respectively scale and shift the data, leading to a layer of the form
BN(x_k) = γ_k x̂_k + β_k. (4)
By setting γ_k to σ_k and β_k to x̄_k, the network can recover the original layer representation. So, for a standard feedforward layer in a neural network
y = φ(Wx + b), (5)
where W is the weights matrix, b is the bias vector, x is the input of the layer and φ is an arbitrary activation function, batch normalization is applied as follows
y = φ(BN(Wx)). (6)
Note that the bias vector has been removed, since its effect is cancelled by the standardization. Since the normalization is now part of the network, the back propagation procedure needs to be adapted to propagate gradients through the mean and variance computations as well. | 1510.01378#3 | 1510.01378#5 | 1510.01378 | [
"1502.03167"
] |
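A minimal NumPy sketch of the forward pass of Eqs. (1)-(6) follows; the epsilon value is a common default rather than one specified in the paper, and the layer helper is illustrative.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Mini-batch standardization of Eqs. (1)-(4); x has shape (m, features)."""
    mean = x.mean(axis=0)                       # Eq. (1)
    var = x.var(axis=0)                         # Eq. (2)
    x_hat = (x - mean) / np.sqrt(var + eps)     # Eq. (3)
    return gamma * x_hat + beta                 # Eq. (4)

def bn_dense_layer(x, W, gamma, beta, phi=np.tanh):
    """y = phi(BN(Wx)) as in Eq. (6); the bias term is dropped."""
    return phi(batch_norm_forward(x @ W.T, gamma, beta))
```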
1510.01378#5 | Batch Normalized Recurrent Neural Networks | At test time, we can't use the statistics of the mini-batch. Instead, we can estimate them by either forwarding several training mini-batches through the network and averaging their statistics, or by maintaining a running average calculated over each mini-batch seen during training. # 3 Recurrent Neural Networks Recurrent Neural Networks (RNNs) extend Neural Networks to sequential data. Given an input sequence of vectors (x_1, . . . , x_T), they produce a sequence of hidden states (h_1, . . . , h_T), which are computed at time step t as follows | 1510.01378#4 | 1510.01378#6 | 1510.01378 | [
"1502.03167"
] |
1510.01378#6 | Batch Normalized Recurrent Neural Networks | h_t = φ(W_h h_{t-1} + W_x x_t), (7)
where W_h is the recurrent weight matrix, W_x is the input-to-hidden weight matrix, and φ is an arbitrary activation function. If we have access to the whole input sequence, we can use information not only from the past time steps, but also from the future ones, allowing for bidirectional RNNs [12]
→h_t = φ(→W_h →h_{t-1} + →W_x x_t), (8)
←h_t = φ(←W_h ←h_{t+1} + ←W_x x_t), (9)
h_t = [→h_t : ←h_t], (10)
where [x : y] denotes the concatenation of x and y. Finally, we can stack RNNs by using h as the input to another RNN, creating deeper architectures [13] | 1510.01378#5 | 1510.01378#7 | 1510.01378 | [
"1502.03167"
] |
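For concreteness, a plain NumPy sketch of the recurrences (7)-(10) is given below; parameter shapes and the zero initial state are assumptions, and no training loop is implied.

```python
import numpy as np

def rnn_forward(x_seq, W_h, W_x, phi=np.tanh):
    """h_t = phi(W_h h_{t-1} + W_x x_t), Eq. (7); h_0 is the zero vector."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x_t in x_seq:
        h = phi(W_h @ h + W_x @ x_t)
        states.append(h)
    return states

def birnn_forward(x_seq, fwd_params, bwd_params):
    """Concatenate forward and backward states, Eqs. (8)-(10).
    fwd_params and bwd_params are (W_h, W_x) pairs."""
    h_fwd = rnn_forward(x_seq, *fwd_params)
    h_bwd = rnn_forward(x_seq[::-1], *bwd_params)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(h_fwd, h_bwd)]
```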
1510.01378#7 | Batch Normalized Recurrent Neural Networks | h^l_t = φ(W_h h^l_{t-1} + W_x h^{l-1}_t). (11)
In vanilla RNNs, the activation function φ is usually a sigmoid function, such as the hyperbolic tangent. Training such networks is known to be particularly difficult, because of vanishing and exploding gradients [14]. # 3.1 Long Short-Term Memory A commonly used recurrent structure is the Long Short-Term Memory (LSTM). It addresses the vanishing gradient problem commonly found in vanilla RNNs by incorporating gating functions into its state dynamics [6]. At each time step, an LSTM maintains a hidden vector h and a cell vector c responsible for controlling state updates and outputs. More concretely, we define the computation at time step t as follows [15]: | 1510.01378#6 | 1510.01378#8 | 1510.01378 | [
"1502.03167"
] |
1510.01378#8 | Batch Normalized Recurrent Neural Networks | i_t = sigmoid(W_hi h_{t-1} + W_xi x_t) (12)
f_t = sigmoid(W_hf h_{t-1} + W_xf x_t) (13)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_hc h_{t-1} + W_xc x_t) (14)
o_t = sigmoid(W_ho h_{t-1} + W_xo x_t + W_co c_t) (15)
h_t = o_t ⊙ tanh(c_t) (16)
where sigmoid(·) is the logistic sigmoid function, tanh is the hyperbolic tangent function, W_h· are the recurrent weight matrices and W_x· are the input-to-hidden weight matrices. i_t, f_t and o_t are respectively the input, forget and output gates, and c_t is the cell. # 4 Batch Normalization for RNNs From equation 6, an analogous way to apply batch normalization to an RNN would be as follows: | 1510.01378#7 | 1510.01378#9 | 1510.01378 | [
"1502.03167"
] |
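A sketch of one LSTM step following Eqs. (12)-(16) is shown below; the parameter-dictionary keys are illustrative, biases are omitted as in the equations above, and the peephole term W_co is treated as an elementwise vector.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, P):
    """One step of Eqs. (12)-(16). P maps names such as 'Whi' and 'Wxi'
    to weight matrices; 'Wco' is a peephole vector."""
    i = sigmoid(P["Whi"] @ h_prev + P["Wxi"] @ x_t)                   # Eq. (12)
    f = sigmoid(P["Whf"] @ h_prev + P["Wxf"] @ x_t)                   # Eq. (13)
    c = f * c_prev + i * np.tanh(P["Whc"] @ h_prev + P["Wxc"] @ x_t)  # Eq. (14)
    o = sigmoid(P["Who"] @ h_prev + P["Wxo"] @ x_t + P["Wco"] * c)    # Eq. (15)
    h = o * np.tanh(c)                                                # Eq. (16)
    return h, c
```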
1510.01378#9 | Batch Normalized Recurrent Neural Networks | h_t = φ(BN(W_h h_{t-1} + W_x x_t)). (17)
However, in our experiments, when batch normalization was applied in this fashion, it didn't help the training procedure (see appendix A for more details). Instead we propose to apply batch normalization only to the input-to-hidden transition (W_x x_t), i.e. as follows:
h_t = φ(W_h h_{t-1} + BN(W_x x_t)). (18)
This idea is similar to the way dropout [16] can be applied to RNNs [17]: batch normalization is applied only on the vertical connections (i.e. from one layer to another) and not on the horizontal connections (i.e. within the recurrent layer). We use the same principle for LSTMs: batch normalization is only applied after multiplication with the input-to-hidden weight matrices W_x·. | 1510.01378#8 | 1510.01378#10 | 1510.01378 | [
"1502.03167"
] |
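The proposed variant, Eq. (18), normalizes only the input-to-hidden term; a minimal sketch follows (per-mini-batch statistics, illustrative shapes, a common default epsilon).

```python
import numpy as np

def bn_rnn_step(x_t, h_prev, W_h, W_x, gamma, beta, phi=np.tanh, eps=1e-5):
    """Eq. (18): h_t = phi(W_h h_{t-1} + BN(W_x x_t)).
    x_t and h_prev have shape (batch, dim); only the input-to-hidden
    pre-activation is standardized over the mini-batch."""
    a = x_t @ W_x.T                                              # W_x x_t
    a_hat = (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)
    return phi(h_prev @ W_h.T + gamma * a_hat + beta)
```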
1510.01378#10 | Batch Normalized Recurrent Neural Networks | Model Train Dev BiRNN BiRNN (BN) FCE FER FCE FER 0.33 0.95 0.73 0.34 0.28 0.22 1.11 1.19 Table 1: Best framewise cross entropy (FCE) and frame error rate (FER) on the training and development sets for both networks. # 4.1 Frame-wise and Sequence-wise Normalization | 1510.01378#9 | 1510.01378#11 | 1510.01378 | [
"1502.03167"
] |
1510.01378#11 | Batch Normalized Recurrent Neural Networks | In experiments where we don't have access to the future frames, like in language modelling where the goal is to predict the next character, we are forced to compute the normalization at each time step
x̂_{k,t} = (x_{k,t} - x̄_{k,t}) / √(σ²_{k,t} + ε). (19)
We'll refer to this as frame-wise normalization. In applications like speech recognition, we usually have access to the entire sequences. However, those sequences may have variable length. Usually, when using mini-batches, the smaller sequences are padded with zeroes to match the size of the longest sequence of the mini-batch. | 1510.01378#10 | 1510.01378#12 | 1510.01378 | [
"1502.03167"
] |
1510.01378#12 | Batch Normalized Recurrent Neural Networks | In such setups we can't use frame-wise normalization, because the number of unpadded frames decreases along the time axis, leading to increasingly poorer statistics estimates. To solve this problem, we apply a sequence-wise normalization, where we compute the mean and variance of each feature along both the time and batch axis using
x̄_k = (1/n) Σ_{i=1}^{m} Σ_{t=1}^{T} x_{i,t,k}, (20)
σ²_k = (1/n) Σ_{i=1}^{m} Σ_{t=1}^{T} (x_{i,t,k} - x̄_k)², (21)
where T is the length of each sequence and n is the total number of unpadded frames in the mini-batch. | 1510.01378#11 | 1510.01378#13 | 1510.01378 | [
"1502.03167"
] |
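A sketch of the two normalization modes follows: sequence-wise statistics computed over both batch and time with a padding mask (Eqs. (20)-(21)), and frame-wise statistics per time step (Eq. (19)). The masking scheme is my assumption about how padded frames are excluded.

```python
import numpy as np

def sequence_wise_stats(x, mask):
    """Eqs. (20)-(21): x has shape (batch, time, features); mask is (batch, time)
    with 1 for real frames and 0 for padding, so n = mask.sum() unpadded frames."""
    n = mask.sum()
    mean = (x * mask[..., None]).sum(axis=(0, 1)) / n
    var = (((x - mean) ** 2) * mask[..., None]).sum(axis=(0, 1)) / n
    return mean, var

def frame_wise_stats(x_t):
    """Eq. (19): statistics over the batch at one time step; x_t is (batch, features)."""
    return x_t.mean(axis=0), x_t.var(axis=0)
```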
1510.01378#13 | Batch Normalized Recurrent Neural Networks | We'll refer to this type of normalization as sequence-wise normalization. # 5 Experiments We ran experiments on a speech recognition task and a language modelling task. The models were implemented using Theano [18] and Blocks [19]. # 5.1 Speech Alignment Prediction For the speech task, we used the Wall Street Journal (WSJ) [20] speech corpus. We used the si284 split as training set and evaluated our models on the dev93 development set. The raw audio was transformed into 40 dimensional log mel filter-banks (plus energy), with deltas and delta-deltas. As in [21], the forced alignments were generated from the Kaldi recipe tri4b, leading to 3546 clustered triphone states. Because of memory issues, we removed from the training set the sequences that were longer than 1300 frames (4.6% of the set), leading to a training set of 35746 sequences. The baseline model (BL) is a stack of 5 bidirectional LSTM layers with 250 hidden units each, followed by a size 3546 softmax output layer. All the weights were initialized using the Glorot [22] scheme and all the biases were set to zero. For the batch normalized model (BN) we applied sequence-wise normalization to each LSTM of the baseline model. Both networks were trained using standard SGD with momentum, with a fixed learning rate of 1e-4 and a fixed momentum factor of 0.9. The mini-batch size is 24. | 1510.01378#12 | 1510.01378#14 | 1510.01378 | [
"1502.03167"
] |
1510.01378#14 | Batch Normalized Recurrent Neural Networks | [Figure 1 plot: framewise cross entropy against number of updates (ticks every 250 batches), with curves for BL train, BL dev, BN train and BN dev.]
Figure 1: Frame-wise cross entropy on WSJ for the baseline (blue) and batch normalized (red) networks. The dotted lines are the training curves and the solid lines are the validation curves.
# 5.2 Language Modeling We used the Penn Treebank (PTB) [23] corpus for our language modeling experiments. We use the standard split (929k training words, 73k validation words, and 82k test words) and vocabulary of 10k words. We train a small, medium and large LSTM as described in [17]. All models consist of two stacked LSTM layers and are trained with stochastic gradient descent (SGD) with a learning rate of 1 and a mini-batch size of 32. The small LSTM has two layers of 200 memory cells, with parameters being initialized from a uniform distribution with range [-0.1, 0.1]. We back propagate across 20 time steps and the gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 10. | 1510.01378#13 | 1510.01378#15 | 1510.01378 | [
"1502.03167"
] |
1510.01378#15 | Batch Normalized Recurrent Neural Networks | We train for 15 epochs and halve the learning rate every epoch after the 6th. The medium LSTM has a hidden size of 650 for both layers, with parameters being initialized from a uniform distribution with range [-0.05, 0.05]. We apply dropout with probability of 50% between all layers. We back propagate across 35 time steps and gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 5. We train for 40 epochs and divide the learning rate by 1.2 every epoch after the 6th. The Large LSTM has two layers of 1500 memory cells, with parameters being initialized from a uniform distribution with range [-0.04, 0.04]. We apply dropout between all layers. We back propagate across 35 time steps and gradients are scaled according to the maximum norm of the gradients whenever the norm is greater than 5. We train for 55 epochs and divide the learning rate by 1.15 every epoch after the 15th. | 1510.01378#14 | 1510.01378#16 | 1510.01378 | [
"1502.03167"
] |
1510.01378#16 | Batch Normalized Recurrent Neural Networks | # 6 Results and Discussion Figure 1 shows the training and development framewise cross entropy curves for both networks of the speech experiments. As we can see, the batch normalized network trains faster (at some points about twice as fast as the baseline), but overfits more. The best results, reported in Table 1, are comparable to the ones obtained in [21]. Figure 2 shows the training and validation perplexity for the large LSTM network of the language experiment. We can also observe that the training is faster when we apply batch normalization to | 1510.01378#15 | 1510.01378#17 | 1510.01378 | [
"1502.03167"
] |
1510.01378#17 | Batch Normalized Recurrent Neural Networks | [Figure 2 plot: perplexity against epochs, with curves for Large BL train, Large BL valid, Large BN train and Large BN valid.]
Figure 2: Large LSTM on Penn Treebank for the baseline (blue) and the batch normalized (red) networks. The dotted lines are the training curves and the solid lines are the validation curves.
Model              Train   Valid
Small LSTM          78.5   119.2
Small LSTM (BN)     62.5   120.9
Medium LSTM         49.1    89.0
Medium LSTM (BN)    41.0    90.6
Large LSTM          49.3    81.8
Large LSTM (BN)     35.0    97.4
Table 2: Best perplexity on training and development sets for LSTMs on Penn Treebank. | 1510.01378#16 | 1510.01378#18 | 1510.01378 | [
"1502.03167"
] |