Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
latter would make the average lengths of the input and output sequences much longer, and therefore would require more computation.

# 4.2 Mixed Word/Character Model

A second approach we use is the mixed word/character model. As in a word model, we keep a fixed-size word vocabulary. However, unlike in a conventional word model where OOV words are collapsed into a single UNK symbol, we convert an OOV word into the sequence of its constituent characters. Special prefixes are prepended to the characters, to 1) show the location of the characters in a word, and 2) distinguish them from normal in-vocabulary characters. There are three prefixes: <B>, <M>, and <E>, indicating the beginning of the word, the middle of the word and the end of the word, respectively. For example, let's assume the word Miki is not in the vocabulary. It will be preprocessed into a sequence of special tokens: <B>M <M>i <M>k <E>i. This process is done on both the source and the target sentences. During decoding, the output may also contain sequences of special tokens.
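To make the preprocessing concrete, the sketch below (our illustration, not code from the paper) splits an OOV word into prefixed character tokens and merges such tokens back into a word in post-processing; single-character words are not handled, for brevity.

```python
def split_oov_word(word):
    """Convert an out-of-vocabulary word into prefixed character tokens."""
    tokens = []
    for i, ch in enumerate(word):
        if i == 0:
            tokens.append("<B>" + ch)        # beginning of the word
        elif i == len(word) - 1:
            tokens.append("<E>" + ch)        # end of the word
        else:
            tokens.append("<M>" + ch)        # middle of the word
    return tokens

def merge_special_tokens(tokens):
    """Reverse the tokenization to recover the original words."""
    words, current = [], []
    for tok in tokens:
        if tok.startswith("<B>"):
            current = [tok[3:]]
        elif tok.startswith("<M>"):
            current.append(tok[3:])
        elif tok.startswith("<E>"):
            current.append(tok[3:])
            words.append("".join(current))
            current = []
        else:
            words.append(tok)                # normal in-vocabulary token
    return words

print(split_oov_word("Miki"))                                         # ['<B>M', '<M>i', '<M>k', '<E>i']
print(merge_special_tokens(["the", "<B>M", "<M>i", "<M>k", "<E>i"]))  # ['the', 'Miki']
```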
With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step.

# 5 Training Criteria

Given a dataset of parallel text containing N input-output sequence pairs, denoted D = {(X(i), Y*(i))}, i = 1, ..., N, standard maximum-likelihood training aims at maximizing the sum of log probabilities of the ground-truth outputs given the corresponding inputs, i.e.

O_ML(θ) = Σ_{i=1}^{N} log P_θ(Y*(i) | X(i)).    (7)

The main problem with this objective is that it does not reflect
the task reward function as measured by the BLEU score in translation. Further, this objective does not explicitly encourage a ranking among incorrect output sequences (where outputs with higher BLEU scores should still obtain higher probabilities under the model), since incorrect outputs are never observed during training. In other words, using maximum-likelihood training only, the model will not learn to be robust to errors made during decoding since they are never observed, which is quite a mismatch between the training and testing procedure. Several recent papers [34, 39, 32] have considered different ways of incorporating the task reward into optimization of neural sequence-to-sequence models. In this work, we also attempt to refine a model pre-trained on the maximum likelihood objective to directly optimize for the task reward. We show that, even on large datasets, refinement of state-of-the-art maximum-likelihood models using task reward improves the results considerably. We consider model refinement using the expected reward objective (also used in [34]), which can be expressed as

O_RL(θ) = Σ_{i=1}^{N} Σ_{Y} P_θ(Y | X(i)) r(Y, Y*(i)).    (8)

Here, r(Y, Y*(i)) denotes the per-sentence score, and we are computing an expectation over all of the output sentences Y, up to a certain length. The BLEU score has some undesirable properties when used for single sentences, as it was designed to be a corpus measure.
We therefore use a slightly different score for our RL experiments which we call the "GLEU score". For the GLEU score, we record all sub-sequences of 1, 2, 3 or 4 tokens in the output and target sequence (n-grams). We then compute a recall, which is the ratio of the number of matching n-grams to the number of total n-grams in the target (ground truth) sequence, and a precision, which is the ratio of the number of matching n-grams to the number of total n-grams in the generated output sequence. The GLEU score is then simply the minimum of recall and precision. The GLEU score's range is always between 0 (no matches) and 1 (all match), and it is symmetrical when switching output and target. According to our experiments, GLEU correlates quite well with the BLEU metric at the corpus level but does not have its drawbacks for our per-sentence reward objective.
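For concreteness, here is a minimal sketch of a per-sentence GLEU computation along the lines described above (our illustration, not the production scoring code; it assumes clipped n-gram matches):

```python
from collections import Counter

def gleu(output_tokens, target_tokens, max_n=4):
    """GLEU sketch: min(n-gram precision, n-gram recall) over 1- to 4-grams."""
    def ngram_counts(tokens):
        counts = Counter()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
        return counts

    out_counts = ngram_counts(output_tokens)
    tgt_counts = ngram_counts(target_tokens)
    # Clipped matches: an n-gram counts at most as often as it appears in both.
    matches = sum((out_counts & tgt_counts).values())
    precision = matches / max(sum(out_counts.values()), 1)
    recall = matches / max(sum(tgt_counts.values()), 1)
    return min(precision, recall)

print(gleu("the cat sat".split(), "the cat sat down".split()))  # 0.6
```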
As is common practice in reinforcement learning, we subtract the mean reward from r(Y, Y*(i)) in equation 8. The mean is estimated as the sample mean of m sequences drawn independently from the distribution P_θ(Y | X(i)). In our implementation, m is set to 15. To further stabilize training, we optimize a linear combination of the ML (equation 7) and RL (equation 8) objectives as follows:

O_Mixed(θ) = α · O_ML(θ) + O_RL(θ)    (9)

α in our implementation is typically set to 0.017. In our setup, we first train a model using the maximum likelihood objective (equation 7) until convergence. We then refine this model using a mixed maximum likelihood and expected reward objective (equation 9), until the BLEU score on a development set is no longer improving. The second step is optional.
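The sketch below illustrates one way the mixed objective could be estimated on a single training example. It is our illustration, not the paper's training code: it assumes a standard score-function (REINFORCE-style) estimate of the expected-reward term, with the sample mean of the m rewards subtracted as a baseline, as described above.

```python
import numpy as np

def mixed_loss(ml_log_probs, sampled_log_probs, sampled_rewards, alpha=0.017):
    """Estimate of -O_Mixed(theta) (equation 9) for one example.

    ml_log_probs:      log-probabilities of the ground-truth tokens (equation 7 term)
    sampled_log_probs: log P(Y_k | X) for m sampled output sequences
    sampled_rewards:   per-sentence scores r(Y_k, Y*), e.g. GLEU, for the same samples
    """
    ml_term = float(np.sum(ml_log_probs))
    rewards = np.asarray(sampled_rewards, dtype=np.float64)
    baseline = rewards.mean()                              # sample mean over the m draws
    rl_term = float(np.sum(np.asarray(sampled_log_probs) * (rewards - baseline)))
    return -(alpha * ml_term + rl_term)                    # a loss to minimize
```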
# 6 Quantizable Model and Quantized Inference

One of the main challenges in deploying our Neural Machine Translation model to our interactive production translation service is that it is computationally intensive at inference, making low-latency translation difficult and high-volume deployment computationally expensive. Quantized inference using reduced-precision arithmetic is one technique that can significantly reduce the cost of inference for these models, often providing efficiency improvements on the same computational devices. For example, in [43], it is demonstrated that a convolutional neural network model can be sped up by a factor of 4-6 with minimal loss of classification accuracy on the ILSVRC-12 benchmark. In [27], it is demonstrated that neural network model weights can be quantized to only three states, -1, 0, and +1. However, many of those previous studies [19, 20, 43, 27] mostly focus on CNN models with relatively few layers. Deep LSTMs with long sequences pose a novel challenge in that quantization errors can be significantly amplified after many unrolled steps or after going through a deep LSTM stack. In this section, we present our approach to speed up inference with quantized arithmetic. Our solution is tailored towards the hardware options available at Google.

To reduce quantization errors, additional constraints are added to our model during training so that it is quantizable with minimal impact on the output of the model. That is, once a model is trained with these additional constraints, it can be subsequently quantized without loss of translation quality. Our experimental results suggest that those additional constraints hurt neither model convergence nor the quality of a model once it has converged. Recall from equation 6 that in an LSTM stack with residual connections there are two accumulators: c^i_t along the time axis and x^i_t along the depth axis. In theory, both accumulators are unbounded, but in practice we noticed that their values remain quite small. For quantized inference, we explicitly constrain the values of these accumulators to be within [-δ, δ] to guarantee a certain range that can be used for quantization later. The forward computation of an LSTM stack with residual connections is modified
to the following:

c'^i_t, m^i_t = LSTM_i(c^i_{t-1}, m^i_{t-1}, x^{i-1}_t; W^i)
c^i_t = max(-δ, min(δ, c'^i_t))
x'^i_t = m^i_t + x^{i-1}_t
x^i_t = max(-δ, min(δ, x'^i_t))
c'^{i+1}_t, m^{i+1}_t = LSTM_{i+1}(c^{i+1}_{t-1}, m^{i+1}_{t-1}, x^i_t; W^{i+1})
c^{i+1}_t = max(-δ, min(δ, c'^{i+1}_t))    (10)

Let us expand LSTM_i in equation 10 to include the internal gating logic. For brevity, we drop all the superscripts i.
W = [W_1, W_2, W_3, W_4, W_5, W_6, W_7, W_8]
i_t = sigmoid(W_1 x_t + W_2 m_{t-1})
i'_t = tanh(W_3 x_t + W_4 m_{t-1})
f_t = sigmoid(W_5 x_t + W_6 m_{t-1})
o_t = sigmoid(W_7 x_t + W_8 m_{t-1})
c_t = c_{t-1} ⊙ f_t + i'_t ⊙ i_t
m_t = c_t ⊙ o_t    (11)

When doing quantized inference, we replace all the floating point operations in equations 10 and 11 with fixed-point integer operations with either 8-bit or 16-bit resolution. The weight matrix W above is represented using an 8-bit integer matrix WQ and a float vector s, as shown below:

s_i = max(abs(W[i, :]))
WQ[i, j] = round(W[i, j] / s_i × 127.0)    (12)

All accumulator values (c^i_t and x^i_t) are represented using 16-bit integers representing the range [-δ, δ]. All matrix multiplications (e.g., W_1 x_t, W_2 m_{t-1}, etc.) in equation 11 are done using 8-bit integer multiplication accumulated into larger accumulators. All other operations, including all the activations (sigmoid, tanh) and elementwise operations (⊙, +), are done using 16-bit integer operations.
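As a concrete illustration of equation 12, the sketch below (ours, not the production kernels) quantizes each row of a weight matrix to 8-bit integers plus a per-row float scale, and dequantizes it to check the reconstruction error.

```python
import numpy as np

def quantize_rows(W):
    """Per-row 8-bit quantization as in equation 12:
    s_i = max(abs(W[i, :])), WQ[i, j] = round(W[i, j] / s_i * 127)."""
    s = np.max(np.abs(W), axis=1)
    s = np.where(s == 0.0, 1.0, s)                      # guard against all-zero rows
    W_q = np.round(W / s[:, None] * 127.0).astype(np.int8)
    return W_q, s

def dequantize_rows(W_q, s):
    """Approximate float reconstruction, useful for checking the error."""
    return W_q.astype(np.float32) * s[:, None] / 127.0

W = np.random.randn(4, 8).astype(np.float32)
W_q, s = quantize_rows(W)
print(np.max(np.abs(W - dequantize_rows(W_q, s))))      # small quantization error
```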
We now turn our attention to the log-linear softmax layer. During training, given the decoder RNN network output y_t, we compute the probability vector p_t over all candidate output symbols as follows:

v_t = W_s · y_t
v'_t = max(-γ, min(γ, v_t))
p_t = softmax(v'_t)    (13)

In equation 13, W_s is the weight matrix for the linear layer, which has the same number of rows as the number of symbols in the target vocabulary, with each row corresponding to one unique target symbol. v_t represents the raw logits, which are first clipped to be between -γ and γ and then normalized into a probability vector p_t. Input y_t is guaranteed to be between -δ and δ due to the quantization scheme we applied to the decoder RNN. The clipping range γ for the logits v_t is determined empirically, and in our case it is set to 25. In quantized inference, the weight matrix W_s is quantized into 8 bits as in equation 12, and the matrix multiplication is done using 8-bit arithmetic. The calculations within the softmax function and the attention model are not quantized during inference.

It is worth emphasizing that during training of the model we use full-precision floating point numbers. The only constraints we add to the model during training are the clipping of the RNN accumulator values into [-δ, δ] and of the softmax logits into [-γ, γ]. γ is fixed at 25.0, while the value of δ is gradually annealed from a generous bound of δ = 8.0 at the beginning of training to a rather stringent bound of δ = 1.0 towards the end of training. At inference time, δ is fixed at 1.0. Those additional constraints degrade neither model convergence nor the decoding quality of the model when it has converged. In Figure 4, we compare the loss vs. steps for an unconstrained model (the blue curve) and a constrained model (the red curve) on WMT'14 English-to-French.
We can see that the loss for the constrained model is slightly better, possibly due to the regularization role those constraints play.

Our solution strikes a good balance between efficiency and accuracy. Since the computationally expensive operations (the matrix multiplications) are done using 8-bit integer operations, our quantized inference is quite efficient. Also, since error-sensitive accumulator values are stored using 16-bit integers, our solution is very accurate and robust to quantization errors. In Table 1 we compare the inference speed and quality when decoding the WMT'14 English-to-French development set (a concatenation of the newstest2012 and newstest2013 test sets, for a total of 6003 sentences) on CPU, GPU and Google's Tensor Processing Unit (TPU) respectively.¹
Figure 4: Log perplexity vs. steps for normal (non-quantized) training and quantization-aware training on WMT'14 English to French during maximum likelihood training. Notice that the training losses are similar, with the quantization-aware loss being slightly better. Our conjecture for quantization-aware training being slightly better is that the clipping constraints act as additional regularization which improves the model quality.
The model used here for comparison is trained with quantization constraints on the ML objective only (i.e., without reinforcement learning based model refinement). When the model is decoded on CPU and GPU, it is not quantized and all operations are done using full-precision floats. When it is decoded on TPU, certain operations, such as the embedding lookup and the attention module, remain on the CPU, and all other quantized operations are off-loaded to the TPU. In all cases, decoding is done on a single machine with two Intel Haswell CPUs, which together provide 88 CPU cores (hyperthreads). The machine is equipped with an NVIDIA GPU (Tesla K80) for the experiment with GPU, or a single Google TPU for the experiment with TPU. Table 1 shows that decoding using reduced-precision arithmetic on the TPU suffers a very minimal loss of 0.0072 on log perplexity, and no loss on BLEU at all. This result matches previous work reporting that quantizing convolutional neural network models can retain most of the model quality. Table 1 also shows that decoding our model on CPU is actually 2.3 times faster than on GPU. Firstly, our dual-CPU host machine offers a theoretical peak FLOP performance which is more than two thirds that of the GPU. Secondly, the beam search algorithm forces the decoder to incur a non-trivial amount of data transfer between the host and the GPU at every decoding step. Hence, our current decoder implementation is not fully utilizing the computation capacities that a GPU can theoretically offer during inference.

¹ https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html
Finally, Table 1 shows that decoding on TPUs is 3.4 times faster than decoding on CPUs, demonstrating that quantized arithmetic is much faster on TPUs than on both CPUs and GPUs.

Table 1: Model inference on CPU, GPU and TPU. The model used here for comparison is trained with the ML objective only, with quantization constraints. Results are obtained by decoding the WMT En→Fr development set on CPU, GPU and TPU respectively.

| | BLEU | Log Perplexity | Decoding time (s) |
| --- | --- | --- | --- |
| CPU | 31.20 | 1.4553 | 1322 |
| GPU | 31.20 | 1.4553 | 3028 |
| TPU | 31.21 | 1.4626 | 384 |

Unless otherwise noted, we always train and evaluate quantized models in our experiments.
Because there is little difference from a quality perspective between a model decoded on CPUs and one decoded on TPUs, we use CPUs to decode for model evaluation during training and experimentation, and use TPUs to serve production traffic.

# 7 Decoder

We use beam search during decoding to find the sequence Y that maximizes a score function s(Y, X) given a trained model. We introduce two important refinements to the pure max-probability based beam search algorithm: a coverage penalty [42] and length normalization. With length normalization, we aim to account for the fact that we have to compare hypotheses of different lengths. Without some form of length normalization, regular beam search will favor shorter results over longer ones on average, since a negative log-probability is added at each step, yielding lower (more negative) scores for longer sentences.
We first tried to simply divide by the length to normalize. We then improved on that original heuristic by dividing by length^α, with 0 < α < 1, where α is optimized on a development set (α ∈ [0.6, 0.7] was usually found to be best). Eventually we designed the empirically better scoring function below, which also includes a coverage penalty to favor translations that fully cover the source sentence according to the attention module. More concretely, the scoring function s(Y, X) that we employ to rank candidate translations is defined as follows:

s(Y, X) = log(P(Y|X)) / lp(Y) + cp(X; Y)
lp(Y) = (5 + |Y|)^α / (5 + 1)^α
cp(X; Y) = β · Σ_{i=1}^{|X|} log(min(Σ_{j=1}^{|Y|} p_{i,j}, 1.0))    (14)

where p_{i,j} is the attention probability of the j-th target word y_j on the i-th source word x_i. By construction (equation 4), Σ_{i=0}^{|X|} p_{i,j} is equal to 1. Parameters α and β control the strength of the length normalization and the coverage penalty. When α = 0 and β = 0, our decoder falls back to pure beam search by probability.
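A minimal sketch of this scoring function (our illustration, not the production decoder) is shown below; `attention[i][j]` stands for the attention probability p_{i,j} of the j-th target word on the i-th source word.

```python
import math

def length_penalty(y_len, alpha):
    """lp(Y) from equation 14."""
    return ((5.0 + y_len) ** alpha) / ((5.0 + 1.0) ** alpha)

def coverage_penalty(attention, beta):
    """cp(X; Y) from equation 14; `attention` has one row per source word."""
    return beta * sum(math.log(min(sum(row), 1.0)) for row in attention)

def hypothesis_score(log_prob, y_len, attention, alpha=0.2, beta=0.2):
    """s(Y, X) from equation 14, used to rank candidate translations."""
    return log_prob / length_penalty(y_len, alpha) + coverage_penalty(attention, beta)
```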
During beam search, we typically keep 8-12 hypotheses, but we find that using fewer (4 or 2) has only slight negative effects on BLEU scores. Besides pruning the number of considered hypotheses, two other forms of pruning are used. Firstly, at each step, we only consider tokens that have local scores that are not more than beamsize below the best token for this step. Secondly, after a normalized best score has been found according to equation 14, we prune all hypotheses that are more than beamsize below the best normalized score so far. The latter type of pruning only applies to full hypotheses because it compares scores in the normalized space, which is only available when a hypothesis ends. This latter form of pruning also has the effect that very quickly no more hypotheses will be generated once a sufficiently good hypothesis has been found, so the search will end quickly. The pruning speeds up search by 30%-40% when run on CPUs, compared to not pruning (where we simply stop decoding after a predetermined maximum output length of twice the source length). Typically we use beamsize = 3.0, unless otherwise noted.

To improve throughput during decoding, we can put many sentences (typically up to 35) of similar length into a batch and decode all of those in parallel to make use of available hardware optimized for parallel computations. In this case the beam search only finishes if all hypotheses for all sentences in the batch are out of beam, which is slightly less efficient theoretically, but in practice is of negligible additional computational cost.

Table 2: WMT'14 En→Fr BLEU score with respect to different values of α and β.

| β \ α | 0.0 | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
| --- | --- | --- | --- | --- | --- | --- |
| 0.0 | 30.3 | 31.4 | 31.4 | 31.4 | 31.4 | 31.4 |
| 0.2 | 30.7 | 31.4 | 31.4 | 31.4 | 31.4 | 31.3 |
| 0.4 | 30.9 | 31.4 | 31.4 | 31.3 | 31.2 | 31.2 |
| 0.6 | 31.1 | 31.3 | 31.1 | 30.9 | 30.8 | 30.6 |
| 0.8 | 31.2 | 30.8 | 30.5 | 30.1 | 29.8 | 29.4 |
| 1.0 | 31.1 | 30.3 | 29.6 | 28.9 | 28.1 | 27.2 |
The model in this experiment was trained using ML without RL refinement.

A single WMT En→Fr model achieves a BLEU score of 30.3 on the development set when the beam search scoring function is purely based on the sequence probability (i.e., both α and β are 0). Slightly larger α and β values improve the BLEU score by up to +1.1 (α = 0.2, β = 0.2), with a wide range of α and β values giving results very close to the best BLEU scores. Table 2 shows the impact of α and β on the BLEU score when decoding the WMT'14 English-to-French development set. The model used here for experiments is trained using the ML objective only (without RL refinement). As can be seen from the results, having some length normalization and coverage penalty improves the BLEU score considerably (from 30.3 to 31.4).
We find that length normalization (α) and coverage penalty (β) are less effective for models with RL refinement. Table 3 summarizes our results. This is understandable, as during RL refinement, the models already learn to pay attention to the full source sentence so as not to under-translate or over-translate, which would result in a penalty on the BLEU (or GLEU) scores.

Table 3: WMT En→Fr BLEU score with respect to different values of α and β. The model used here is trained using ML, then refined with RL. Compared to the results in Table 2, the coverage penalty and length normalization appear to be less effective for models after RL-based model refinement. Results are obtained on the development set.

| β \ α | 0.0 | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
| --- | --- | --- | --- | --- | --- | --- |
| 0.0 | 0.320 | 0.322 | 0.322 | 0.322 | 0.322 | 0.322 |
| 0.2 | 0.321 | 0.322 | 0.322 | 0.322 | 0.322 | 0.321 |
| 0.4 | 0.322 | 0.322 | 0.322 | 0.321 | 0.321 | 0.321 |
| 0.6 | 0.322 | 0.322 | 0.321 | 0.321 | 0.321 | 0.320 |
| 0.8 | 0.322 | 0.321 | 0.321 | 0.319 | 0.316 | 0.313 |
| 1.0 | 0.322 | 0.321 | 0.316 | 0.309 | 0.302 | 0.295 |

We found that the optimal α and β vary slightly for different models. Based on tuning results using internal Google datasets, we use α = 0.2 and β = 0.2 in our experiments, unless noted otherwise.

# 8 Experiments and Results

In this section, we present our experimental results on two publicly available corpora used extensively as benchmarks for Neural Machine Translation systems:
WMT'14 English-to-French (WMT En→Fr) and English-to-German (WMT En→De). On these two datasets, we benchmark GNMT models with word-based, character-based, and wordpiece-based vocabularies. We also present the improved accuracy of our models after fine-tuning with RL and model ensembling. Our main objective with these datasets is to show the contributions of various components in our implementation, in particular the wordpiece model, RL model refinement, and model ensembling.

In addition to testing on publicly available corpora, we also test GNMT on Google's translation production corpora, which are two to three decimal orders of magnitude bigger than the WMT corpora for a given language pair. We compare the accuracy of our model against human accuracy and the best Phrase-Based Machine Translation (PBMT) production system for Google Translate.

In all experiments, our models consist of 8 encoder layers and 8 decoder layers. (Since the bottom encoder layer is actually bi-directional, in total there are 9 logically distinct LSTM passes in the encoder.) The attention network is a simple feedforward network with one hidden layer with 1024 nodes. All of the models use 1024 LSTM nodes per encoder and decoder layer.

# 8.1 Datasets

We evaluate our model on the WMT En→Fr dataset, the WMT En→De dataset, as well as many Google-internal production datasets. On WMT En→Fr, the training set contains 36M sentence pairs. On WMT En→De, the training set contains 5M sentence pairs. In both cases, we use newstest2014 as the test set to compare against previous work [31, 37, 45]. The combination of newstest2012 and newstest2013 is used as the development set. In addition to WMT, we also evaluate our model on some Google-internal datasets representing a wider spectrum of languages with distinct linguistic properties:
English ↔ French, English ↔ Spanish and English ↔ Chinese.

# 8.2 Evaluation Metrics

We evaluate our models using the standard BLEU score metric. To be comparable to previous work [41, 31, 45], we report tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which is also used in [31].

As is well known, BLEU score does not fully capture the quality of a translation. For that reason we also carry out side-by-side (SxS) evaluations, where we have human raters evaluate and compare the quality of two translations presented side by side for a given source sentence. Side-by-side scores range from 0 to 6, with a score of 0 meaning "completely nonsense translation", and a score of 6 meaning "perfect translation: the meaning of the translation is completely consistent with the source, and the grammar is correct". A translation is given a score of 4 if "the sentence retains most of the meaning of the source sentence, but may have some grammar mistakes", and a translation is given a score of 2 if "the sentence preserves some of the meaning of the source sentence but misses significant parts". These scores are generated by human raters who are fluent in both languages and hence often capture translation quality better than BLEU scores.
# 8.3 Training Procedure

The models are trained by a system we implemented using TensorFlow [1]. The training setup follows the classic data parallelism paradigm. There are 12 replicas running concurrently on separate machines. Every replica updates the shared parameters asynchronously. We initialize all trainable parameters uniformly between [-0.04, 0.04]. As is common wisdom in training RNN models, we apply gradient clipping (similar to [41]): all gradients are uniformly scaled down such that the norm of the modified gradients is no larger than a fixed constant, which is 5.0 in our case. If the norm of the original gradients is already smaller than or equal to the given threshold, then the gradients are not changed.
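A minimal sketch of this global-norm clipping rule (our illustration; tf.clip_by_global_norm in TensorFlow provides equivalent functionality):

```python
import numpy as np

def clip_gradients(grads, max_norm=5.0):
    """Uniformly scale all gradients so their global norm is at most max_norm;
    gradients whose norm is already within the threshold are returned unchanged."""
    global_norm = float(np.sqrt(sum(np.sum(np.square(g)) for g in grads)))
    if global_norm <= max_norm:
        return grads
    scale = max_norm / global_norm
    return [g * scale for g in grads]
```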
For the first stage of maximum likelihood training (that is, to optimize for objective function 7), we use a combination of the Adam [25] and simple SGD learning algorithms provided by the TensorFlow runtime system. We run Adam for the first 60k steps, after which we switch to simple SGD. Each step in training is a mini-batch of 128 examples. We find that Adam accelerates training at the beginning, but Adam alone converges to a worse point than a combination of Adam first, followed by SGD (Figure 5). For the Adam part, we use a learning rate of 0.0002, and for the SGD part, we use a learning rate of 0.5.
Figure 5: Log perplexity vs. steps for Adam, SGD and Adam-then-SGD on WMT En→Fr during maximum likelihood training. Adam converges much faster than SGD at the beginning. Towards the end, however, Adam-then-SGD is gradually better. Notice the bump in the red curve (Adam-then-SGD) at around 60k steps, where we switch from Adam to SGD. We suspect that this bump occurs due to the different optimization trajectories of Adam vs. SGD. When we switch from Adam to SGD, the model first suffers a little, but is able to quickly recover afterwards.
We find that it is important to also anneal the learning rate after a certain number of total steps. For the WMT En→Fr dataset, we begin to anneal the learning rate after 1.2M steps, after which we halve the learning rate every 200k steps for an additional 800k steps. On WMT En→Fr, it takes around 6 days to train a basic model using 96 NVIDIA K80 GPUs.
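The annealing schedule for the SGD phase can be written down directly; the sketch below is our illustration of the WMT En→Fr schedule described above (the exact step at which each halving takes effect is an assumption), not the production training code.

```python
def sgd_learning_rate(step, base_lr=0.5, anneal_start=1_200_000,
                      halve_every=200_000, anneal_steps=800_000):
    """Keep the learning rate constant until 1.2M steps, then halve it every
    200k steps for an additional 800k steps."""
    if step < anneal_start:
        return base_lr
    halvings = min(step - anneal_start, anneal_steps) // halve_every
    return base_lr * (0.5 ** halvings)

print(sgd_learning_rate(1_000_000))   # 0.5
print(sgd_learning_rate(1_500_000))   # 0.25
print(sgd_learning_rate(2_500_000))   # 0.03125 (no further halving after 2.0M steps)
```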
Once a model is fully converged using the ML objective, we switch to RL-based model refinement, i.e., we further optimize the objective function as in equation 9. We refine a model until the BLEU score does not change much on the development set. For this model refinement phase, we simply run the SGD optimization algorithm. The number of steps needed to refine a model varies from dataset to dataset. For WMT En→Fr, it takes around 3 days to complete 400k steps.

To prevent overfitting, we apply dropout during training with a scheme similar to [44]. For the WMT En→Fr and En→De datasets, we set the dropout probability to 0.2 and 0.3 respectively. Due to various technical reasons, dropout is only applied during the ML training phase, not during the RL refinement phase. The exact hyper-parameters vary from dataset to dataset and from model to model.
For the WMT En→De dataset, since it is significantly smaller than the WMT En→Fr dataset, we use a higher dropout probability, and also train smaller models for fewer steps overall. On the production data sets, we typically do not use dropout, and we train the models for more steps.

# 8.4 Evaluation after Maximum Likelihood Training

The models in our experiments are word-based, character-based, mixed word-character-based or several wordpiece models with varying vocabulary sizes. For the word model, we selected the most frequent 212K source words as the source vocabulary and the most popular 80k target words as the target vocabulary. Words not in the source vocabulary or the target vocabulary (unknown words) are converted into special <first_char>_UNK_<last_char> symbols. Note that, in this case, there is more than one UNK (e.g., our production word models have roughly 5000 different UNKs). We then use the attention mechanism to copy a corresponding word from the source to replace these unknown words during decoding [37].

The mixed word-character model is similar to the word model, except that the out-of-vocabulary (OOV) words are converted into sequences of characters with special delimiters around them, as described in section 4.2 in more detail. In our experiments, the vocabulary size for the mixed word-character model is 32K. For the pure character model, we simply split all words into constituent characters, resulting typically in a few hundred basic characters (including special symbols appearing in the data). For the wordpiece models, we train 3 different models with vocabulary sizes of 8K, 16K, and 32K.

Table 4 summarizes our results on the WMT En→Fr dataset. In this table, we also compare against other strong baselines without model ensembling. As can be seen from the table, "WPM-32K", a wordpiece model with a shared source and target vocabulary of 32K wordpieces, performs well on this dataset and achieves the best quality as well as the fastest inference speed. The pure character model (char input, char output) works surprisingly well on this task, not much worse than the best wordpiece models in BLEU score. However, these models are rather slow to train and slow to use, as the sequences are much longer.
Our best model, WPM-32K, achieves a BLEU score of 38.95. Note that this BLEU score represents the averaged score of 8 models we trained. The maximum BLEU score of the 8 models is higher, at 39.37. We point out that our models are completely self-contained, as opposed to previous models reported in [45], which depend on some external alignment models to achieve their best results. Also note that all our test set numbers were achieved by picking an optimal model on the development set, which was then used to decode the test set.

Note that the timing numbers for this section are obtained on CPUs, not TPUs. We use here the same CPU machine as described above, and run the decoder with a batch size of 16 sentences in parallel and a maximum of 4 concurrent hypotheses at any time per sentence. The time per sentence is the total decoding time divided by the number of respective sentences in the test set.

Table 4: Single model results on WMT En→Fr (newstest2014)

| Model | BLEU | CPU decoding time per sentence (s) |
| --- | --- | --- |
| Word | 37.90 | 0.2226 |
| Character | 38.01 | 1.0530 |
| WPM-8K | 38.27 | 0.1919 |
| WPM-16K | 37.60 | 0.1874 |
| WPM-32K | 38.95 | 0.2118 |
| Mixed Word/Character | 38.39 | 0.2774 |
| PBMT [15] | 37.0 | |
| LSTM (6 layers) [31] | 31.5 | |
| LSTM (6 layers + PosUnk) [31] | 33.1 | |
| Deep-Att [45] | 37.7 | |
| Deep-Att + PosUnk [45] | 39.2 | |

Similarly, the results on WMT En→De are presented in Table 5. Again, we find that wordpiece models achieve the best BLEU scores.
Table 5: Single model results on WMT En→De (newstest2014)

| Model | BLEU | CPU decoding time per sentence (s) |
| --- | --- | --- |
| Word | 23.12 | 0.2972 |
| Character (512 nodes) | 22.62 | 0.8011 |
| WPM-8K | 23.50 | 0.2079 |
| WPM-16K | 24.36 | 0.1931 |
| WPM-32K | 24.61 | 0.1882 |
| Mixed Word/Character | 24.17 | 0.3268 |
| PBMT [6] | 20.7 | |
| RNNSearch [37] | 16.5 | |
| RNNSearch-LV [37] | 16.9 | |
| RNNSearch-LV [37] | 16.9 | |
| Deep-Att [45] | 20.6 | |
WMT En→De is considered a more difficult task than WMT En→Fr, as it has much less training data, and German, as a more morphologically rich language, needs a huge vocabulary for word models. Thus it is more advantageous to use wordpiece or mixed word/character models, which provide a gain of more than 2 BLEU points on top of the word model and about 4 BLEU points on top of previously reported results in [6, 45]. Our best model, WPM-32K, achieves a BLEU score of 24.61, which is averaged over 8 runs. Consistently, on the production corpora, wordpiece models tend to be better than other models both in terms of speed and accuracy.

# 8.5 Evaluation of RL-refined Models

The models trained in the previous section are optimized for log-likelihood of the next step prediction, which may not correlate well with translation quality, as discussed in section 5. We use RL training to fine-tune sentence BLEU scores after normal maximum-likelihood training. The results of RL fine-tuning on the best En→Fr and En→De models are presented in Table 6, which show that fine-tuning the models with RL can improve BLEU scores. On WMT En→Fr, model refinement improves the BLEU score by close to 1 point.
On En→De, RL refinement slightly hurts the test performance, even though we observe about 0.4 BLEU points of improvement on the development set. The results presented in Table 6 are the average of 8 independent models. We also note that there is an overlap between the wins from the RL refinement and the decoder fine-tuning (i.e., the introduction of length normalization and coverage penalty). On a less fine-tuned decoder (e.g., if the decoder does beam search by log-probability only), the win from RL would have been bigger (as is evident from comparing results in Table 2 and Table 3).

Table 6: Single model test BLEU scores, averaged over 8 runs, on WMT En→Fr and En→De

| Dataset | Trained with log-likelihood | Refined with RL |
| --- | --- | --- |
| En→Fr | 38.95 | 39.92 |
| En→De | 24.67 | 24.60 |
# 8.6 Model Ensemble and Human Evaluation

We ensemble 8 RL-refined models to obtain a state-of-the-art result of 41.16 BLEU points on the WMT En→Fr dataset. Our results are reported in Table 7. We ensemble 8 RL-refined models to obtain a state-of-the-art result of 26.30 BLEU points on the WMT En→De dataset. Our results are reported in Table 8.

Finally, to better understand the quality of our models and the effect of RL refinement, we carried out a four-way side-by-side human evaluation to compare our NMT translations against the reference translations and the best phrase-based statistical machine translations.
Table 7: Model ensemble results on WMT En→Fr (newstest2014)

| Model | BLEU |
| --- | --- |
| WPM-32K (8 models) | 40.35 |
| RL-refined WPM-32K (8 models) | 41.16 |
| LSTM (6 layers) [31] | 35.6 |
| LSTM (6 layers + PosUnk) [31] | 37.5 |
| Deep-Att + PosUnk (8 models) [45] | 40.4 |

Table 8: Model ensemble results on WMT En→De (newstest2014). See Table 5 for a comparison against non-ensemble models.

| Model | BLEU |
| --- | --- |
| WPM-32K (8 models) | 26.20 |
| RL-refined WPM-32K (8 models) | 26.30 |

During the side-by-side comparison, humans are asked to rate four translations given a source sentence. The four translations are: 1) the best phrase-based translations as downloaded from http://matrix.statmt.org/systems/show/2065, 2) an ensemble of 8 ML-trained models, 3) an ensemble of 8 ML-trained and then RL-refined models, and 4) reference human translations as taken directly from newstest2014. Our results are presented in Table 9.

Table 9: Human side-by-side evaluation scores of WMT En→Fr models.

| Model | BLEU | Side-by-side averaged score |
| --- | --- | --- |
| PBMT [15] | 37.0 | 3.87 |
| NMT before RL | 40.35 | 4.46 |
| NMT after RL | 41.16 | 4.44 |
| Human | | 4.82 |

The results show that even though RL refinement can achieve better BLEU scores, it barely improves the human impression of the translation quality. This could be due to a combination of factors, including: 1) the relatively small sample size for the experiment (only 500 examples for side-by-side), 2) the improvement in BLEU score by RL is relatively small after model ensembling (0.81), which may be at a scale that human side-by-side evaluations are insensitive to, and 3) the possible mismatch between BLEU as a metric and real translation quality as perceived by human raters.
Table 11 contains some example translations from PBMT, "NMT before RL" and "Human", along with the side-by-side scores that human raters assigned to each translation (some of which we disagree with, see the table caption).

# 8.7 Results on Production Data

We have carried out extensive experiments on many Google-internal production data sets. As the experiments above cast doubt on whether RL improves the real translation quality or simply the BLEU metric, RL-based model refinement is not used during these experiments. Given the larger volume of training data available in the Google corpora, dropout is also not needed in these experiments.

In this section we describe our experiments with human perception of the translation quality. We asked human raters to rate translations in a three-way side-by-side comparison. The three sides are from: 1) translations from the production phrase-based statistical translation system used by Google, 2) translations from our GNMT system, and 3) translations by humans fluent in both languages. Reported here in Table 10 are averaged rated scores for English ↔ French, English ↔ Spanish and English ↔ Chinese.
All the GNMT models are wordpiece models, without model ensembling, and use a shared source and target vocabulary with 32K wordpieces. On each pair of languages, the evaluation data consist of 500 randomly sampled sentences from Wikipedia and news websites, and the corresponding human translations to the target language. The results show that our model reduces translation errors by more than 60% compared to the PBMT model on these major pairs of languages. A typical distribution of side-by-side scores is shown in Figure 6.

Table 10: Mean of side-by-side scores on production data. Relative Improvement: 87%, 64%, 58%, 63%, 83%, 60%.

Figure 6: Histogram of side-by-side scores on 500 sampled sentences from Wikipedia and news websites for a typical language pair, here English → Spanish (PBMT blue, GNMT red, Human orange). It can be seen that there is a wide distribution in scores, even for the human translation when rated by other humans, which shows how ambiguous the task is. It is clear that GNMT is much more accurate than PBMT.

As expected, on this metric the GNMT system also improves compared to the PBMT system. In some cases human and GNMT translations are nearly indistinguishable on the relatively simplistic and isolated sentences sampled from Wikipedia and news articles for this experiment. Note that we have observed that human raters, even though fluent in both languages, do not necessarily fully understand each randomly sampled sentence sufficiently and hence cannot necessarily generate the best possible translation or rate a given translation accurately. Also note that, although the scale for the scores goes from 0 (complete nonsense) to 6 (perfect translation), the human translations receive an imperfect score of only around 5 in Table 10, which shows possible ambiguities in the translations and also possibly non-calibrated raters and translators with a varying level of proficiency. Testing our GNMT system on particularly difficult translation cases and longer inputs than just single sentences is the subject of future work.
# 9 Conclusion

In this paper, we describe in detail the implementation of Google's Neural Machine Translation (GNMT) system, including all the techniques that are critical to its accuracy, speed, and robustness. On the public WMT'14 translation benchmark, our system's translation quality approaches or surpasses all currently published results. More importantly, we also show that our approach carries over to much larger production data sets, which have several orders of magnitude more data, to deliver high quality translations.
Our key findings are: 1) that wordpiece modeling effectively handles open vocabularies and the challenge of morphologically rich languages for translation quality and inference speed, 2) that a combination of model and data parallelism can be used to efficiently train state-of-the-art sequence-to-sequence NMT models in roughly a week, 3) that model quantization drastically accelerates translation inference, allowing the use of these large models in a deployed production environment, and 4) that many additional details like length normalization, coverage penalties, and similar are essential to making NMT systems work well on real data. Using human-rated side-by-side comparison as a metric, we show that our GNMT system approaches the accuracy achieved by average bilingual human translators on some of our test sets. In particular, compared to the previous phrase-based production system, this GNMT system delivers roughly a 60% reduction in translation errors on several popular language pairs.
# Acknowledgements

We would like to thank the entire Google Brain Team and Google Translate Team for their foundational contributions to this project.

# References

[1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zheng, X. TensorFlow:
A system for large-scale machine learning. Tech. rep., Google Brain, 2016. arXiv preprint. [2] Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (2015). [3] Brown, P., Cocke, J., Pietra, S. D., Pietra, V. D., Jelinek, F., Mercer, R., and Roossin, P.
A statistical approach to language translation. In Proceedings of the 12th Conference on Computational Linguistics - Volume 1 (Stroudsburg, PA, USA, 1988), COLING '88, Association for Computational Linguistics, pp. 71-76. [4] Brown, P. F., Cocke, J., Pietra, S. A. D., Pietra, V. J. D., Jelinek, F., Lafferty, J. D., Mercer, R. L., and Roossin, P. S.
A statistical approach to machine translation. Computational Linguistics 16, 2 (1990), 79-85. [5] Brown, P. F., Pietra, V. J. D., Pietra, S. A. D., and Mercer, R. L. The mathematics of statistical machine translation: Parameter estimation. Comput. Linguist. 19, 2 (June 1993), 263-311. [6] Buck, C., Heafield, K., and Van Ooyen, B.
N-gram counts and language models from the common crawl. In LREC (2014), vol. 2, Citeseer, p. 4. [7] Cho, K., van Merrienboer, B., Gülçehre, Ç., Bougares, F., Schwenk, H., and Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (2014).
[8] Chrisman, L. Learning recursive distributed representations for holistic computation. Connection Science 3, 4 (1991), 345-366. [9] Chung, J., Cho, K., and Bengio, Y. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147 (2016). [10] Chung, J., Cho, K., and Bengio, Y. A character-level decoder without explicit segmentation for neural machine translation. CoRR abs/1603.06147 (2016). [11] Costa-Jussà, M. R., and Fonollosa, J. A. R.
Character-based neural machine translation. CoRR abs/1603.00810 (2016). [12] Dean, J., Corrado, G. S., Monga, R., Chen, K., Devin, M., Le, Q. V., Mao, M. Z., Ranzato, M., Senior, A., Tucker, P., Yang, K., and Ng, A. Y. Large scale distributed deep networks. In NIPS (2012). [13] Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R. M., and Makhoul, J. Fast and robust neural network joint models for statistical machine translation. In ACL (1) (2014), Citeseer, pp. 1370-1380.
[14] Dong, D., Wu, H., He, W., Yu, D., and Wang, H. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (2015), pp. 1723-1732. [15] Durrani, N., Haddow, B., Koehn, P., and Heafield, K. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation (2014), Association for Computational Linguistics, Baltimore, MD, USA, pp. 97-104.
[16] Fahlman, S. E., and Lebiere, C. The cascade-correlation learning architecture. In Advances in Neural Information Processing Systems 2 (1990), Morgan Kaufmann, pp. 524-532. [17] Gers, F. A., Schmidhuber, J., and Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Computation 12, 10 (2000), 2451-2471. [18] Gülçehre, Ç., Ahn, S., Nallapati, R., Zhou, B., and Bengio, Y. Pointing the unknown words. CoRR abs/1603.08148 (2016). [19] Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. Deep learning with limited numerical precision. CoRR abs/1502.02551 (2015). [20] Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding.
CoRR abs/1510.00149 (2015). [21] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (2015). [22] Hochreiter, S., Bengio, Y., Frasconi, P., and Schmidhuber, J. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001. [23] Hochreiter, S., and Schmidhuber, J. Long short-term memory. Neural Computation 9, 8 (1997), 1735-1780.
[24] Kalchbrenner, N., and Blunsom, P. Recurrent continuous translation models. In Conference on Empirical Methods in Natural Language Processing (2013). [25] Kingma, D. P., and Ba, J. Adam: A method for stochastic optimization. CoRR abs/1412.6980 (2014). [26] Koehn, P., Och, F. J., and Marcu, D. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics (2003). [27] Li, F., and Liu, B. Ternary weight networks. CoRR abs/1605.04711 (2016).
[28] Luong, M., and Manning, C. D. Achieving open vocabulary neural machine translation with hybrid word-character models. CoRR abs/1604.00788 (2016). [29] Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O., and Kaiser, L. Multi-task sequence to sequence learning. In International Conference on Learning Representations (2015). [30] Luong, M.-T., Pham, H., and Manning, C. D.
Effective approaches to attention-based neural machine translation. In Conference on Empirical Methods in Natural Language Processing (2015). [31] Luong, M.-T., Sutskever, I., Le, Q. V., Vinyals, O., and Zaremba, W. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (2015). [32] Norouzi, M., Bengio, S., Chen, Z., Jaitly, N., Schuster, M., Wu, Y., and Schuurmans, D. Reward augmented maximum likelihood for neural structured prediction. In Neural Information Processing Systems (2016). [33] Pascanu, R., Mikolov, T., and Bengio, Y. Understanding the exploding gradient problem. CoRR abs/1211.5063 (2012). [34] Ranzato, M., Chopra, S., Auli, M., and Zaremba, W.
Sequence level training with recurrent neural networks. In International Conference on Learning Representations (2015). [35] Schuster, M., and Nakajima, K. Japanese and Korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (2012). [36] Schuster, M., and Paliwal, K. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45, 11 (Nov. 1997), 2673-2681. [37] Sébastien, J., Kyunghyun, C., Memisevic, R., and Bengio, Y. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (2015). [38] Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016). [39] Shen, S., Cheng, Y., He, Z., He, W., Wu, H., Sun, M., and Liu, Y. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016). [40] Srivastava, R. K., Greff, K., and Schmidhuber, J. Highway networks. CoRR abs/1505.00387 (2015).
[41] Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (2014), pp. 3104-3112. [42] Tu, Z., Lu, Z., Liu, Y., Liu, X., and Li, H. Coverage-based neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016). [43] Wu, J., Leng, C., Wang, Y., Hu, Q., and Cheng, J. Quantized convolutional neural networks for mobile devices. CoRR abs/1512.06473 (2015). [44] Zaremba, W., Sutskever, I., and Vinyals, O. Recurrent neural network regularization, 2014. [45] Zhou, J., Cao, Y., Wang, X., Li, P., and Xu, W. Deep recurrent models with fast-forward connections for neural machine translation. CoRR abs/1606.04199 (2016).
Table 11: Some example translations from PBMT [15], our GNMT system (the "NMT before RL" model in Table 9), and Human. Source and target sentences (human translations) are from the public benchmark WMT En→Fr (newstest2014) data set. The right-hand column shows the human ratings on a scale of 0 (complete nonsense) to 6 (perfect translation). We disagree with some of the human ratings, e.g., the translation "Elle a été repéré trois jours plus tard par un promeneur de chien piégé dans la carrière" contains grammatical mistakes and changes semantics, and is still scored 6. We present it to illustrate the potential problems of the scoring process.
Elle a été repéré trois jours plus tard par un promeneur de chien piégé dans la carrièreâ contains grammatical mistakes and changes semantics, and is still scored 6. We present it to illustrate the potential problems of the scoring process. Source PBMT GNMT Human Source PBMT GNMT Human Source PBMT GNMT "The reason Boeing are doing this is to cram more seats in to make their plane more competitive with our products," said Kevin Keniston, head of passenger comfort at Europeâ s Airbus. "La raison pour laquelle Boeing sont en train de faire, câ est de concentrer davantage de sièges pour prendre leur avion plus compétitive avec nos produits", a déclaré Kevin M. Keniston, chef du confort des passagers de lâ Airbus de lâ
1609.08144#78
1609.08144#80
1609.08144
[ "1603.06147" ]
1609.08144#80
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Europe. "La raison pour laquelle Boeing fait cela est de créer plus de sièges pour rendre son avion plus compétitif avec nos produits", a déclaré Kevin Keniston, chef du confort des passagers chez Airbus. "Boeing fait ça pour pouvoir caser plus de sièges et rendre ses avions plus compétitifs par rapports à nos produits", a déclaré Kevin Keniston, directeur de Confort Passager chez lâ avionneur européen Airbus. When asked about this, an oï¬ cial of the American administration replied: "The United States is not conducting electronic surveillance aimed at oï¬ ces of the World Bank and IMF in Washington." Interrogé à ce sujet, un responsable de lâ administration américaine a répondu : "Les Etats-Unis nâ est pas eï¬ ectuer une surveillance électronique destiné aux bureaux de la Banque mondiale et du FMI à Washington". Interrogé à ce sujet, un fonctionnaire de lâ administration américaine a répondu: "Les à tats-Unis nâ eï¬ ectuent pas de surveillance électronique à lâ
1609.08144#79
1609.08144#81
1609.08144
[ "1603.06147" ]
1609.08144#81
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
intention des bureaux de la Banque mondiale et du FMI à Washington". Interrogé sur le sujet, un responsable de lâ administration américaine a répondu: "les Etats-Unis ne mènent pas de surveillance électronique visant les sièges de la Banque mondiale et du FMI à Washington". Martin told CNN that he asked Daley whether his then-boss knew about the potential shuï¬
1609.08144#80
1609.08144#82
1609.08144
[ "1603.06147" ]
1609.08144#82
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
e. Martin a déclaré à CNN quâ il a demandé Daley si son patron de lâ époque connaissaient le potentiel remaniement ministériel. Martin a dit à CNN quâ il avait demandé à Daley si son patron dâ alors était au courant du remaniement potentiel. Martin a dit sur CNN quâ il avait demandé à Daley si son patron dâ alors était au courant du remaniement éventuel. She was spotted three days later by a dog walker trapped in the quarry Human Source PBMT Elle a été repéré trois jours plus tard par un promeneur de chien piégé dans la carrière GNMT Elle a été repérée trois jours plus tard par un traîneau à chiens piégé dans la carrière.
1609.08144#81
1609.08144#83
1609.08144
[ "1603.06147" ]
1609.08144#83
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Human Source PBMT GNMT 3.0 6.0 6.0 3.0 6.0 6.0 2.0 6.0 5.0 6.0 2.0 5.0 5.0 2.0 Elle a été repérée trois jours plus tard par une personne qui promenait son chien coincée dans la carrière Analysts believe the country is unlikely to slide back into full-blown conï¬ ict, but recent events have unnerved foreign investors and locals. Les analystes estiment que le pays a peu de chances de retomber dans un conï¬ it total, mais les événements récents ont inquiété les investisseurs étrangers et locaux. Selon les analystes, il est peu probable que le pays retombe dans un conï¬ it généralisé, mais les événements récents ont attiré des investisseurs étrangers et des habitants locaux. Les analystes pensent que le pays ne devrait pas retomber dans un conï¬ it ouvert, mais les récents évènements ont ébranlé les investisseurs étrangers et la population locale.
1609.08144#82
1609.08144#84
1609.08144
[ "1603.06147" ]
1609.08144#84
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
# Human 23 5.0
arXiv:1609.07410v2 [stat.ML] 29 Oct 2016

# One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities

Michalis K. Titsias
Department of Informatics
Athens University of Economics and Business
[email protected]

# Abstract
The softmax representation of probabilities for categorical variables plays a promi- nent role in modern machine learning with numerous applications in areas such as large scale classiï¬ cation, neural language modeling and recommendation systems. However, softmax estimation is very expensive for large scale inference because of the high cost associated with computing the normalizing constant. Here, we in- troduce an efï¬ cient approximation to softmax probabilities which takes the form of a rigorous lower bound on the exact probability. This bound is expressed as a product over pairwise probabilities and it leads to scalable estimation based on stochastic optimization. It allows us to perform doubly stochastic estimation by subsampling both training instances and class labels. We show that the new bound has interesting theoretical properties and we demonstrate its use in classiï¬
cation problems.

# 1 Introduction

Based on the softmax representation, the probability of a variable y to take the value k ∈ {1, . . . , K}, where K is the number of categorical symbols or classes, is modeled by

p(y = k|x) = \frac{e^{f_k(x;w)}}{\sum_{m=1}^{K} e^{f_m(x;w)}},   (1)

where each f_k(x; w) is often referred to as the score function and it is a real-valued function indexed by an input vector x and parameterized by w. The score function measures the compatibility of input x with symbol y = k, so that the higher the score is the more compatible x becomes with y = k. The most common application of softmax is multiclass classification, where x is an observed input vector and f_k(x; w) is often chosen to be a linear function or more generally a non-linear function such as a neural network (Bishop, 2006; Goodfellow et al., 2016). Several other applications of softmax arise, for instance, in neural language modeling for learning word vector embeddings (Mnih and Teh, 2012; Mikolov et al., 2013; Pennington et al., 2014) and also in collaborative filtering for representing probabilities of (user, item) pairs (Paquet et al., 2012). In such applications the number of symbols K can often be very large, e.g. of the order of tens of thousands or millions, which makes the computation of softmax probabilities very expensive due to the large sum in the normalizing constant of Eq. (1). Thus, exact training procedures based on maximum likelihood or Bayesian approaches are computationally prohibitive and approximations are needed. While some rigorous bound-based approximations to the softmax exist (Bouchard, 2007), they are not so accurate or scalable and therefore it would be highly desirable to develop accurate and computationally efficient approximations.

In this paper we introduce a new efficient approximation to softmax probabilities which takes the form of a lower bound on the probability of Eq. (1). This bound draws an interesting connection between the exact softmax probability and all its one-vs-each pairwise probabilities, and it has several
29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. desirable properties. Firstly, for the non-parametric estimation case it leads to an approximation of the likelihood that shares the same global optimum with exact maximum likelihood, and thus estima- tion based on the approximation is a perfect surrogate for the initial estimation problem. Secondly, the bound allows for scalable learning through stochastic optimization where data subsampling can be combined with subsampling categorical symbols. Thirdly, whenever the initial exact softmax cost function is convex the bound remains also convex. Regarding related work, there exist several other methods that try to deal with the high cost of softmax such as methods that attempt to perform the exact computations (Gopal and Yang, 2013; Vijayanarasimhan et al., 2014), methods that change the model based on hierarchical or stick- breaking constructions (Morin and Bengio, 2005; Khan et al., 2012) and sampling-based methods (Bengio and Sénécal, 2003; Mikolov et al., 2013; Devlin et al., 2014; Ji et al., 2015). Our method is a lower bound based approach that follows the variational inference framework. Other rigorous variational lower bounds on the softmax have been used before (Bohning, 1992; Bouchard, 2007), however they are not easily scalable since they require optimizing data-speciï¬ c variational param- eters. In contrast, the bound we introduce in this paper does not contain any variational parameter, which greatly facilitates stochastic minibatch training. At the same time it can be much tighter than previous bounds (Bouchard, 2007) as we will demonstrate empirically in several classiï¬ cation datasets. # 2 One-vs-each lower bound on the softmax Here, we derive the new bound on the softmax (Section 2.1) and we prove its optimality property when performing approximate maximum likelihood estimation (Section 2.2). Such a property holds for the non-parametric case, where we estimate probabilities of the form p(y = k), without condi- tioning on some x, so that the score functions fk(x; w) reduce to unrestricted parameters fk; see Eq. (2) below.
Finally, we also analyze the related bound derived by Bouchard (Bouchard, 2007) and we compare it with our approach (Section 2.3).

# 2.1 Derivation of the bound

Consider a discrete random variable y ∈ {1, . . . , K} that takes the value k with probability

p(y = k) = \mathrm{Softmax}_k(f_1, \dots, f_K) = \frac{e^{f_k}}{\sum_{m=1}^{K} e^{f_m}},   (2)

where each f_k is a free real-valued scalar parameter. We wish to express a lower bound on p(y = k) and the key step of our derivation is to re-write p(y = k) as

p(y = k) = \frac{1}{1 + \sum_{m \neq k} e^{-(f_k - f_m)}}.   (3)

Then, by exploiting the fact that for any non-negative numbers α_1 and α_2 it holds that 1 + α_1 + α_2 ≤ 1 + α_1 + α_2 + α_1 α_2 = (1 + α_1)(1 + α_2), and more generally that 1 + \sum_i α_i ≤ \prod_i (1 + α_i) where each α_i ≥ 0, we obtain the following lower bound on the above probability,

p(y = k) ≥ \prod_{m \neq k} \frac{1}{1 + e^{-(f_k - f_m)}} = \prod_{m \neq k} \frac{e^{f_k}}{e^{f_k} + e^{f_m}} = \prod_{m \neq k} σ(f_k - f_m),   (4)

where σ(·) denotes the sigmoid function. Clearly, the terms in the product are pairwise probabilities each corresponding to the event y = k conditional on the union of pairs of events, i.e. y ∈ {k, m} where m is one of the remaining values. We will refer to this bound as the one-vs-each bound on the softmax probability, since it involves K − 1 comparisons of a specific event y = k versus each of the K − 1 remaining events.
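To make the inequality in Eq. (4) concrete, the short NumPy check below (an illustrative sketch, not code from the paper) evaluates the exact softmax probabilities and the one-vs-each product of sigmoids for a random score vector and confirms that the bound never exceeds the exact probability.

```python
import numpy as np

def softmax(f):
    # Exact softmax probabilities, Eq. (2).
    z = np.exp(f - f.max())
    return z / z.sum()

def one_vs_each_bound(f):
    # One-vs-each lower bound, Eq. (4): prod_{m != k} sigmoid(f_k - f_m).
    K = len(f)
    bounds = np.empty(K)
    for k in range(K):
        diffs = f[k] - np.delete(f, k)           # f_k - f_m for all m != k
        bounds[k] = np.prod(1.0 / (1.0 + np.exp(-diffs)))
    return bounds

rng = np.random.default_rng(0)
f = rng.normal(size=10)                           # random score vector, K = 10
p_exact = softmax(f)
p_lower = one_vs_each_bound(f)
assert np.all(p_lower <= p_exact + 1e-12)         # the bound holds for every k
print(np.column_stack([p_exact, p_lower]))
```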
Furthermore, the above result can be stated more generally to define bounds on arbitrary probabilities as the following statement shows.

Proposition 1. Assume a probability model with state space Ω and probability measure P(·). For any event A ⊂ Ω and an associated countable set of disjoint events {B_i} such that ∪_i B_i = Ω \ A, it holds

P(A) ≥ \prod_i P(A | A ∪ B_i).   (5)

Proof. Given that P(A) = \frac{P(A)}{P(A) + \sum_i P(B_i)} (since P(Ω) = P(A) + \sum_i P(B_i) = 1), the result follows by applying the inequality 1 + \sum_i α_i ≤ \prod_i (1 + α_i) exactly as done above for the softmax parameterization.
Remark. If the set {B_i} consists of a single event B then by definition B = Ω \ A and the bound is exact since in such case P(A | A ∪ B) = P(A). Furthermore, based on the above construction we can express a full class of hierarchically ordered bounds. For instance, if we merge two events B_i and B_j into a single one, then the term P(A | A ∪ B_i) P(A | A ∪ B_j) in the initial bound is replaced with P(A | A ∪ B_i ∪ B_j) and the associated new bound, obtained after this merge, can only become tighter.
To see a more specific example in the softmax probabilistic model, assume a small subset of categorical symbols C_k, that does not include k, and denote the remaining symbols excluding k as \bar{C}_k so that {k} ∪ C_k ∪ \bar{C}_k = {1, . . . , K}. Then, a tighter bound, that exists higher in the hierarchy than the one-vs-each bound (see Eq. 4), takes the form

p(y = k) ≥ \mathrm{Softmax}_k(f_k, f_{C_k}) × \mathrm{Softmax}_k(f_k, f_{\bar{C}_k}) ≥ \mathrm{Softmax}_k(f_k, f_{C_k}) × \prod_{m ∈ \bar{C}_k} σ(f_k − f_m),   (6)
where \mathrm{Softmax}_k(f_k, f_{C_k}) = \frac{e^{f_k}}{e^{f_k} + \sum_{m ∈ C_k} e^{f_m}}. For simplicity of our presentation in the remaining of the paper we do not discuss further these more general bounds and we focus only on the one-vs-each bound.

The computationally useful aspect of the bound in Eq. (4) is that it factorizes into a product, where each factor depends only on a pair of parameters (f_k, f_m). Crucially, this avoids the evaluation of the normalizing constant associated with the global probability in Eq. (2) and, as discussed in Section 3, it leads to scalable training using stochastic optimization that can deal with very large K. Furthermore, approximate maximum likelihood estimation based on the bound can be very accurate and, as shown in the next section, it is exact for the non-parametric estimation case.

The fact that the one-vs-each bound in (4) is a product of pairwise probabilities suggests that there is a connection with Bradley-Terry (BT) models (Bradley and Terry, 1952; Huang et al., 2006) for learning individual skills from paired comparisons and the associated multiclass classification systems obtained by combining binary classifiers, such as one-vs-rest and one-vs-one approaches (Huang et al., 2006). Our method differs from BT models, since we do not combine binary probabilistic models to a posteriori form a multiclass model. Instead, we wish to develop scalable approximate algorithms that can surrogate the training of multiclass softmax-based models by maximizing lower bounds on the exact likelihoods of these models.

# 2.2 Optimality of the bound for maximum likelihood estimation

Assume a set of observations (y_1, . . . , y_N) where each y_i ∈ {1, . . . , K}. The log likelihood of the data takes the form

L(f) = \log \prod_{i=1}^{N} p(y_i) = \log \prod_{k=1}^{K} p(y = k)^{N_k},   (7)

where f = (f_1, . . . , f_K) and N_k denotes the number of data points with value k.
By substituting p(y = k) from Eq. (2) and then taking derivatives with respect to f we arrive at the standard stationary conditions of the maximum likelihood solution,

\frac{N_k}{N} = \frac{e^{f_k}}{\sum_{m=1}^{K} e^{f_m}}, \quad k = 1, . . . , K.   (8)

These stationary conditions are satisfied for f_k = \log N_k + c where c ∈ R is an arbitrary constant. What is rather surprising is that the same solutions f_k = \log N_k + c satisfy also the stationary conditions when maximizing a lower bound on the exact log likelihood obtained from the product of one-vs-each probabilities. More precisely, by replacing p(y = k) with the bound from Eq. (4) we obtain a lower bound on the exact log likelihood,

F(f) = \log \prod_{k=1}^{K} \left( \prod_{m \neq k} \frac{e^{f_k}}{e^{f_k} + e^{f_m}} \right)^{N_k} = \sum_{k > m} \log P(f_k, f_m),   (9)
where P(f_k, f_m) = \left( \frac{e^{f_k}}{e^{f_k} + e^{f_m}} \right)^{N_k} \left( \frac{e^{f_m}}{e^{f_k} + e^{f_m}} \right)^{N_m} is a likelihood involving only the data of the pair of states (k, m), while there exist K(K−1)/2 possible such pairs. If instead of maximizing the exact log likelihood from Eq. (7) we maximize the lower bound we obtain the same parameter estimates.

Proposition 2. The maximum likelihood parameter estimates f_k = \log N_k + c, k = 1, . . . , K for the exact log likelihood from Eq. (7) globally also maximize the lower bound from Eq. (9).

Proof. By computing the derivatives of F(f) we obtain the following stationary conditions
N_k (K − 1) = \sum_{m \neq k} (N_k + N_m) \frac{e^{f_k}}{e^{f_k} + e^{f_m}}, \quad k = 1, . . . , K,   (10)

which form a system of K non-linear equations over the unknowns (f_1, . . . , f_K). By substituting the values f_k = \log N_k + c we can observe that all K equations are simultaneously satisfied, which means that these values are solutions. Furthermore, since F(f) is a concave function of f we can conclude that the solutions f_k = \log N_k + c globally maximize F(f).
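As an illustrative check of Proposition 2 (not part of the paper), the sketch below draws random counts N_k, sets f_k = log N_k, and verifies both that the stationary conditions of Eq. (10) hold and that random perturbations of f never increase the bound F(f) of Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8
N = rng.integers(1, 200, size=K).astype(float)    # counts N_k per symbol

def F(f):
    # One-vs-each bound on the log likelihood, Eq. (9):
    # sum_k N_k * sum_{m != k} log sigmoid(f_k - f_m)
    total = 0.0
    for k in range(K):
        for m in range(K):
            if m != k:
                total += N[k] * -np.log1p(np.exp(-(f[k] - f[m])))
    return total

f_star = np.log(N)                                 # candidate maximizer f_k = log N_k + c (c = 0)

# Stationary conditions of Eq. (10): N_k (K-1) = sum_{m != k} (N_k + N_m) sigmoid(f_k - f_m)
for k in range(K):
    rhs = sum((N[k] + N[m]) / (1.0 + np.exp(-(f_star[k] - f_star[m])))
              for m in range(K) if m != k)
    assert np.isclose(N[k] * (K - 1), rhs)

# Random perturbations should never improve the bound.
for _ in range(200):
    assert F(f_star) >= F(f_star + 0.1 * rng.normal(size=K)) - 1e-9
print("f_k = log N_k satisfies Eq. (10) and maximizes F")
```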
Remark. Not only is F(f) globally maximized by setting f_k = \log N_k + c, but also each pairwise likelihood P(f_k, f_m) in Eq. (9) is separately maximized by the same setting of parameters.

# 2.3 Comparison with Bouchard's bound

Bouchard (Bouchard, 2007) proposed a related bound that next we analyze in terms of its ability to approximate the exact maximum likelihood training in the non-parametric case, and then we compare it against our method. Bouchard (Bouchard, 2007) was motivated by the problem of applying variational Bayesian inference to multiclass classification and he derived the following upper bound on the log-sum-exp function,

\log \sum_{m=1}^{K} e^{f_m} ≤ α + \sum_{m=1}^{K} \log \left( 1 + e^{f_m − α} \right),   (11)
where α ∈ R is a variational parameter that needs to be optimized in order for the bound to become as tight as possible. The above induces a lower bound on the softmax probability p(y = k) from Eq. (2) that takes the form

p(y = k) ≥ \frac{e^{f_k − α}}{\prod_{m=1}^{K} (1 + e^{f_m − α})}.   (12)

This is not the same as Eq. (4), since there is not a value for α for which the above bound will reduce to our proposed one. For instance, if we set α = f_k, then Bouchard's bound becomes half the one in Eq. (4) due to the extra term 1 + e^{f_k − f_k} = 2 in the product in the denominator (notice that the product in Eq. (4) excludes the value k, while Bouchard's bound includes it). Furthermore, such a value for α may not be the optimal one and in practice α must be chosen by minimizing the upper bound in Eq. (11). While such an optimization is a convex problem, it requires iterative optimization since there is not in general an analytical solution for α. However, for the simple case where K = 2 we can analytically find the optimal α and the optimal f parameters. The following proposition carries out this analysis and provides a clear understanding of how Bouchard's bound behaves when applied for approximate maximum likelihood estimation.

Proposition 3. Assume that K = 2 and we approximate the probabilities p(y = 1) and p(y = 2) from (2) with the corresponding Bouchard's bounds given by \frac{e^{f_1 − α}}{(1+e^{f_1 − α})(1+e^{f_2 − α})} and \frac{e^{f_2 − α}}{(1+e^{f_1 − α})(1+e^{f_2 − α})}. These bounds are used to approximate the maximum likelihood solution by maximizing a bound F(f_1, f_2, α) which is globally maximized for

α = \frac{f_1 + f_2}{2}, \quad f_k = 2 \log N_k + c, \quad k = 1, 2.   (13)

The proof of the above is given in the Appendix. Notice that the above estimates are biased so that the probability of the most populated class (say y = 1, for which N_1 > N_2) is overestimated
while the other probability is underestimated. This is due to the factor 2 that multiplies \log N_1 and \log N_2 in (13). Also notice that the solution α = (f_1 + f_2)/2 is not a general trend, i.e. for K > 2 the optimal α is not the mean of the f_k's. In such cases approximate maximum likelihood estimation based on Bouchard's bound requires iterative optimization.

Figure 1a shows some estimated softmax probabilities, using a dataset of 200 points each taking one out of ten values, where f is found by exact maximum likelihood, the proposed one-vs-each bound and Bouchard's method. As expected, estimation based on the bound in Eq. (4) gives the exact probabilities, while Bouchard's bound tends to overestimate large probabilities and underestimate small ones.
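Proposition 3's bias is easy to reproduce numerically. The sketch below (an illustrative check, not code from the paper) takes two classes with counts N_1 and N_2, plugs the closed-form solution of Eq. (13) into the softmax, and compares the implied probabilities with the exact maximum-likelihood estimates N_k / N that the one-vs-each bound recovers.

```python
import numpy as np

N1, N2 = 150.0, 50.0                        # class counts, N1 > N2
N = N1 + N2

# Exact ML (also recovered by the one-vs-each bound): p_k = N_k / N
p_exact = np.array([N1, N2]) / N

# Bouchard-based estimates (Proposition 3): f_k = 2 log N_k + c, then softmax of f
f_bouchard = 2.0 * np.log(np.array([N1, N2]))
z = np.exp(f_bouchard - f_bouchard.max())
p_bouchard = z / z.sum()

print("exact / one-vs-each:", p_exact)      # [0.75 0.25]
print("Bouchard (K = 2):   ", p_bouchard)   # [0.9 0.1] -> larger class overestimated
```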
Figure 1: (a) shows the probabilities estimated by exact softmax (blue bar), one-vs-each approximation (red bar) and Bouchard's method (green bar). (b) shows the 5-class artificial data together with the decision boundaries found by exact softmax (blue line), one-vs-each (red line) and Bouchard's bound (green line). (c) shows the maximized (approximate) log likelihoods for the different approaches when applied to the data of panel (b) (see Section 3). Notice that the blue line in (c) is the exact maximized log likelihood while the remaining lines correspond to lower bounds.

# 3 Stochastic optimization for extreme classification
Here, we return to the general form of the softmax probabilities as defined by Eq. (1) where the score functions are indexed by input x and parameterized by w. We consider a classification task where, given a training set {x_n, y_n}_{n=1}^{N} with y_n ∈ {1, . . . , K}, we wish to fit the parameters w by maximizing the log likelihood

L = \log \prod_{n=1}^{N} \frac{e^{f_{y_n}(x_n; w)}}{\sum_{m=1}^{K} e^{f_m(x_n; w)}}.   (14)
When the number of training instances is very large, the above maximization can be carried out by applying stochastic gradient descent (by minimizing −L) where we cycle over minibatches. However, this stochastic optimization procedure cannot deal with large values of K because the normalizing constant in the softmax couples all score functions, so that the log likelihood cannot be expressed as a sum across class labels. To overcome this, we can use the one-vs-each lower bound on the softmax probability from Eq. (4) and obtain the following lower bound on the previous log likelihood,
F = \log \prod_{n=1}^{N} \prod_{m \neq y_n} \frac{1}{1 + e^{-[f_{y_n}(x_n; w) − f_m(x_n; w)]}} = − \sum_{n=1}^{N} \sum_{m \neq y_n} \log \left( 1 + e^{-[f_{y_n}(x_n; w) − f_m(x_n; w)]} \right),   (15)

which now consists of a sum over both data points and labels. Interestingly, the sum over the labels, m ≠ y_n, runs over all remaining classes that are different from the label y_n assigned to x_n. Each term in the sum is a logistic regression cost, that depends on the pairwise score difference f_{y_n}(x_n; w) − f_m(x_n; w), and encourages the n-th data point to get separated from the m-th remaining class. The above lower bound can be optimized by stochastic gradient descent by subsampling terms in the double sum in Eq. (15), thus resulting in a doubly stochastic approximation scheme. Next we further discuss the stochasticity associated with subsampling remaining classes. The gradient for the cost associated with a single training instance (x_n, y_n) is
∇F_n = \sum_{m \neq y_n} σ\left( f_m(x_n; w) − f_{y_n}(x_n; w) \right) \left[ ∇_w f_{y_n}(x_n; w) − ∇_w f_m(x_n; w) \right].   (16)

This gradient consists of a weighted sum where the sigmoidal weights σ(f_m(x_n; w) − f_{y_n}(x_n; w)) quantify the contribution of the remaining classes to the whole gradient; the more a remaining class overlaps with y_n (given x_n) the higher its contribution is. A simple way to get an unbiased stochastic estimate of (16) is to randomly subsample a small subset of remaining classes from the set {m | m ≠ y_n}. More advanced schemes could be based on importance sampling where we introduce a proposal distribution p_n(m) defined on the set {m | m ≠ y_n} that could favor selecting classes with large sigmoidal weights. While such more advanced schemes could reduce variance, they require prior knowledge (or on-the-fly learning) about how classes overlap with one another. Thus, in Section 4 we shall experiment only with the simple random subsampling approach and leave the above advanced schemes for future work.
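A minimal doubly stochastic training loop built on Eq. (15)-(16) might look as follows. This is an illustrative sketch (linear scores f_k(x; w) = w_k^T x, plain SGD, uniform subsampling of remaining classes, no regularization), not the authors' implementation, and the toy data at the end are hypothetical.

```python
import numpy as np

def ove_sgd(X, y, K, n_sampled=2, batch_size=20, lr=0.1, epochs=10, seed=0):
    """Doubly stochastic SGD that ascends the one-vs-each bound (Eq. 15) with linear scores."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    W = np.zeros((K, D))
    for _ in range(epochs):
        perm = rng.permutation(N)
        for start in range(0, N, batch_size):
            idx = perm[start:start + batch_size]
            grad = np.zeros_like(W)
            for n in idx:
                x_n, y_n = X[n], y[n]
                # Uniformly subsample a few of the K-1 remaining classes (unbiased after rescaling).
                others = rng.choice(np.delete(np.arange(K), y_n), size=n_sampled, replace=False)
                scale = (K - 1) / n_sampled
                for m in others:
                    s = 1.0 / (1.0 + np.exp(W[y_n] @ x_n - W[m] @ x_n))  # sigma(f_m - f_{y_n}), Eq. (16)
                    grad[y_n] += scale * s * x_n
                    grad[m] -= scale * s * x_n
            W += lr * grad / len(idx)              # ascend the lower bound
    return W

# Hypothetical toy data: 200 two-dimensional points around 5 random class means.
rng = np.random.default_rng(1)
K, D = 5, 2
means = rng.normal(scale=3.0, size=(K, D))
y = rng.integers(0, K, size=200)
X = means[y] + rng.normal(size=(200, D))
W = ove_sgd(X, y, K)
print("training accuracy:", np.mean((X @ W.T).argmax(axis=1) == y))
```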
To illustrate the above stochastic gradient descent algorithm we simulated a two-dimensional data set of 200 instances, shown in Figure 1b, that belong to five classes. We consider a linear classification model where the score functions take the form f_k(x_n, w) = w_k^T x_n and where the full set of parameters is w = (w_1, . . . , w_K). We consider minibatches of size ten to approximate the sum over data points and subsets of remaining classes of size one to approximate the sum over labels. Figure 1c shows the stochastic evolution of the approximate log likelihood (dashed red line), i.e. the unbiased subsampling based approximation of (15), together with the maximized exact softmax log likelihood (blue line), the non-stochastically maximized approximate lower bound from (15) (red solid line) and Bouchard's method (green line). To apply Bouchard's method we construct a lower bound on the log likelihood by replacing each softmax probability with the bound from (12), where we also need to optimize a separate variational parameter α_n for each data point. As shown in Figure 1c our method provides a tighter lower bound than Bouchard's method despite the fact that it does not contain any variational parameters. Also, Bouchard's method can become very slow when combined with stochastic gradient descent since it requires tuning a separate variational parameter α_n for each training instance. Figure 1b also shows the decision boundaries discovered by the exact softmax, one-vs-each bound and Bouchard's bound. Finally, the actual parameter values found by maximizing the one-vs-each bound were remarkably close (although not identical) to the parameters found by the exact softmax.

# 4 Experiments

# 4.1 Toy example in large scale non-parametric estimation

Here, we illustrate the ability to stochastically maximize the bound in Eq. (9) for the simple non-parametric estimation case. In such a case, we can also maximize the bound based on the analytic formulas and therefore we will be able to test how well the stochastic algorithm can approximate the optimal/known solution. We consider a data set of N = 10^6 instances each taking one out of K = 10^4 possible categorical values. The data were generated from a distribution p(k) ∝ u_k^2, where each u_k was randomly chosen in [0, 1]. The probabilities estimated based on the analytic formulas are shown in Figure 2a.
To stochastically estimate these probabilities we follow the doubly stochastic framework of Section 3 so that we subsample data instances of minibatch size b = 100 and for each instance we subsample 10 remaining categorical values. We use a learning rate initialized to 0.5/b (and then decrease it by a factor of 0.9 after each epoch) and performed 2 × 10^5 iterations. Figure 2b shows the final values for the estimated probabilities, while Figure 2c shows the evolution of the estimation error during the optimization iterations. We can observe that the algorithm performs well and exhibits a typical stochastic approximation convergence.
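A compact version of this non-parametric experiment can be written as below; it is an illustrative sketch that follows the stated setup but with smaller N and K so it runs quickly, and a simple iteration-based learning-rate decay standing in for the per-epoch schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 1000, 100_000                       # scaled-down version of K = 10^4, N = 10^6
u = rng.uniform(size=K)
p_true = u**2 / np.sum(u**2)               # p(k) proportional to u_k^2
data = rng.choice(K, size=N, p=p_true)

f = np.zeros(K)                            # unrestricted parameters f_k
b, n_sampled = 100, 10
lr = 0.5 / b
for it in range(10_000):
    batch = data[rng.integers(0, N, size=b)]
    grad = np.zeros(K)
    for k in batch:
        others = rng.choice(K - 1, size=n_sampled, replace=False)
        others += (others >= k)            # sample m != k by skipping index k
        s = 1.0 / (1.0 + np.exp(f[k] - f[others]))   # sigma(f_m - f_k) for the sampled m
        grad[k] += (K - 1) / n_sampled * s.sum()
        np.add.at(grad, others, -(K - 1) / n_sampled * s)
    f += lr * grad / b
    if (it + 1) % 2000 == 0:
        lr *= 0.9                          # rough stand-in for the per-epoch decay

p_hat = np.exp(f - f.max())
p_hat /= p_hat.sum()
print("L1 error:", np.abs(p_hat - p_true).sum())
```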
Figure 2: (a) shows the optimally estimated probabilities which have been sorted for visualization purposes. (b) shows the corresponding probabilities estimated by stochastic optimization. (c) shows the absolute norm for the vector of differences between exact estimates and stochastic estimates.

# 4.2 Classification

Small scale classification comparisons. Here, we wish to investigate whether the proposed lower bound on the softmax is a good surrogate for exact softmax training in classification. More precisely, we wish to compare the parameter estimates obtained by the one-vs-each bound with the estimates obtained by exact softmax training. To quantify closeness we use the normalized absolute norm

norm = \frac{|w_{softmax} − w_*|}{|w_{softmax}|},   (17)

where w_{softmax} denotes the parameters obtained by exact softmax training and w_* denotes estimates obtained by approximate training. Further, we will also report predictive performance measured by classification error and negative log predictive density (nlpd) averaged across test data,

error = \frac{1}{N_{test}} \sum_{i=1}^{N_{test}} I(y_i ≠ t_i), \qquad nlpd = \frac{1}{N_{test}} \sum_{i=1}^{N_{test}} − \log p(t_i | x_i),   (18)
where t_i denotes the true label of a test point and y_i the predicted one. We trained the linear multiclass model of Section 3 with the following alternative methods: exact softmax training (SOFT), the one-vs-each bound (OVE), the stochastically optimized one-vs-each bound (OVE-SGD) and Bouchard's bound (BOUCHARD). For all approaches, the associated cost function was maximized together with an added regularization penalty term, −(1/2) λ ||w||^2, which ensures that the global maximum of the cost function is achieved for finite w. Since we want to investigate how well we surrogate exact softmax training, we used the same fixed value λ = 1 in all experiments.

We considered three small scale multiclass classification datasets: MNIST[2], 20NEWS[3] and BIBTEX (Katakis et al., 2008); see Table 1 for details. Notice that BIBTEX is originally a multi-label classification dataset (Bhatia et al., 2015), where each example may have more than one label. Here, we maintained only a single label for each data point in order to apply standard multiclass classification. The maintained label was the first label appearing in each data entry in the repository files[4] from which we obtained the data.

Figure 3 displays convergence of the lower bounds (and of the exact softmax cost) for all methods. Recall that the methods SOFT, OVE and BOUCHARD are non-stochastic and therefore their optimization can be carried out by standard gradient descent. Notice that in all three datasets the one-vs-each bound gets much closer to the exact softmax cost compared to Bouchard's bound. Thus, OVE tends to give a tighter bound despite that it does not contain any variational parameters, while BOUCHARD has N extra variational parameters, i.e. as many as the training instances. The application of the OVE-SGD method (the stochastic version of OVE) is based on a doubly stochastic scheme where we subsample minibatches of size 200 and subsample remaining classes of size one.
We can observe that OVE-SGD is able to stochastically approach its maximum value which corresponds to OVE. Table 2 shows the parameter closeness score from Eq. (17) as well as the classiï¬ cation predictive scores. We can observe that OVE and OVE-SGD provide parameters closer to those of SOFT than the parameters provided by BOUCHARD. Also, the predictive scores for OVE and OVE-SGD are similar to SOFT, although they tend to be slightly worse. Interestingly, BOUCHARD gives the best classiï¬ cation error, even better than the exact softmax training, but at the same time it always gives the worst nlpd which suggests sensitivity to overï¬ tting. However, recall that the regularization parameter λ was ï¬ xed to the value one and it was not optimized separately for each method using cross validation. Also notice that BOUCHARD cannot be easily scaled up (with stochastic optimization) to massive datasets since it introduces an extra variational parameter for each training instance. Large scale classiï¬
cation. Here, we consider AMAZONCAT-13K (see footnote 4) which is a large scale classiï¬ cation dataset. This dataset is originally multi-labelled (Bhatia et al., 2015) and here we maintained only a single label, as done for the BIBTEX dataset, in order to apply standard multiclass classiï¬ cation. This dataset is also highly imbalanced since there are about 15 classes having the half of the training instances while they are many classes having very few (or just a single) training instances. # 2http://yann.lecun.com/exdb/mnist 3http://qwone.com/~jason/20Newsgroups/ 4http://research.microsoft.com/en-us/um/people/manik/downloads/XC/XMLRepository.html
Table 1: Summaries of the classification datasets.

Name          | Dimensionality | Classes | Training examples | Test examples
MNIST         | 784            | 10      | 60000             | 10000
20NEWS        | 61188          | 20      | 11269             | 7505
BIBTEX        | 1836           | 148     | 4880              | 2515
AMAZONCAT-13K | 203882         | 2919    | 1186239           | 306759

Table 2: Score measures for the small scale classification datasets.

Dataset | SOFT (error, nlpd) | BOUCHARD (norm, error, nlpd) | OVE (norm, error, nlpd) | OVE-SGD (norm, error, nlpd)
MNIST   | (0.074, 0.271)     | (0.64, 0.073, 0.333)         | (0.50, 0.082, 0.287)    | (0.53, 0.080, 0.278)
20NEWS  | (0.272, 1.263)     | (0.65, 0.249, 1.337)         | (0.05, 0.276, 1.297)    | (0.14, 0.276, 1.312)
BIBTEX  | (0.622, 2.793)     | (0.25, 0.621, 2.955)         | (0.09, 0.636, 2.888)    | (0.10, 0.633, 2.875)
Figure 3: (a) shows the evolution of the lower bound values for MNIST, (b) for 20NEWS and (c) for BIBTEX. For more clear visualization the bounds of the stochastic OVE-SGD have been smoothed using a rolling window of 400 previous values. (d) shows the evolution of the OVE-SGD lower bound (scaled to correspond to a single data point) in the large scale AMAZONCAT-13K dataset. Here, the plotted values have been also smoothed using a rolling window of size 4000 and then thinned by a factor of 5.

Further, notice that in this large dataset the number of parameters we need to estimate for the linear classification model is very large: K × (D + 1) = 2919 × 203883 parameters, where the plus one accounts for the biases. All methods apart from OVE-SGD are practically very slow in this massive dataset, and therefore we consider OVE-SGD which is scalable. We applied OVE-SGD where at each stochastic gradient update we consider a single training instance (i.e. the minibatch size was one) and for that instance we randomly select five remaining classes.
This leads to sparse parameter updates, where the score function parameters of only six classes (the class of the current training instance plus the remaining five ones) are updated at each iteration. We used a very small learning rate having value 10^{-8} and we performed five epochs across the full dataset, that is we performed in total 5 × 1186239 stochastic gradient updates. After each epoch we halve the value of the learning rate before the next epoch starts. By taking into account also the sparsity of the input vectors each iteration is very fast and full training is completed in just 26 minutes in a stand-alone PC. The evolution of the variational lower bound that indicates convergence is shown in Figure 3d.
Finally, the classiï¬ cation error in test data was 53.11% which is signiï¬ cantly better than random guessing or by a method that decides always the most populated class (where in AMAZONCAT-13K the most populated class occupies the 19% of the data so the error of that method is around 79%). # 5 Discussion We have presented the one-vs-each lower bound on softmax probabilities and we have analyzed its theoretical properties. This bound is just the most extreme case of a full family of hierarchi- cally ordered bounds. We have explored the ability of the bound to perform parameter estimation through stochastic optimization in models having large number of categorical symbols, and we have demonstrated this ability to classiï¬
cation problems. There are several directions for future research. Firstly, it is worth investigating the usefulness of the bound in different applications from classiï¬ cation, such as for learning word embeddings in natural 8 language processing and for training recommendation systems. Another interesting direction is to consider the bound not for point estimation, as done in this paper, but for Bayesian estimation using variational inference. # Acknowledgments We thank the reviewers for insightful comments. We would like also to thank Francisco J. R. Ruiz for useful discussions and David Blei for suggesting the name one-vs-each for the proposed method. # A Proof of Proposition 3 Here we re-state and prove Proposition 3. Proposition 3. Assume that K = 2 and we approximate the probabilities p(y = 1) and p(y = 2) from (2) with the corresponding Bouchardâ s bounds given by (1+ef1 â α)(1+ef2 â α) and (1+ef1 â α)(1+ef2 â α) . These bounds are used to approximate the maximum likelihood solution for (f1, f2) by maximizing the lower bound F (f1, f2, α) = log eN1(f1â α)+N2(f2â α) [(1 + ef1â α)(1 + ef2â α)]N1+N2 , (19) obtained by replacing p(y = 1) and p(y = 2) in the exact log likelihood with Bouchardâ s bounds. Then, the global maximizer of F (f1, f2, α) is such that α = f1 + f2 2 , fk = 2 log Nk + c, k = 1, 2. (20)
Proof. The lower bound is written as N1(f1 â α) + N2(f2 â α) â (N1 + N2) log(1 + ef1â α) + log(1 + ef2â α) . . We will ï¬ rst maximize this quantity wrt α. For that is sufï¬ ces to minimize the upper bound on the following log-sum-exp function α + log(1 + ef1â α) + log(1 + ef2â α), which is a convex function of α. By taking the derivative wrt α and setting to zero we obtain the stationary condition ef1â α 1 + ef1â α + ef2â α 1 + ef2â α = 1. Clearly, the value of α that satisï¬ es the condition is α = f1+f2 into the initial bound we have 2 . Now if we substitute this value back N1 f1 â f2 2 + N2 f2 â f1 2 â (N1 + N2) log(1 + e f1 â f2 2 ) + log(1 + e f2â f1 2 ) # h # i which is concave wrt f1 and f2. Then, by taking derivatives wrt f1 and f2 we obtain the conditions N1 â N2 2 = (N1 + N2) 2 " e f1 â f2 2 1 + e f1â f2 2 â e f2â f1 2 1 + e f2â f1 2 # N2 â N1 2 = (N1 + N2) 2 " e f2 â f1 2 1 + e f2â f1 2 â e f1â f2 2 1 + e f1â f2 2 # Now we can observe that these conditions are satisï¬ ed by f1 = 2 log N1 + c and f2 = 2 log N2 + c which gives the global maximizer since F (f1, f2, α) is concave.
9 # References Bengio, Y. and Sénécal, J.-S. (2003). Quick training of probabilistic neural nets by importance sampling. In Proceedings of the conference on Artiï¬ cial Intelligence and Statistics (AISTATS). Bhatia, K., Jain, H., Kar, P., Varma, M., and Jain, P. (2015). Sparse local embeddings for extreme multi-label classiï¬ cation. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 730â 738. Curran Associates, Inc. Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA. Bohning, D. (1992).
Multinomial logistic regression algorithm. Annals of the Inst. of Statistical Math, 44:197â 200. Bouchard, G. (2007). Efï¬ cient bounds for the softmax function and applications to approximate inference in hybrid models. Technical report. Bradley, R. A. and Terry, M. E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324â 345. Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R., and Makhoul, J. (2014). Fast and robust neural net- In Proceedings of the 52nd Annual Meeting of the work joint models for statistical machine translation. Association for Computational Linguistics (Volume 1: Long Papers), pages 1370â
1380, Baltimore, Mary- land. Association for Computational Linguistics. Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. Book in preparation for MIT Press. In Dasgupta, S. and Mcallester, D., editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 289â 297. JMLR Workshop and Conference Proceedings. Huang, T.-K., Weng, R. C., and Lin, C.-J. (2006). Generalized Bradley-Terry models and multi-class probability estimates. J. Mach. Learn.
Res., 7:85â 115. Ji, S., Vishwanathan, S. V. N., Satish, N., Anderson, M. J., and Dubey, P. (2015). Blackout: Speeding up recurrent neural network language models with very large vocabularies. Katakis, I., Tsoumakas, G., and Vlahavas, I. (2008). Multilabel text classiï¬ cation for automated tag suggestion. In In: Proceedings of the ECML/PKDD-08 Workshop on Discovery Challenge. Khan, M. E., Mohamed, S., Marlin, B. M., and Murphy, K. P. (2012).
A stick-breaking likelihood for categorical data analysis with latent Gaussian models. In Proceedings of the Fifteenth International Conference on Artiï¬ cial Intelligence and Statistics, AISTATS 2012, La Palma, Canary Islands, April 21-23, 2012, pages 610â 618. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems 26, pages 3111â 3119. Curran Associates, Inc. Mnih, A. and Teh, Y. W. (2012).
A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751â 1758. Morin, F. and Bengio, Y. (2005). Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artiï¬ cial Intelligence and Statistics, pages 246â 252. Citeseer. Paquet, U., Koenigstein, N., and Winther, O. (2012). Scalable Bayesian modelling of paired symbols. CoRR, abs/1409.2824. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532â 1543, Doha, Qatar. Association for Computational Linguistics.
Vijayanarasimhan, S., Shlens, J., Monga, R., and Yagnik, J. (2014). Deep networks with large output spaces. CoRR, abs/1412.7479. 10
arXiv:1609.07061v1 [cs.NE] 22 Sep 2016

# Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

Itay Hubara*
Department of Electrical Engineering
Technion - Israel Institute of Technology
Haifa, Israel
[email protected]

Matthieu Courbariaux*
Department of Computer Science and Department of Statistics
Université de Montréal
Montréal, Canada
[email protected]

Daniel Soudry
Department of Statistics
Columbia University
New York, USA
[email protected]

Ran El-Yaniv
Department of Computer Science
Technion - Israel Institute of Technology
Haifa, Israel
[email protected]

Yoshua Bengio
Department of Computer Science and Department of Statistics
Université de Montréal
Montréal, Canada
[email protected]
*Indicates ï¬ rst authors. Editor: # Abstract We introduce a method to train Quantized Neural Networks (QNNs) â neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train- time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise opera- tion. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suï¬ ering any loss in classiï¬ cation accuracy. The QNN code is available online. 1 Hubara, Courbariaux, Soudry, El-Yaniv and Bengio Keywords: Deep Learning, Neural Networks Compression, Energy Eï¬ cient Neural Net- works, Computer vision, Language Models.
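For a concrete picture of the kind of quantization the abstract describes, the sketch below shows deterministic sign binarization with a straight-through gradient estimator, the basic building block this line of work relies on. It is an illustrative simplification with hypothetical shapes, not the full QNN training procedure.

```python
import numpy as np

def binarize(x):
    """Deterministic binarization: map real values to {-1, +1} with the sign function."""
    return np.where(x >= 0.0, 1.0, -1.0)

# One binarized linear layer, forward and backward (illustration only).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))    # real-valued "shadow" weights kept for the updates
a = rng.normal(size=8)                    # real-valued pre-binarization activations

Wb, ab = binarize(W), binarize(a)         # quantities actually used in the forward pass
y = Wb @ ab                               # multiply-accumulates reduce to additions/subtractions

g_y = rng.normal(size=4)                  # some upstream gradient
g_Wb = np.outer(g_y, ab)                  # gradient w.r.t. the binarized weights
g_a = (Wb.T @ g_y) * (np.abs(a) <= 1.0)   # straight-through estimator through the activation sign

W = np.clip(W - 0.01 * g_Wb, -1.0, 1.0)   # update the real weights, then clip them to [-1, 1]
```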
# 1. Introduction Deep Neural Networks (DNNs) have substantially pushed Artiï¬ cial Intelligence (AI) lim- its in a wide range of tasks, including but not limited to object recognition from im- ages (Krizhevsky et al., 2012; Szegedy et al., 2014), speech recognition (Hinton et al., 2012; Sainath et al., 2013), statistical machine translation (Devlin et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), Atari and Go games (Mnih et al., 2015; Silver et al., 2016), and even computer generation of abstract art (Mordvintsev et al., 2015). Training or even just using neural network (NN) algorithms on conventional general- purpose digital hardware (Von Neumann architecture) has been found highly ineï¬ cient due to the massive amount of multiply-accumulate operations (MACs) required to compute the weighted sums of the neuronsâ inputs. Today, DNNs are almost exclusively trained on one or many very fast and power-hungry Graphic Processing Units (GPUs) (Coates et al., 2013). As a result, it is often a challenge to run DNNs on target low-power devices, and substantial research eï¬ orts are invested in speeding up DNNs at run-time on both general- purpose (Vanhoucke et al., 2011; Gong et al., 2014; Romero et al., 2014; Han et al., 2015b) and specialized computer hardware (Farabet et al., 2011a,b; Pham et al., 2012; Chen et al., 2014a,b; Esser et al., 2015).