Dataset schema:
- id: string (length 12-15)
- title: string (length 8-162)
- content: string (length 1-17.6k)
- prechunk_id: string (length 0-15)
- postchunk_id: string (length 0-15)
- arxiv_id: string (length 10)
- references: sequence (length 1)
1603.09025#25
Recurrent Batch Normalization
A method for stochastic optimization. arXiv:1412.6980, 2014. D. Krueger and R. Memisevic. Regularizing RNNs by stabilizing activations. ICLR, 2016. David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, and Aaron Courville.
1603.09025#24
1603.09025#26
1603.09025
[ "1609.01704" ]
1603.09025#26
Recurrent Batch Normalization
Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv:1606.01305, 2016. C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio. Batch normalized recurrent neural networks. ICASSP, 2016. Quoc V. Le, N. Jaitly, and G. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv:1504.00941, 2015. Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv:1604.03640, 2016.
1603.09025#25
1603.09025#27
1603.09025
[ "1609.01704" ]
1603.09025#27
Recurrent Batch Normalization
M. Mahoney. Large text compression benchmark. 2009. M. P. Marcus, M. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Comput. Linguist., 1993. J. Martens and I. Sutskever. Learning recurrent neural networks with Hessian-free optimization. In ICML, 2011. T. Mikolov, I. Sutskever, A. Deoras, H. Le, S. Kombrink, and J. Cernocky.
1603.09025#26
1603.09025#28
1603.09025
[ "1609.01704" ]
1603.09025#28
Recurrent Batch Normalization
Subword language modeling with neural networks. Preprint, 2012. Yann Ollivier. Persistent contextual neural networks for learning symbolic data sequences. CoRR, abs/1306.0514, 2013. Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models: when are they needed? arXiv:1301.5650, 2013. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio.
1603.09025#27
1603.09025#29
1603.09025
[ "1609.01704" ]
1603.09025#29
Recurrent Batch Normalization
On the difficulty of training recurrent neural networks. arXiv:1211.5063, 2012. H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 2000. The Theano Development Team et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. T. Tieleman and G. Hinton. Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012. Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio.
1603.09025#28
1603.09025#30
1603.09025
[ "1609.01704" ]
1603.09025#30
Recurrent Batch Normalization
Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619. K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv:1502.03044, 2015. L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville.
1603.09025#29
1603.09025#31
1603.09025
[ "1609.01704" ]
1603.09025#31
Recurrent Batch Normalization
Describing videos by exploiting temporal structure. In ICCV, 2015. S. Zhang, Y. Wu, T. Che, Z. Lin, R. Memisevic, R. Salakhutdinov, and Y. Bengio. Architectural complexity measures of recurrent neural networks. arXiv:1602.08210, 2016.
# A CONVERGENCE OF POPULATION STATISTICS
[Figure: three panels plotting the mean of the recurrent term, the mean of the cell state, and the variance of the recurrent term against time steps.]
Figure 5: Convergence of population statistics to stationary distributions on the Penn Treebank task. The horizontal axis denotes RNN time. Each curve corresponds to a single hidden unit. Only a random subset of units is shown. See Section 3 for discussion.
# B SENSITIVITY TO INITIALIZATION OF γ
In Section 4 we investigated the effect of initial γ on gradient flow.
1603.09025#30
1603.09025#32
1603.09025
[ "1609.01704" ]
1603.09025#32
Recurrent Batch Normalization
To show the practical implications of this, we performed several experiments on the pMNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure 6. The pMNIST training curves confirm that higher initial values of γ are detrimental to the optimization of the model. For the Penn Treebank task however, the effect is gone. We believe this is explained by the difference in the nature of the two tasks. For pMNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence. In the Penn Treebank task on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on pMNIST, which is dominated by long-term dependencies (Arjovsky et al., 2015). # C TEACHING MACHINES TO READ AND COMPREHEND: TASK SETUP We evaluate the models on the question answering task using the CNN corpus (Hermann et al., 2015), with placeholders for the named entities. We follow a similar preprocessing pipeline as Hermann et al. (2015). During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words. We deviate from Hermann et al. (2015) in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision this heuristic sometimes strips the
1603.09025#31
1603.09025#33
1603.09025
[ "1609.01704" ]
1603.09025#33
Recurrent Batch Normalization
[Figure: training and validation curves on permuted MNIST and Penn Treebank for initial γ values 0.10, 0.30, 0.50, 0.70 and 1.00, plotted against training steps.]
Figure 6: Training curves on pMNIST and Penn Treebank for various initializations of γ.
answers from the passage, putting an upper bound of 57% on the validation accuracy that can be achieved. For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate $8 \times 10^{-5}$.
1603.09025#32
1603.09025#34
1603.09025
[ "1609.01704" ]
1603.09025#34
Recurrent Batch Normalization
For BN-e* and BN-e**, we use the same hyperparameters except that we reduce the learning rate to $8 \times 10^{-4}$ and the minibatch size to 40. # D HYPERPARAMETER SEARCHES Table 5 reports hyperparameter values that were tried in the experiments.
(a) MNIST and pMNIST: Learning rate 1e-2, 1e-3, 1e-4; RMSProp momentum 0.5, 0.9; Hidden state size 100, 200, 400; Initial γ 1e-1, 3e-1, 5e-1, 7e-1, 1.0.
(b) Penn Treebank: Learning rate 1e-1, 1e-2, 2e-2, 1e-3; Hidden state size 800, 1000, 1200, 1500, 2000; Batch size 32, 64, 100, 128; Initial γ 1e-1, 3e-1, 5e-1, 7e-1, 1.0.
(c) Text8: Learning rate 1e-1, 1e-2, 1e-3; Hidden state size 500, 1000, 2000, 4000.
(d) Attentive Reader: Learning rate 8e-3, 8e-4, 8e-5, 8e-6; Hidden state size 60, 120, 240, 280.
Table 5: Hyperparameter values that have been explored in the experiments. For MNIST and pMNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity
1603.09025#33
1603.09025#35
1603.09025
[ "1609.01704" ]
1603.09025#35
Recurrent Batch Normalization
analysis on the batch size and initial γ. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size. The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance.
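To make the role of the initial γ searched over in Table 5 concrete, the following is a minimal numpy sketch of batch normalization with a small γ initialization. It is an illustration only, not the paper's Theano/Blocks implementation, and it omits the separate per-time-step population statistics; shapes and values are assumptions.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a minibatch of activations per feature, then rescale by gamma
    and shift by beta. x has shape (batch, features)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# A small initial gamma (e.g. 0.1) keeps the normalized pre-activations in the
# unsaturated region of tanh/sigmoid, which is the gradient-flow argument the
# sensitivity analysis above refers to.
rng = np.random.default_rng(0)
h = rng.normal(size=(64, 100))     # hypothetical recurrent pre-activations
gamma = np.full(100, 0.1)          # initial gamma = 0.1
beta = np.zeros(100)
out = batch_norm(h, gamma, beta)
print(out.std(axis=0).mean())      # roughly 0.1, as intended by the small gamma
```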
1603.09025#34
1603.09025
[ "1609.01704" ]
1603.08983#0
Adaptive Computation Time for Recurrent Neural Networks
# Adaptive Computation Time for Recurrent Neural Networks Alex Graves Google DeepMind [email protected] # Abstract This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients.
1603.08983#1
1603.08983
[ "1502.04623" ]
1603.08983#1
Adaptive Computation Time for Recurrent Neural Networks
Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying binary logic operations, adding integers, and sorting real numbers. Overall, performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. We also present character-level language modelling results on the Hutter prize Wikipedia dataset. In this case ACT does not yield large gains in performance; however it does provide intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could provide a generic method for inferring segment boundaries in sequence data. # Introduction
1603.08983#0
1603.08983#2
1603.08983
[ "1502.04623" ]
1603.08983#2
Adaptive Computation Time for Recurrent Neural Networks
The amount of time required to pose a problem and the amount of thought required to solve it are notoriously unrelated. Pierre de Fermat was able to write in a margin the conjecture (if not the proof) of a theorem that took three and a half centuries and reams of mathematics to solve [35]. More mundanely, we expect the effort required to find a satisfactory route between two cities, or the number of queries needed to check a particular fact, to vary greatly, and unpredictably, from case to case.
1603.08983#1
1603.08983#3
1603.08983
[ "1502.04623" ]
1603.08983#3
Adaptive Computation Time for Recurrent Neural Networks
Most machine learning algorithms, however, are unable to dynamically adapt the amount of computation they employ to the complexity of the task they perform. For artificial neural networks, where the neurons are typically arranged in densely connected layers, an obvious measure of computation time is the number of layer-to-layer transformations the network performs. In feedforward networks this is controlled by the network depth, or number of layers stacked on top of each other. For recurrent networks, the number of transformations also depends on the length of the input sequence, which can be padded or otherwise extended to allow for extra computation. The evidence that increased depth leads to more performant networks is by now inarguable [5, 4, 19, 9], and recent results show that increased sequence length can be similarly beneficial [31, 33, 25]. However it remains necessary for the experimenter to decide a priori on the amount of computation allocated to a particular input vector or sequence. One solution is to simply make every network very deep and design its architecture in such a way as to mitigate the vanishing gradient problem [13] associated with long chains of iteration [29, 17]. However in the interests of both computational efficiency and ease of learning it seems preferable to dynamically vary the number of steps for which the network 'ponders' each input before emitting an output. In this case the effective depth of the network at each step along the sequence becomes a dynamic function of the inputs received so far. The approach pursued here is to augment the network output with a sigmoidal halting unit whose activation determines the probability that computation should continue. The resulting halting distribution is used to define
1603.08983#2
1603.08983#4
1603.08983
[ "1502.04623" ]
1603.08983#4
Adaptive Computation Time for Recurrent Neural Networks
a mean-field vector for both the network output and the internal network state propagated along the sequence. A stochastic alternative would be to halt or continue according to binary samples drawn from the halting distribution, a technique that has recently been applied to scene understanding with recurrent networks [7]. However the mean-field approach has the advantage of using a smooth function of the outputs and states, with no need for stochastic gradient estimates. We expect this to be particularly beneficial when long sequences of halting decisions must be made, since each decision is likely to affect all subsequent ones, and sampling noise will rapidly accumulate (as observed for policy gradient methods [36]). A related architecture known as Self-Delimiting Neural Networks [26, 30] employs a halting neuron to end a particular update within a large, partially activated network; in this case however a simple activation threshold is used to make the decision, and no gradient with respect to halting time is propagated. More broadly, learning when to halt can be seen as a form of conditional computing, where parts of the network are selectively enabled and disabled according to a learned policy [3, 6]. We would like the network to be parsimonious in its use of computation, ideally limiting itself to the minimum number of steps necessary to solve the problem. Finding this limit in its most general form would be equivalent to determining the Kolmogorov complexity of the data (and hence solving the halting problem) [21]. We therefore take the more pragmatic approach of adding a time cost to the loss function to encourage faster solutions. The network then has to learn to trade off accuracy against speed, just as a person must when making decisions under time pressure. One weakness is that the numerical weight assigned to the time cost has to be hand-chosen, and the behaviour of the network is quite sensitive to its value. The rest of the paper is structured as follows: the Adaptive Computation Time algorithm is presented in Section 2, experimental results on four synthetic problems and one real-world dataset are reported in Section 3, and concluding remarks are given in Section 4. # 2 Adaptive Computation Time Consider a recurrent neural network R composed of a matrix of input weights Wx, a parametric state transition model S, a set of output weights Wy and an output bias by.
1603.08983#3
1603.08983#5
1603.08983
[ "1502.04623" ]
1603.08983#5
Adaptive Computation Time for Recurrent Neural Networks
When applied to an input sequence x = (x1, . . . , xT ), R computes the state sequence s = (s1, . . . , sT ) and the output sequence y = (y1, . . . , yT ) by iterating the following equations from t = 1 to T:

$$s_t = \mathcal{S}(s_{t-1}, W_x x_t) \quad (1)$$

$$y_t = W_y s_t + b_y \quad (2)$$

The state is a fixed-size vector of real numbers containing the complete dynamic information of the network. For a standard recurrent network this is simply the vector of hidden unit activations. For a Long Short-Term Memory network (LSTM) [14], the state also contains the activations of the memory cells. For a memory augmented network such as a Neural Turing Machine (NTM) [10], the state contains both the complete state of the controller network and the complete state of the memory. In general some portions of the state (for example the NTM memory contents) will not be visible to the output units; in this case we consider the corresponding columns of Wy to be fixed to 0. Adaptive Computation Time (ACT) modifies the conventional setup by allowing R to perform a variable number of state transitions and compute a variable number of outputs at each input step. Let N(t) be the total number of updates performed at step t.
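Before moving to the ACT modification, the plain update of Equations (1)-(2) can be written as a short sketch; the tanh transition is an illustrative assumption, since the paper leaves S abstract so that LSTM or NTM transitions are also covered.

```python
import numpy as np

def rnn_step(s_prev, x_t, Wx, Ws, Wy, by):
    """One step of the recurrent network R of Equations (1)-(2)."""
    s_t = np.tanh(Ws @ s_prev + Wx @ x_t)  # s_t = S(s_{t-1}, W_x x_t), with S = tanh here
    y_t = Wy @ s_t + by                    # y_t = W_y s_t + b_y
    return s_t, y_t

# Hypothetical sizes: input 8, state 16, output 4.
rng = np.random.default_rng(0)
Wx, Ws = rng.normal(size=(16, 8)), rng.normal(size=(16, 16))
Wy, by = rng.normal(size=(4, 16)), np.zeros(4)
s, y = rnn_step(np.zeros(16), rng.normal(size=8), Wx, Ws, Wy, by)
```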
1603.08983#4
1603.08983#6
1603.08983
[ "1502.04623" ]
1603.08983#6
Adaptive Computation Time for Recurrent Neural Networks
Then define the intermediate state sequence $(s_t^1, \dots, s_t^{N(t)})$ and intermediate output sequence $(y_t^1, \dots, y_t^{N(t)})$ at input step t as follows:

$$s_t^n = \begin{cases} \mathcal{S}(s_{t-1}, x_t^1) & \text{if } n = 1 \\ \mathcal{S}(s_t^{n-1}, x_t^n) & \text{otherwise} \end{cases} \quad (3)$$

$$y_t^n = W_y s_t^n + b_y \quad (4)$$

where $x_t^n = x_t + \delta_{n,1}$ is the input at time t augmented with a binary flag that indicates whether the input step has just been incremented, allowing the network to distinguish between repeated inputs and repeated computations for the same input. Note that the same state function is used for all state transitions (intermediate or otherwise), and similarly the output weights and bias are shared for all outputs. It would also be possible to use different state and output parameters for each intermediate step; however doing so would cloud the distinction between increasing the number of parameters and increasing the number of computational steps. We leave this for future work. To determine how many updates R performs at each input step an extra sigmoidal halting unit h is added to the network output, with associated weight matrix Wh and bias bh:
1603.08983#5
1603.08983#7
1603.08983
[ "1502.04623" ]
1603.08983#7
Adaptive Computation Time for Recurrent Neural Networks
$$h_t^n = \sigma(W_h s_t^n + b_h) \quad (5)$$

As with the output weights, some columns of $W_h$ may be fixed to zero to give selective access to the network state. The activation of the halting unit is then used to determine the halting probability $p_t^n$ of the intermediate steps:

$$p_t^n = \begin{cases} R(t) & \text{if } n = N(t) \\ h_t^n & \text{otherwise} \end{cases} \quad (6)$$

where

$$N(t) = \min\left\{n' : \sum_{n=1}^{n'} h_t^n \ge 1 - \epsilon\right\} \quad (7)$$

the remainder R(t) is defined as follows

$$R(t) = 1 - \sum_{n=1}^{N(t)-1} h_t^n \quad (8)$$

and ε is a small constant (0.01 for the experiments in this paper), whose purpose is to allow computation to halt after a single update if $h_t^1 \ge 1 - \epsilon$, as otherwise a minimum of two updates would be required for every input step. It follows directly from the definition that $\sum_{n=1}^{N(t)} p_t^n = 1$ and $0 \le p_t^n \le 1$ for all n, so this is a valid probability distribution. A similar distribution was recently used to define differentiable push and pop operations for neural stacks and queues [11]. One option would be to sample a halting step $\hat{n}$ from this distribution and set $s_t = s_t^{\hat{n}}$, $y_t = y_t^{\hat{n}}$. However we will eschew sampling techniques and the associated problems of noisy gradients, instead using $p_t^n$ to determine mean-field updates for the states and outputs:

$$s_t = \sum_{n=1}^{N(t)} p_t^n s_t^n \qquad y_t = \sum_{n=1}^{N(t)} p_t^n y_t^n \quad (9)$$

The implicit assumption is that the states and outputs are approximately linear, in the sense that a linear interpolation between a pair of state or output vectors will also interpolate between the
1603.08983#6
1603.08983#8
1603.08983
[ "1502.04623" ]
1603.08983#8
Adaptive Computation Time for Recurrent Neural Networks
properties the vectors embody.
Figure 1: RNN Computation Graph. An RNN unrolled over two input steps (separated by vertical dotted lines). The input and output weights Wx, Wy, and the state transition operator S are shared over all steps.
Figure 2: RNN Computation Graph with Adaptive Computation Time. The graph is equivalent to Figure 1, only with each state and output computation expanded to a variable number of intermediate updates. Arrows touching boxes denote operations applied to all units in the box, while arrows leaving boxes denote summations over all units in the box.
1603.08983#7
1603.08983#9
1603.08983
[ "1502.04623" ]
1603.08983#9
Adaptive Computation Time for Recurrent Neural Networks
There are several reasons to believe that such an assumption is reasonable. Firstly, it has been observed that the high-dimensional representations present in neural networks naturally tend to behave in a linear way [32, 20], even remaining consistent under arithmetic operations such as addition and subtraction [22]. Secondly, neural networks have been successfully trained under a wide range of adversarial regularisation constraints, including sparse internal states [23], stochastically masked units [28] and randomly perturbed weights [1]. This leads us to believe that the relatively benign constraint of approximately linear representations will not be too damaging. Thirdly, as training converges, the tendency for both mean-field and stochastic latent variables is to concentrate all the probability mass on a single value. In this case that yields a standard RNN with each input duplicated a variable, but deterministic, number of times, rendering the linearity assumption irrelevant. A diagram of the unrolled computation graph of a standard RNN is illustrated in Figure 1, while Figure 2 provides the equivalent diagram for an RNN trained with ACT.
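To make the mechanics of Equations (3)-(9) concrete, here is a minimal numpy sketch of one ACT input step under the mean-field scheme. It is an illustrative reading of the equations, not the author's implementation: the tanh transition, the concatenated first-step flag, and all tensor shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def act_step(s_prev, x_t, Wx, Ws, Wy, by, Wh, bh, eps=0.01, max_steps=100):
    """One ACT input step: run intermediate updates until the accumulated halting
    activation reaches 1 - eps (Eq. 7), capped at the hard limit M = max_steps
    (Eq. 13), then return the mean-field state/output of Eq. (9)."""
    states, outputs, halts = [], [], []
    s = s_prev
    for n in range(1, max_steps + 1):
        flag = 1.0 if n == 1 else 0.0                 # delta_{n,1}; concatenated here
        x_aug = np.concatenate([x_t, [flag]])
        s = np.tanh(Ws @ s + Wx @ x_aug)              # intermediate state s_t^n (Eq. 3)
        y = Wy @ s + by                               # intermediate output y_t^n (Eq. 4)
        h = float(sigmoid(Wh @ s + bh))               # halting unit h_t^n (Eq. 5)
        states.append(s); outputs.append(y); halts.append(h)
        if sum(halts) >= 1.0 - eps:                   # halting condition defining N(t) (Eq. 7)
            break
    N = len(halts)
    R = 1.0 - sum(halts[:-1])                         # remainder R(t) (Eq. 8)
    p = np.array(halts[:-1] + [R])                    # halting distribution p_t^n (Eq. 6)
    s_t = sum(pn * sn for pn, sn in zip(p, states))   # mean-field state (Eq. 9)
    y_t = sum(pn * yn for pn, yn in zip(p, outputs))  # mean-field output (Eq. 9)
    return s_t, y_t, N + R                            # N + R is the ponder value of Section 2.1

# Hypothetical sizes: input 8 (+1 flag), state 16, output 4.
rng = np.random.default_rng(0)
Wx, Ws = 0.1 * rng.normal(size=(16, 9)), 0.1 * rng.normal(size=(16, 16))
Wy, by = 0.1 * rng.normal(size=(4, 16)), np.zeros(4)
Wh, bh = 0.1 * rng.normal(size=16), 1.0               # a positive b_h biases early halting
s_t, y_t, ponder = act_step(np.zeros(16), rng.normal(size=8), Wx, Ws, Wy, by, Wh, bh)
```

The returned ponder value N + R is what the time penalty introduced in the next section sums over.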
1603.08983#8
1603.08983#10
1603.08983
[ "1502.04623" ]
1603.08983#10
Adaptive Computation Time for Recurrent Neural Networks
# 2.1 Limiting Computation Time
If no constraints are placed on the number of updates R can take at each step it will naturally tend to 'ponder' each input for as long as possible (so as to avoid making predictions and incurring errors). We therefore require a way of limiting the amount of computation the network performs. Given a length T input sequence x, define the ponder sequence $(\rho_1, \dots, \rho_T)$ of R as

$$\rho_t = N(t) + R(t) \quad (10)$$

and the ponder cost $\mathcal{P}(x)$ as

$$\mathcal{P}(x) = \sum_{t=1}^{T} \rho_t \quad (11)$$

Since $R(t) \in (0, 1)$, $\mathcal{P}(x)$ is an upper bound on the (non-differentiable) property we ultimately want to reduce, namely the total computation $\sum_{t=1}^{T} N(t)$ during the sequence. We can encourage the network to minimise $\mathcal{P}(x)$ by modifying the sequence loss function $\mathcal{L}(x, y)$ used for training:
1603.08983#9
1603.08983#11
1603.08983
[ "1502.04623" ]
1603.08983#11
Adaptive Computation Time for Recurrent Neural Networks
$$\hat{\mathcal{L}}(x, y) = \mathcal{L}(x, y) + \tau \mathcal{P}(x) \quad (12)$$

where τ is a time penalty parameter that weights the relative cost of computation versus error. As we will see in the experiments section the behaviour of the network is quite sensitive to the value of τ, and it is not obvious how to choose a good value. If computation time and prediction error can be meaningfully equated (for example if the relative financial cost of both were known) a more principled technique for selecting τ should be possible.
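In code, the penalised objective of Equations (10)-(12) is just the task loss plus τ times the summed ponder values; a tiny sketch using the act_step function from the earlier example (the function names and framework-free style are assumptions, not the paper's implementation):

```python
def penalized_loss(task_loss, ponder_values, tau):
    """Equation (12): L_hat = L + tau * P(x), with P(x) = sum_t rho_t (Eq. 11).

    ponder_values: per-step rho_t = N(t) + R(t), as returned by act_step.
    tau: the time-penalty hyperparameter weighting computation against error.
    """
    ponder_cost = sum(ponder_values)   # P(x), Equation (11)
    return task_loss + tau * ponder_cost

# Example: a sequence whose three input steps pondered for 2.3, 1.1 and 4.7 updates.
loss = penalized_loss(task_loss=0.42, ponder_values=[2.3, 1.1, 4.7], tau=1e-3)
```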
1603.08983#10
1603.08983#12
1603.08983
[ "1502.04623" ]
1603.08983#12
Adaptive Computation Time for Recurrent Neural Networks
To prevent very long sequences at the beginning of training (while the network is learning how to use the halting unit) the bias term bh can be initialised to a positive value. In addition, a hard limit M on the maximum allowed value of N(t) can be imposed to avoid excessive space and time costs. In this case Equation (7) is modified to

$$N(t) = \min\left\{M,\ \min\left\{n' : \sum_{n=1}^{n'} h_t^n \ge 1 - \epsilon\right\}\right\} \quad (13)$$

# 2.2 Error Gradients
The ponder costs $\rho_t$ are discontinuous with respect to the halting probabilities at the points where N(t) increments or decrements (that is, when the summed probability mass up to some n either decreases below or increases above $1 - \epsilon$).
1603.08983#11
1603.08983#13
1603.08983
[ "1502.04623" ]
1603.08983#13
Adaptive Computation Time for Recurrent Neural Networks
However they are continuous away from those points, as N(t) remains constant and R(t) is a linear function of the probabilities. In practice we simply ignore the discontinuities by treating N(t) as constant and minimising R(t) everywhere. Given this approximation, the gradient of the ponder cost with respect to the halting activations is straightforward:

$$\frac{\partial \mathcal{P}(x)}{\partial h_t^n} = \begin{cases} 0 & \text{if } n = N(t) \\ -1 & \text{otherwise} \end{cases} \quad (14)$$

For a stochastic ACT network, a more natural halting distribution than the one described in Equations (6) to (8) would be to simply treat $h_t^n$ as the probability of halting at step n, in which case $p_t^n = h_t^n \prod_{n'=1}^{n-1} (1 - h_t^{n'})$. One could then set $\rho_t = \sum_n n\, p_t^n$, i.e. the expected ponder time under the stochastic distribution. However experiments show that networks trained to minimise expected rather than total halting time learn to
1603.08983#12
1603.08983#14
1603.08983
[ "1502.04623" ]
1603.08983#14
Adaptive Computation Time for Recurrent Neural Networks
'cheat' in the following ingenious way: they set $h_t^1$ to a value just below the halting threshold, then keep $h_t^n = 0$ until some N(t) when they set $h_t^{N(t)}$ high enough to ensure they halt. In this case $p_t^{N(t)} < p_t^1$, so the states and outputs at n = N(t) have much lower weight in the mean field updates (Equation (9)) than those at n = 1; however by making the magnitudes of the states and output vectors much larger at N(t) than n = 1 the network can still ensure that the update is dominated by the final vectors, despite having paid a low ponder penalty.
1603.08983#13
1603.08983#15
1603.08983
[ "1502.04623" ]
1603.08983#15
Adaptive Computation Time for Recurrent Neural Networks
and hence

$$\frac{\partial \hat{\mathcal{L}}(x, y)}{\partial h_t^n} = \frac{\partial \mathcal{L}(x, y)}{\partial h_t^n} - \begin{cases} 0 & \text{if } n = N(t) \\ \tau & \text{otherwise} \end{cases} \quad (15)$$

The halting activations only influence $\mathcal{L}$ via their effect on the halting probabilities, therefore

$$\frac{\partial \mathcal{L}(x, y)}{\partial h_t^n} = \sum_{n'=1}^{N(t)} \frac{\partial \mathcal{L}(x, y)}{\partial p_t^{n'}} \frac{\partial p_t^{n'}}{\partial h_t^n} \quad (16)$$

Furthermore, since the halting probabilities only influence $\mathcal{L}$ via their effect on the states and outputs, it follows from Equation (9) that

$$\frac{\partial \mathcal{L}(x, y)}{\partial p_t^n} = \frac{\partial \mathcal{L}(x, y)}{\partial y_t} y_t^n + \frac{\partial \mathcal{L}(x, y)}{\partial s_t} s_t^n \quad (17)$$

while, from Equations (6) and (8),

$$\frac{\partial p_t^{n'}}{\partial h_t^n} = \begin{cases} \delta_{n,n'} & \text{if } n' < N(t) \text{ and } n < N(t) \\ -1 & \text{if } n' = N(t) \text{ and } n < N(t) \\ 0 & \text{if } n = N(t) \end{cases} \quad (18)$$

Combining Equations (15), (17) and (18) gives, for n < N(t),

$$\frac{\partial \hat{\mathcal{L}}(x, y)}{\partial h_t^n} = \frac{\partial \mathcal{L}(x, y)}{\partial y_t}\left(y_t^n - y_t^{N(t)}\right) + \frac{\partial \mathcal{L}(x, y)}{\partial s_t}\left(s_t^n - s_t^{N(t)}\right) - \tau \quad (19)$$

while for n = N(t)

$$\frac{\partial \hat{\mathcal{L}}(x, y)}{\partial h_t^{N(t)}} = 0 \quad (20)$$

Thereafter the network can be differentiated as usual (e.g. with backpropagation through time [36]) and trained with gradient descent.
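For readability, the substitution that produces Equation (19) can be written out explicitly; this is only an expansion of the equations above, added here for convenience:

$$\begin{aligned}
\frac{\partial \mathcal{L}}{\partial h_t^n}
  &= \sum_{n'=1}^{N(t)} \frac{\partial \mathcal{L}}{\partial p_t^{n'}} \frac{\partial p_t^{n'}}{\partial h_t^n}
   = \frac{\partial \mathcal{L}}{\partial p_t^{n}} - \frac{\partial \mathcal{L}}{\partial p_t^{N(t)}}
   && \text{for } n < N(t), \text{ by (16) and (18)} \\
  &= \frac{\partial \mathcal{L}}{\partial y_t}\left(y_t^n - y_t^{N(t)}\right)
   + \frac{\partial \mathcal{L}}{\partial s_t}\left(s_t^n - s_t^{N(t)}\right)
   && \text{by (17)},
\end{aligned}$$

and subtracting the τ term of Equation (15) recovers Equation (19).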
1603.08983#14
1603.08983#16
1603.08983
[ "1502.04623" ]
1603.08983#16
Adaptive Computation Time for Recurrent Neural Networks
# 3 Experiments We tested recurrent neural networks (RNNs) with and without ACT on four synthetic tasks and one real-world language processing task. LSTM was used as the network architecture for all experiments except one, where a simple RNN was used. However we stress that ACT is equally applicable to any recurrent architecture. All the tasks were supervised learning problems with discrete targets and cross-entropy loss. The data for the synthetic tasks was generated online and cross-validation was therefore not needed. Similarly, the character prediction dataset was sufficiently large that the network did not overfit.
1603.08983#15
1603.08983#17
1603.08983
[ "1502.04623" ]
1603.08983#17
Adaptive Computation Time for Recurrent Neural Networks
The performance metric for the synthetic tasks was the sequence error rate: the fraction of examples where any mistakes were made in the complete output sequence. This metric is useful as it is trivial to evaluate without decoding. For character prediction the metric was the average log-loss of the output predictions, in units of bits per character. Most of the training parameters were fixed for all experiments: Adam was used for optimisation with a learning rate of $10^{-4}$, the Hogwild! algorithm was used for asynchronous training with 16 threads; the initial halting unit bias $b_h$ mentioned in Equation (5) was 1; the ε term from Equation (7) was 0.01. The synthetic tasks were all trained for 1M iterations, where an iteration
1603.08983#16
1603.08983#18
1603.08983
[ "1502.04623" ]
1603.08983#18
Adaptive Computation Time for Recurrent Neural Networks
Figure 3: Parity training Example. Each sequence consists of a single input and target vector. Only 8 of the 64 input bits are shown for clarity.
is defined as a weight update on a single thread (hence the total number of weight updates is approximately 16 times the number of iterations). The character prediction task was trained for 10K iterations. Early stopping was not used for any of the experiments. A logarithmic grid search over time penalties was performed for each experiment, with 20 randomly initialised networks trained for each value of τ.
1603.08983#17
1603.08983#19
1603.08983
[ "1502.04623" ]
1603.08983#19
Adaptive Computation Time for Recurrent Neural Networks
For the synthetic problems the range of the grid search was $i \times 10^{-j}$ with integer i in the range 1-10 and the exponent j in the range 1-4. For the language modelling task, which took many days to complete, the range of j was limited to 1-3 to reduce training time (lower values of τ, which naturally induce more pondering, tend to give greater data efficiency but slower wall clock training time). Unless otherwise stated the maximum computation time M (Equation (13)) was set to 100. In all experiments the networks converged on learned values of N(t) that were far less than M, which functions mainly as a safeguard against excessively long ponder times early in training.
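As a small illustration, the grid of time penalties described above can be enumerated directly (values only; the training loop itself is not sketched here):

```python
# Logarithmic grid of time penalties: i * 10**-j for integer i in 1..10 and
# exponent j in 1..4 (j restricted to 1..3 for the Wikipedia task).
taus = sorted({i * 10.0 ** -j for i in range(1, 11) for j in range(1, 5)})
print(min(taus), max(taus))   # values range from 1e-4 up to 1.0
```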
1603.08983#18
1603.08983#20
1603.08983
[ "1502.04623" ]
1603.08983#20
Adaptive Computation Time for Recurrent Neural Networks
# 3.1 Parity Determining the parity of a sequence of binary numbers is a trivial task for a recurrent neural network [27], which simply needs to implement an internal switch that changes sign every time a one is received. For shallow feedforward networks receiving the entire sequence in one vector, however, the number of distinct input patterns, and hence difficulty of the task, grows exponentially with the number of bits. We gauged the ability of ACT to infer an inherently sequential algorithm from statically presented data by presenting large binary vectors to the network and asking it to determine the parity. By varying the number of binary bits for which parity must be calculated we were also able to assess ACT's ability to adapt the amount of computation to the difficulty of the vector. The input vectors had 64 elements, of which a random number from 1 to 64 were randomly set to 1 or -1 and the rest were set to 0. The corresponding target was 1 if there was an odd number of ones and 0 if there was an even number of ones. Each training sequence consisted of a single input and target vector, an example of which is shown in Figure 3. The network architecture was a simple RNN with a single hidden layer containing 128 tanh units and a single sigmoidal output unit, trained with binary cross-entropy loss on minibatches of size 128. Note that without ACT the recurrent connection in the hidden layer was never used since the data had no sequential component, and the network reduced to a feedforward network with a single hidden layer. Figure 4 demonstrates that the network was unable to reliably solve the problem without ACT, with a mean of almost 40% error compared to 50% for random guessing. For penalties of 0.03 and below the mean error was below 5%. Figure 5 reveals that the solutions were both more rapid and more accurate with lower time penalties. It also highlights the relationship between the time penalty, the classification error rate and the average ponder time per input. The variance in ponder time for low τ networks is very high, indicating that many correct solutions with widely varying runtime can be discovered. We speculate that progressively higher τ values lead the network to compute
1603.08983#19
1603.08983#21
1603.08983
[ "1502.04623" ]
1603.08983#21
Adaptive Computation Time for Recurrent Neural Networks
[Figure: bar chart of sequence error rate versus time penalty, including a 'No ACT' bar.]
Figure 4: Parity Error Rates. Bar heights show the mean error rates for different time penalties at the end of training. The error bars show the standard error in the mean.
[Figure: sequence error rate plotted against training iterations (left) and against ponder (right) for time penalties from 0.0001 to 0.1 and for networks without ACT.]
Figure 5: Parity Learning Curves and Error Rates Versus Ponder Time.
1603.08983#20
1603.08983#22
1603.08983
[ "1502.04623" ]
1603.08983#22
Adaptive Computation Time for Recurrent Neural Networks
Left: faint coloured curves show the errors for individual runs. Bold lines show the mean errors over all 20 runs for each τ value. 'Iterations' is the number of gradient updates per asynchronous worker. Right: small circles represent individual runs after training is complete, large circles represent the mean over 20 runs for each τ value. 'Ponder' is the mean number of computation steps per input timestep (minimum 1). The black dotted line shows the mean error for the networks without ACT. The height of the ellipses surrounding the mean values represents the standard error over error rates for that value of τ, while the width shows the standard error over ponder times.
the parities of successively larger chunks of the input vector at each ponder step, then iteratively combine these calculations to obtain the parity of the complete vector. Figure 6 shows that for the networks without ACT and those with overly high time penalties, the error rate increases sharply with the difficulty of the task (where difficulty is defined
1603.08983#21
1603.08983#23
1603.08983
[ "1502.04623" ]
1603.08983#23
Adaptive Computation Time for Recurrent Neural Networks
as the number of bits whose parity must be determined), while the amount of ponder remains roughly constant. For the more successful networks, with intermediate τ values, ponder time appears to grow linearly with difficulty, with a slope that generally increases as τ decreases. Even for the best networks the error rate increased somewhat with difficulty. For some of the lowest τ networks there is a dramatic increase in ponder after about 32 bits, suggesting an inefficient algorithm. # 3.2 Logic Like parity, the logic task tests if an RNN with ACT can sequentially process a static vector. Unlike parity it also requires the network to internally transfer information across successive input timesteps, thereby testing whether ACT can propagate coherent internal states. Each input sequence consists of a random number from 1 to 10 of size 102 input vectors.
1603.08983#22
1603.08983#24
1603.08983
[ "1502.04623" ]
1603.08983#24
Adaptive Computation Time for Recurrent Neural Networks
The first two elements of each input represent a pair of binary numbers; the remainder of the vector is divided up into 10 chunks of size 10. The first B chunks, where B is a random number from
[Figure: ponder and error rate plotted against difficulty, including networks without ACT.]
Figure 6: Parity Ponder Time and Error Rate Versus Input Difficulty. Faint lines are individual runs, bold lines are means over 20 networks. 'Difficulty'
1603.08983#23
1603.08983#25
1603.08983
[ "1502.04623" ]
1603.08983#25
Adaptive Computation Time for Recurrent Neural Networks
is the number of bits in the parity vectors, with a mean over 1,000 random vectors used for each data-point.

Table 1: Binary Truth Tables for the Logic Task
P Q | NOR  Xq  ABJ  XOR  NAND  AND  XNOR  if/then  then/if  OR
T T |  F    F   F    F    F     T    T      T        T      T
T F |  F    F   T    T    T     F    F      F        T      T
F T |  F    T   F    T    T     F    F      T        F      T
F F |  T    F   F    F    T     F    T      T        T      F

1 to 10, contain one-hot representations of randomly chosen numbers between 1 and 10; each of these numbers correspond to an index into the subset of binary logic gates whose truth tables are listed in Table 1. The remaining 10 - B chunks were zeroed to indicate that no further binary operations were defined for that vector. The binary target $b_{B+1}$ for each input is the truth value yielded by recursively applying the B binary gates in the vector to the two initial bits $b_1, b_0$. That is, for $1 \le i \le B$:

$$b_{i+1} = T_i(b_i, b_{i-1}) \quad (21)$$

where $T_i(\cdot, \cdot)$ is the truth table indexed by chunk i in the input vector. For the first vector in the sequence, the two input bits $b_0, b_1$ were randomly chosen to be false (0) or true (1) and assigned to the first two elements in the vector. For subsequent vectors, only $b_1$ was random, while $b_0$ was implicitly equal to the target bit from the previous vector (for the purposes of calculating the current target bit), but was always set to zero in the input vector. To solve the task, the network therefore had to learn both how to calculate the sequence of binary operations represented by the chunks in each vector, and how to carry the final output of that sequence over to the next timestep. An example input-target sequence pair is shown in Figure 7. The network architecture was single-layer LSTM with 128 cells. The output was a single sigmoidal unit, trained with binary cross-entropy, and the minibatch size was 16. Figure 8 shows that the network reaches a minimum sequence error rate of around 0.2 without ACT (compared to 0.5 for random guessing), and virtually zero error for all τ
1603.08983#24
1603.08983#26
1603.08983
[ "1502.04623" ]
1603.08983#26
Adaptive Computation Time for Recurrent Neural Networks
≤ 0.01. From Figure 9 it can be seen that low τ ACT networks solve the task very quickly, requiring about 10,000 training iterations. For higher τ values ponder time reduces to 1, at which point the networks trained with ACT behave identically to those without. For lower τ values, the spread of ponder values, and hence computational cost, is quite large. Again we speculate that this is due to the network learning more or less 'chunked' solutions in which composite truth tables are learned for multiple successive logic operations. This is somewhat supported by the clustering of the lowest τ networks around a ponder time of 5-6, which is approximately the mean number of logic gates applied per sequence,
1603.08983#25
1603.08983#27
1603.08983
[ "1502.04623" ]
1603.08983#27
Adaptive Computation Time for Recurrent Neural Networks
¬& & & § BB é6¢ 8 6 8 is} §ss 8 88888 8S 8 8 S&B EEBSB SB $eé eee x eesegeegeeegsegeecscss6 56 6 6 6 6 Ps 6 6 6 6 6 6 6 6 6 s Time Penalty Figure 8: Logic Error Rates. and hence the minimum number of computations the network would need if calculating single binary operations at a time. Figure 10 shows a surprisingly high ponder time for the least diï¬ cult inputs, with some networks taking more than 10 steps to evaluate a single logic gate. From 5 to 10 logic gates, ponder gradually increases with diï¬ culty as expected, suggesting that a qualitatively diï¬ erent solution is learned for the two regimes. This is supported by the error rates for the non ACT and high Ï networks, which increase abruptly after 5 gates. It may be that 5 is the upper limit on the number of successive gates the network can learn as a single composite operation, and thereafter it is forced to apply an iterative algorithm. # 3.3 Addition The addition task presents the network with a input sequence of 1 to 5 size 50 input vectors. Each vector represents a D digit number, where D is drawn randomly from 1 to 5, and each digit is drawn randomly from 0 to 9.
1603.08983#26
1603.08983#28
1603.08983
[ "1502.04623" ]
1603.08983#28
Adaptive Computation Time for Recurrent Neural Networks
The ï¬ rst 10D elements of the vector are a concatenation of one-hot encodings of the D digits in the number, and the remainder of the vector is set to 0. The required output is the cumulative sum of all inputs up to the current one, represented as a set of 6 simultaneous classiï¬ cations for the 6 possible digits in the sum. There is no target for the ï¬ rst vector in the sequence, as no sums have yet been calculated. Because the previous sum must be carried over by the network, this task again requires the internal state of the network to remain coherent.
1603.08983#27
1603.08983#29
1603.08983
[ "1502.04623" ]
1603.08983#29
Adaptive Computation Time for Recurrent Neural Networks
Each classiï¬ cation is modelled by a size 11 softmax, where the ï¬ rst 10 classes are the digits and the 11th is a special marker used to indicate that the number is complete. An example input-target pair is shown in Figure 11. The network was single-layer LSTM with 512 memory cells. The loss function was the joint cross-entropy of all 6 targets at each time-step where targets were present and the minibatch size 10 07 0.30 Time Penalty â "oooor 0.6 0.25 2 2 © 05 Ch os [aa a 8 6 e 04 i 0.15 w w 8 8 0.3 0.10 fat c 3 g a iow oO 0.2 o 0.05 n n 0.0 oO 200000 400000 600000 800000 1000000 7 o 2 4 6 8 10 12 14 â _â o1 i without Act Iterations Ponder Figure 9: Logic Learning Curves and Error Rates Versus Ponder Time.
1603.08983#28
1603.08983#30
1603.08983
[ "1502.04623" ]
1603.08983#30
Adaptive Computation Time for Recurrent Neural Networks
os â Time Penalty ° Ponder Sequence Error Rate 7 8 9 10 I 2 3 4 7 8 9 10 56 56 Difficulty Difficulty Figure 10: Logic Ponder Time and Error Rate Versus Input Diï¬ culty. â Diï¬ cultyâ is the number of logic gates in each input vector; all sequences were length 5. 1}/3}/6 i $ 0//9)/8 3/18 3/|2||4| â â alls 8/|-]/5 «Ilo -}[-} [0 alfa nput seq. Target seq. Figure 11: Addition training Example. Each digit in the input sequence is represented by a size 10 one hot encoding. Unused input digits, marked â -â , are represented by a vector of 10 zeros. The black vector at the start of the target sequence indicates that no target was required for that step. The target digits are represented as 1-of-11 classes, where the 11th class, marked â *â , is used for digits beyond the end of the target number. 11 se eeeesepeeeggegeeeexecpeeganzeeepegare s§ssssss88s88s88e8s8s8s88s8sesesgssesg8eggge¢8gsgsgegegseX eeeegsgse 8 8G e&sgsSée sé 6S SF SF 6 SC oe oo 8 8 ° $6666 6 5 6 6 2 Time Penalty Figure 12: Addition Error Rates. 1.0 07 Time Penalty â 00001 â 0.0002 06 â 0.0003 08 â 00004 â 0.0005 v Lv = cocoe oS _ os o 0.0007 fed e â 0.0008 â 0.0009 L Lo â 0001 £ 0.6 £ o4 G02 â 0003 uu Ww â 0.004 o © 03 â â 0.005 Vv Vv 0.006 S04 ij 0.007 o oO 0.008 3 302 0009 fou fog â oo oO oO â 00 eA) un â
1603.08983#29
1603.08983#31
1603.08983
[ "1502.04623" ]
1603.08983#31
Adaptive Computation Time for Recurrent Neural Networks
003 0.2 0.1 boa â 005 â 006 0.0 EDO D000=-0-=2 ND OO + er 0.0 â 009 oO 200000 400000 600000 800000 1000000 oO 2 4 6 8 10 2 14 â o jl = without act Iterations Ponder Figure 13: Addition Learning Curves and Error Rates Versus Ponder Time. was 32. The maximum ponder M was set to 20 for this task, as it was found that some networks had very high ponder times early in training. The results in Figure 12 show that the task was perfectly solved by the ACT networks for all values of Ï in the grid search. Unusually, networks with higher Ï solved the problem with fewer training examples. Figure 14 demonstrates that the relationship between the ponder time and the number of digits was approximately linear for most of the ACT networks, and that for the most eï¬ cient networks (with the highest Ï values) the slope of the line was close to 1, which matches our expectations that an eï¬
1603.08983#30
1603.08983#32
1603.08983
[ "1502.04623" ]
1603.08983#32
Adaptive Computation Time for Recurrent Neural Networks
efficient long addition algorithm should need one computation step per digit. Figure 15 shows how the ponder time is distributed during individual addition sequences, providing further evidence of an approximately linear-time long addition algorithm. # 3.4 Sort The sort task requires the network to sort sequences of 2 to 15 numbers drawn from a standard normal distribution in ascending order. The experiments considered so far have been designed to favour ACT by compressing sequential information into single vectors, and thereby requiring the use of multiple computation steps to unpack them. For the sort task a more natural sequential representation was used: the random numbers were presented one at a time as inputs, and the required output was the sequence of indices into the number sequence placed in sorted order; an example is shown in Figure 16. We were particularly curious to see how the number of ponder steps scaled with the number of elements to be sorted, knowing that efficient sorting algorithms have O(N log N) computational cost.
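A small sketch of how a sort-task example could be constructed from this description and the Figure 16 caption; variable names are ours, and in the full task the index targets are emitted over additional zero-input steps after the end-of-sequence flag, which this sketch does not lay out explicitly.

```python
import numpy as np

def sort_example(rng, min_len=2, max_len=15):
    """Inputs: one number per step plus a binary end-of-sequence flag.
    Targets: after the flag, the indices of the inputs in ascending order."""
    n = rng.integers(min_len, max_len + 1)
    values = rng.normal(size=n)                   # standard normal samples
    flags = np.zeros(n)
    flags[-1] = 1.0                               # mark the end of the sequence to be sorted
    inputs = np.stack([values, flags], axis=1)    # size-2 input vectors
    targets = np.argsort(values)                  # indices in ascending order
    return inputs, targets

rng = np.random.default_rng(0)
x, t = sort_example(rng)
```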
1603.08983#31
1603.08983#33
1603.08983
[ "1502.04623" ]
1603.08983#33
Adaptive Computation Time for Recurrent Neural Networks
The network was single-layer LSTM with 512 cells. The output layer was a size 15 softmax,
[Figure: ponder and sequence error rate plotted against difficulty, including networks without ACT.]
Figure 14: Addition Ponder Time and Error Rate Versus Input Difficulty. 'Difficulty' is the number of digits in each input vector; all sequences were length 3.
[Figure: ponder time plotted over three addition sequences, with the input digits along the bottom axis and the output digits along the top axis.]
Figure 15: Ponder Time During Three Addition Sequences. The input sequence is shown along the bottom x-axis and the network output sequence is shown along the top x-axis. The ponder time $\rho_t$ at each input step is shown by the black lines; the actual number of computational steps taken at each point is $\rho_t$ rounded up to the next integer. The grey lines show the total number of digits in the two numbers being summed at each step; this appears to give a rough lower bound on the ponder time, suggesting an internal algorithm that is approximately linear in the number of digits.
1603.08983#32
1603.08983#34
1603.08983
[ "1502.04623" ]
1603.08983#34
Adaptive Computation Time for Recurrent Neural Networks
All plots were created using the same network, trained with $\tau = 9 \times 10^{-4}$.
trained with cross-entropy to classify the indices of the sorted inputs. The minibatch size was 16. Figure 17 shows that the advantage of using ACT is less dramatic for this task than the previous three, but still substantial (from around 12% error without ACT to around 6% for the best τ value). However from Figure 18 it is clear that these gains come at a heavy computational cost, with the best networks requiring roughly 9 times as much computation as those without ACT. Not surprisingly, Figure 19 shows that the error rate grew rapidly with the sequence length for all networks. It also indicates that the better networks had a sublinear growth in computations per input step with sequence length, though whether this indicates a logarithmic time algorithm is unclear. One problem with the sort task was that the Gaussian samples were sometimes very close together, making it hard for the network to determine which was greater; enforcing a minimum separation between successive values would probably be beneficial.
1603.08983#33
1603.08983#35
1603.08983
[ "1502.04623" ]
1603.08983#35
Adaptive Computation Time for Recurrent Neural Networks
Figure 20 shows the ponder time during three sort sequences of varying length. As can be seen, there is a large spike in ponder time near (though not precisely at) the end of the input sequence, presumably when the majority of the sort comparisons take place. Note that the spike is much higher for the longer two sequences than the length 5 one, again pointing to an algorithm that is nonlinear
[Figure: a sort training example showing the input values with end-of-sequence flag and the target index sequence.]
Figure 16: Sort training Example. Each size 2 input vector consists of one real number and one binary flag to indicate the end of sequence to be sorted; inputs following the sort sequence are set to zero and marked in black. No targets are present until after the sort sequence; thereafter the size 15 target vectors represent the sorted indices of the input sequence.
[Figure: bar chart of sequence error rate versus time penalty.]
Figure 17: Sort Error Rates.
[Figure: sequence error rate against training iterations (left) and ponder (right) for a range of time penalties and for networks without ACT.]
Figure 18: Sort Learning Curves and Error Rates Versus Ponder Time.
[Figure: ponder and sequence error rate plotted against difficulty.]
Figure 19: Sort Ponder Time and Error Rate Versus Input Difficulty. 'Difficulty' is the length of the sequence to be sorted.
1603.08983#34
1603.08983#36
1603.08983
[ "1502.04623" ]
1603.08983#36
Adaptive Computation Time for Recurrent Neural Networks
600000 = 8000001000000 0 2 4 6 8 10 R 01 7 = without act Iterations Ponder Figure 18: Sort Learning Curves and Error Rates Versus Ponder Time. Ponder Sequence Error Rate 0 2 14 2 4 0 2 4 8 Ft 8 1 Difficulty Difficulty Figure 19: Sort Ponder Time and Error Rate Versus Input Diï¬ culty. sorted. â Diï¬ cultyâ
1603.08983#35
1603.08983#37
1603.08983
[ "1502.04623" ]
1603.08983#37
Adaptive Computation Time for Recurrent Neural Networks
is the length of the sequence to be 14 Outputs 2 8117 00 053 180 064 041 065 024 020 090 051 140 067 034 Outputs Outputs 2 2 Ponder 834133 005 077 097 098 059 097 -0.82 1.74 22 0.83 155 025 0.64 inputs Inputs Figure 20: Ponder Time During Three Sort Sequences. The input sequences to be sorted are shown along the bottom x-axes and the network output sequences are shown along the top x-axes. All plots created using the same network, trained with Ï
1603.08983#36
1603.08983#38
1603.08983
[ "1502.04623" ]
1603.08983#38
Adaptive Computation Time for Recurrent Neural Networks
= 10â 3. 1.60 . o gue T 156 2 154 aan (4 ao 1.50 ga 299+ 2 © & @ e@eam se oer Bae ssssssss8eegse â ¬ §& §& §& FEE se<eseeseegesesescsssesé6 ss 6s 6 < 666 6 66 6 6S 6 ° 2 Time Penalty Figure 21:
1603.08983#37
1603.08983#39
1603.08983
[ "1502.04623" ]
1603.08983#39
Adaptive Computation Time for Recurrent Neural Networks
Wikipedia Error Rates. in sequence length (the average ponder per timestep is nonetheless lower for longer sequences, as little pondering is done away from the spike.). # 3.5 Wikipedia Character Prediction The Wikipedia task is character prediction on text drawn from the Hutter prize Wikipedia dataset [15]. Following previous RNN experiments on the same data [8], the raw unicode text was used, including XML tags and markup characters, with one byte presented per input timestep and the next byte predicted as a target. No validation set was used for early stopping, as the networks were unable to overï¬ t the data, and all error rates are recorded on the training set. Sequences of 500 consecutive bytes were randomly chosen from the training set and presented to the network, whose internal state was reset to 0 at the start of each sequence. LSTM networks were used with a single layer of 1500 cells and a size 256 softmax classiï¬ cation layer. As can be seen from Figures 21 and 22, the error rates are fairly similar with and without ACT, and across values of Ï (although the learning curves suggest that the ACT networks are somewhat more data eï¬ cient). Furthermore the amount of ponder per input is much lower than for the other problems, suggesting that the advantages of extra computation were slight for this task. However Figure 23 reveals an intriguing pattern of ponder allocation while processing a sequence. Character prediction networks trained with ACT consistently pause at spaces between words, and pause for longer at â boundaryâ characters such as commas and full stops. We speculate that the extra computation is used to make predictions about the next â chunkâ in the data (word, sentence, clause), much as humans have been found to do in self-paced reading experiments [16]. This suggests that ACT could be useful for inferring implicit boundaries or transitions in sequence data. Alternative measures for inferring transitions include the next-step prediction loss and predictive entropy, both of which tend to increase during harder predictions.
1603.08983#38
1603.08983#40
1603.08983
[ "1502.04623" ]
1603.08983#40
Adaptive Computation Time for Recurrent Neural Networks
However, as can be seen from the ï¬ gure, they 15 2.2 1.80 24 175 â 0003 = coos - . = aces . = avee G20 5 : ares 2 ra £ â coos U 5 0 = 00s K 15 o â oor 2 c ote GS ro â 003 ~ â 0s 5 18 o ts g a â 00s a â oor v 2 = ace 2a Pa = 00s a â o1 â Without act 16 145 15 1.40 2000 3000 4000 5000 6000 7000 8000 9000 10000 0.9 1.0 11 12 13 14 15 16 17 Iterations Ponder Figure 22: Wikipedia Learning Curves (Zoomed) and Error Rates Versus Ponder Time. Entropy (bits) and the many people caught in the middle of the two. In recent history, with scientists learning Loss (bits) and the many people caught in the middle of the two. In recent history, with scientists learning Ponder and the many people caught in the middle of the two. In recent history, with scientists learning
1603.08983#39
1603.08983#41
1603.08983
[ "1502.04623" ]
1603.08983#41
Adaptive Computation Time for Recurrent Neural Networks
Figure 23: Ponder Time, Prediction loss and Prediction Entropy During a Wikipedia Text Sequence. Plot created using a network trained with Ï = 6eâ 3 are a less reliable indicator of boundaries, and are not likely to increase at points such as full stops and commas, as these are invariably followed by space characters. More generally, loss and entropy only indicate the diï¬ culty of the current prediction, not the degree to which the current input is likely to impact future predictions. Furthermore Figure 24 reveals that, as well as being an eï¬ ective detector of non-text transition markers such as the opening brackets of XML tags, ACT does not increase computation time during random or fundamentally unpredictable sequences like the two ID numbers. This is unsurprising, as doing so will not improve its predictions. In contrast, both entropy and loss are inevitably high for unpredictable data. We are therefore hopeful that computation time will provide a better way to distinguish between structure and noise (or at least data perceived by the network as structure or noise) than existing measures of predictive diï¬
1603.08983#40
1603.08983#42
1603.08983
[ "1502.04623" ]
1603.08983#42
Adaptive Computation Time for Recurrent Neural Networks
culty. # 4 Conclusion This paper has introduced Adaptive Computation time (ACT), a method that allows recurrent neural networks to learn how many updates to perform for each input they receive. Experiments on 16 Entropy (bits) » United States security treaty</title> <id>1157</id> <revision> <id>15899658</id> a; rand Ba 32 be United States security treaty</title> <id>1157</id> <revision> <id>15899658</id> Be Boo gis gs » United States security treaty</title> <id>1157</id> <revision> <id>15899658</id> Figure 24: Ponder Time, Prediction loss and Prediction Entropy During a Wikipedia Sequence Containing XML Tags. Created using the same network as Figure 23. synthetic data prove that ACT can make otherwise inaccessible problems straightforward for RNNs to learn, and that it is able to dynamically adapt the amount of computation it uses to the demands of the data. An experiment on real data suggests that the allocation of computation steps learned by ACT can yield insight into both the structure of the data and the computational demands of predicting it. ACT promises to be particularly interesting for recurrent architectures containing soft attention modules [2, 10, 34, 12], which it could enable to dynamically adapt the number of glances or internal operations they perform at each time-step. One weakness of the current algorithm is that it is quite sensitive to the time penalty parameter that controls the relative cost of computation time versus prediction error. An important direction for future work will be to ï¬ nd ways of automatically determining and adapting the trade-oï¬ between accuracy and speed.
1603.08983#41
1603.08983#43
1603.08983
[ "1502.04623" ]
1603.08983#43
Adaptive Computation Time for Recurrent Neural Networks
# Acknowledgments The author wishes to thank Ivo Danihelka, Greg Wayne, Tim Harley, Malcolm Reynolds, Jacob Menick, Oriol Vinyals, Joel Leibo, Koray Kavukcuoglu and many others on the DeepMind team for valuable comments and suggestions, as well as Albert Zeyer, Martin Abadi, Dario Amodei, Eugene Brevdo and Christopher Olah for pointing out the discontinuity in the ponder cost, which was erroneously described as smooth in an earlier version of the paper. # References [1] G. An. The eff
1603.08983#42
1603.08983#44
1603.08983
[ "1502.04623" ]
1603.08983#44
Adaptive Computation Time for Recurrent Neural Networks
ects of adding noise during backpropagation training on a generalization performance. Neural Computation, 8(3):643–674, 1996. [2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. abs/1409.0473, 2014. [3] E. Bengio, P.-L. Bacon, J. Pineau, and D. Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015. [4] D. C. Ciresan, U. Meier, and J. Schmidhuber.
1603.08983#43
1603.08983#45
1603.08983
[ "1502.04623" ]
1603.08983#45
Adaptive Computation Time for Recurrent Neural Networks
Multi-column deep neural networks for image classification. In arXiv:1202.2745v1 [cs.CV], 2012. [5] G. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, IEEE Transactions on, 20(1):30–42, Jan. 2012. [6] L. Denoyer and P. Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014. [7] S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016. [8] A. Graves.
1603.08983#44
1603.08983#46
1603.08983
[ "1502.04623" ]
1603.08983#46
Adaptive Computation Time for Recurrent Neural Networks
Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013. [9] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013. [10] A. Graves, G.
1603.08983#45
1603.08983#47
1603.08983
[ "1502.04623" ]
1603.08983#47
Adaptive Computation Time for Recurrent Neural Networks
Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. [11] E. Grefenstette, K. M. Hermann, M. Suleyman, and P. Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015. [12] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015. [13] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001. [14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. [15] M. Hutter. Universal artificial intelligence. Springer, 2005. [16] M. A. Just, P. A. Carpenter, and J. D. Woolley.
1603.08983#46
1603.08983#48
1603.08983
[ "1502.04623" ]
1603.08983#48
Adaptive Computation Time for Recurrent Neural Networks
Paradigms and processes in reading comprehension. Journal of experimental psychology: General, 111(2):228, 1982. [17] N. Kalchbrenner, I. Danihelka, and A. Graves. Grid long short-term memory. arXiv preprint arXiv:1507.01526, 2015. [18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [20] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014. [21] M. Li and P. Vitányi.
1603.08983#47
1603.08983#49
1603.08983
[ "1502.04623" ]
1603.08983#49
Adaptive Computation Time for Recurrent Neural Networks
An introduction to Kolmogorov complexity and its applications. Springer Science & Business Media, 2013. [22] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013. [23] B. A. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. [24] B. Recht, C. Re, S. Wright, and F. Niu.
1603.08983#48
1603.08983#50
1603.08983
[ "1502.04623" ]
1603.08983#50
Adaptive Computation Time for Recurrent Neural Networks
Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011. [25] S. Reed and N. de Freitas. Neural programmer-interpreters. Technical Report arXiv:1511.06279, 2015. [26] J. Schmidhuber. Self-delimiting neural networks. arXiv preprint arXiv:1210.0118, 2012. [27] J. Schmidhuber and S. Hochreiter.
1603.08983#49
1603.08983#51
1603.08983
[ "1502.04623" ]
1603.08983#51
Adaptive Computation Time for Recurrent Neural Networks
Guessing can outperform many long time lag algorithms. Technical report, 1996. [28] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014. [29] R. K. Srivastava, K.
1603.08983#50
1603.08983#52
1603.08983
[ "1502.04623" ]
1603.08983#52
Adaptive Computation Time for Recurrent Neural Networks
Greff, and J. Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2368–2376, 2015. [30] R. K. Srivastava, B. R. Steunebrink, and J. Schmidhuber. First experiments with powerplay. Neural Networks, 41:130–136, 2013. [31] S. Sukhbaatar, J. Weston, R. Fergus, et al.
1603.08983#51
1603.08983#53
1603.08983
[ "1502.04623" ]
1603.08983#53
Adaptive Computation Time for Recurrent Neural Networks
End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439, 2015. [32] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215, 2014. [33] O. Vinyals, S. Bengio, and M. Kudlur. Order matters:
1603.08983#52
1603.08983#54
1603.08983
[ "1502.04623" ]
1603.08983#54
Adaptive Computation Time for Recurrent Neural Networks
Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015. [34] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pages 2674–2682, 2015. [35] A. J. Wiles. Modular elliptic curves and Fermat's last theorem. Annals of Mathematics, 141:141, 1995. [36] R. J. Williams and D. Zipser.
1603.08983#53
1603.08983#55
1603.08983
[ "1502.04623" ]
1603.08983#55
Adaptive Computation Time for Recurrent Neural Networks
Gradient-based learning algorithms for recurrent networks and their computational complexity. Back-propagation: Theory, architectures and applications, pages 433–486, 1995.
1603.08983#54
1603.08983
[ "1502.04623" ]
1603.06147#0
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
arXiv:1603.06147v4 [cs.CL] 21 Jun 2016 # A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation # Junyoung Chung Université de Montréal [email protected] Kyunghyun Cho New York University Yoshua Bengio Université de Montréal CIFAR Senior Fellow # Abstract The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder–decoder with a subword-level encoder and a character-level decoder on four language pairs – En-Cs, En-De, En-Ru and En-Fi – using the parallel corpora from WMT'
1603.06147#1
1603.06147
[ "1605.02688" ]
1603.06147#1
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru. tion, although neural networks do not suffer from character-level modelling and rather suffer from the issues specific to word-level modelling, such as the increased computational complexity from a very large target vocabulary (Jean et al., 2015; Luong et al., 2015b). Therefore, in this paper, we address a question of whether neural machine translation can be done directly on a sequence of characters without any explicit word segmentation. To answer this question, we focus on representing the target side as a character sequence. We evaluate neural machine translation models with a character-level decoder on four language pairs from WMT'15 to make our evaluation as convincing as possible. We represent the source side as a sequence of subwords extracted using byte-pair encoding from Sennrich et al. (2015), and vary the target side to be either a sequence of subwords or characters. On the target side, we further design a novel recurrent neural network (RNN), called bi-scale recurrent network, that better handles multiple timescales in a sequence, and test it in addition to a naive, stacked recurrent neural network. # 1 Introduction The existing machine translation systems have relied almost exclusively on word-level modelling with explicit segmentation. This is mainly due to the issue of data sparsity which becomes much more severe, especially for n-grams, when a sentence is represented as a sequence of characters rather than words, as the length of the sequence grows signifi
1603.06147#0
1603.06147#2
1603.06147
[ "1605.02688" ]
1603.06147#2
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
cantly. In addition to data sparsity, we often have a priori belief that a word, or its segmented-out lexeme, is a basic unit of meaning, making it natural to approach translation as mapping from a sequence of source-language words to a sequence of target-language words. On all of the four language pairs – En-Cs, En-De, En-Ru and En-Fi – the models with a character-level decoder outperformed the ones with a subword-level decoder. We observed a similar trend with the ensemble of each of these configurations, outperforming both the previous best neural and non-neural translation systems on En-Cs, En-De and En-Fi, while achieving a comparable result on En-Ru. We find these results to be a strong evidence that neural machine translation can indeed learn to translate at the character-level and that in fact, it benefi
1603.06147#1
1603.06147#3
1603.06147
[ "1605.02688" ]
1603.06147#3
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
ts from doing so. This has continued with the more recently proposed paradigm of neural machine translation. # 2 Neural Machine Translation Neural machine translation refers to a recently proposed approach to machine translation (Forcada and Neco, 1997; Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). This approach aims at building an end-to-end neural network that takes as input a source sentence $X = (x_1, \ldots, x_{T_x})$ and outputs its translation $Y = (y_1, \ldots, y_{T_y})$, where $x_t$ and $y_{t'}$ are respectively source and target symbols. This neural network is constructed as a composite of an encoder network and a decoder network. The encoder network encodes the input sentence $X$ into its continuous representation. In this paper, we closely follow the neural translation model proposed in Bahdanau et al. (2015) and use a bidirectional recurrent neural network, which consists of two recurrent neural networks. The forward network reads the input sentence in a forward direction: $\overrightarrow{z}_t = \overrightarrow{\phi}(e_x(x_t), \overrightarrow{z}_{t-1})$, where $e_x(x_t)$ is a continuous embedding of the $t$-th input symbol, and $\phi$ is a recurrent activation function. Similarly, the reverse network reads the sentence in a reverse direction (right to left): $\overleftarrow{z}_t = \overleftarrow{\phi}(e_x(x_t), \overleftarrow{z}_{t+1})$. At each location in the input sentence, we concatenate the hidden states from the forward and reverse RNNs to form a context set $C = \{z_1, \ldots, z_{T_x}\}$, where $z_t = [\overrightarrow{z}_t; \overleftarrow{z}_t]$. Then the decoder computes the conditional distribution over all possible translations based on this context set. This is done by first rewriting the conditional probability of a translation: $\log p(Y|X) = \sum_{t'=1}^{T_y} \log p(y_{t'} \mid y_{<t'}, X)$. For each conditional term in the summation, the decoder RNN updates its hidden state by $h_{t'} = \phi(e_y(y_{t'-1}), h_{t'-1}, c_{t'})$, (1) where $e_y$ is the continuous embedding of a target symbol. $c_{t'}$ is a context vector computed by a soft-alignment mechanism: $c_{t'} = f_{\text{align}}(e_y(y_{t'-1}), h_{t'-1}, C)$. (2)
1603.06147#2
1603.06147#4
1603.06147
[ "1605.02688" ]
1603.06147#4
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
The soft-alignment mechanism $f_{\text{align}}$ weights each vector in the context set $C$ according to its relevance given what has been translated. The weight of each vector $z_t$ is computed by $\alpha_{t',t} = \frac{1}{Z} e^{f_{\text{score}}(e_y(y_{t'-1}), h_{t'-1}, z_t)}$, (3) where $f_{\text{score}}$ is a parametric function returning an unnormalized score for $z_t$ given $h_{t'-1}$ and $y_{t'-1}$. We use a feedforward network with a single hidden layer in this paper.¹ $Z$ is a normalization constant: $Z = \sum_{k=1}^{T_x} e^{f_{\text{score}}(e_y(y_{t'-1}), h_{t'-1}, z_k)}$. This procedure can be understood as computing the alignment probability between the $t'$-th target symbol and $t$-th source symbol. The hidden state $h_{t'}$, together with the previous target symbol $y_{t'-1}$ and the context vector $c_{t'}$, is fed into a feedforward neural network to result in the conditional distribution: $p(y_{t'} \mid y_{<t'}, X) \propto e^{f_{\text{out}}(e_y(y_{t'-1}), h_{t'}, c_{t'})}$. (4)
1603.06147#3
1603.06147#5
1603.06147
[ "1605.02688" ]
1603.06147#5
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
The whole model, consisting of the encoder, decoder and soft-alignment mechanism, is then tuned end-to-end to minimize the negative log-likelihood using stochastic gradient descent. # 3 Towards Character-Level Translation # 3.1 Motivation Let us revisit how the source and target sentences (X and Y) are represented in neural machine translation. For the source side of any given training corpus, we scan through the whole corpus to build a vocabulary Vx of unique tokens to which we assign integer indices. A source sentence X is then built as a sequence of the indices of such tokens belonging to the sentence, i.e., X = (x1, . . . , xTx), where xt ∈ {1, 2, . . . , |Vx|}. The target sentence is similarly transformed into a target sequence of integer indices. Each token, or its index, is then transformed into a so-called one-hot vector of dimensionality |Vx|. All but one elements of this vector are set to 0. The only element whose index corresponds to the token's index is set to 1. This one-hot vector is the one which any neural machine translation model sees. The embedding function, ex or ey, is simply the result of applying a linear transformation (the embedding matrix) to this one-hot vector. The important property of this approach based on one-hot vectors is that the neural network is oblivious to the underlying semantics of the tokens. To the neural network, each and every token in the vocabulary is equal distance away from every other token. The semantics of those tokens are simply learned (into the embeddings) to maximize the translation quality, or the log-likelihood of the model. This property allows us great freedom in the choice of tokens'
1603.06147#4
1603.06147#6
1603.06147
[ "1605.02688" ]
1603.06147#6
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
unit. (Footnote 1: For other possible implementations, see (Luong et al., 2015a).) Neural networks have been shown to work well with word tokens (Bengio et al., 2001; Schwenk, 2007; Mikolov et al., 2010) but also with finer units, such as subwords (Sennrich et al., 2015; Botha and Blunsom, 2014; Luong et al., 2013) as well as symbols resulting from compression/encoding (Chitnis and DeNero, 2015). Although there have been a number of previous research reporting the use of neural networks with characters (see, e.g., Mikolov et al. (2012) and Santos and Zadrozny (2014)), the dominant approach has been to preprocess the text into a sequence of symbols, each associated with a sequence of characters, after which the neural network is presented with those symbols rather than with characters. More recently in the context of neural machine translation, two research groups have proposed to directly use characters. Kim et al. (2015) proposed to represent each word not as a single integer index as before, but as a sequence of characters, and use a convolutional network followed by a highway network (Srivastava et al., 2015) to extract a continuous representation of the word. This approach, which effectively replaces the embedding function ex, was adopted by Costa-Jussà and Fonollosa (2016) for neural machine translation. Similarly, Ling et al. (2015b) use a bidirectional recurrent neural network to replace the embedding functions ex and ey to respectively encode a character sequence to and from the corresponding continuous word representation. A similar, but slightly different approach was proposed by Lee et al. (2015), where they explicitly mark each character with its relative location in a word (e.g., '
1603.06147#5
1603.06147#7
1603.06147
[ "1605.02688" ]
1603.06147#7
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
B'eginning and 'I'ntermediate). Despite the fact that these recent approaches work at the level of characters, it is less satisfying that they all rely on knowing how to segment characters into words. Although it is generally easy for languages like English, this is not always the case. This word segmentation procedure can be as simple as tokenization followed by some punctuation normalization, but also can be as complicated as morpheme segmentation requiring a separate model to be trained in advance (Creutz and Lagus, 2005; Huang and Zhao, 2007). Furthermore, these segmentation² steps are often tuned or designed separately from the ultimate objective of translation quality, potentially contributing to a suboptimal quality. (Footnote 2: From here on, the term segmentation broadly refers to any method that splits a given character sequence into a sequence of subword symbols.)
1603.06147#6
1603.06147#8
1603.06147
[ "1605.02688" ]
1603.06147#8
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
Based on this observation and analysis, in this paper, we ask ourselves and the readers a question which should have been asked much earlier: Is it possible to do character-level translation without any explicit segmentation? # 3.2 Why Word-Level Translation? (1) Word as a Basic Unit of Meaning A word can be understood in two different senses. In the abstract sense, a word is a basic unit of mean- ing (lexeme), and in the other sense, can be un- derstood as a â concrete word as used in a sen- tence.â
1603.06147#7
1603.06147#9
1603.06147
[ "1605.02688" ]
1603.06147#9
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
(Booij, 2012). A word in the former sense turns into that in the latter sense via a process of morphology, including inï¬ ection, compound- ing and derivation. These three processes do al- ter the meaning of the lexeme, but often it stays close to the original meaning. Because of this view of words as basic units of meaning (either in the form of lexemes or derived form) from lin- guistics, much of previous work in natural lan- guage processing has focused on using words as basic units of which a sentence is encoded as a sequence.
1603.06147#8
1603.06147#10
1603.06147
[ "1605.02688" ]
1603.06147#10
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
Also, the potential difï¬ culty in ï¬ nding a mapping between a wordâ s character sequence and meaning3 has likely contributed to this trend toward word-level modelling. (2) Data Sparsity There is a further technical reason why much of previous research on ma- chine translation has considered words as a ba- sic unit. This is mainly due to the fact that ma- jor components in the existing translation systems, such as language models and phrase tables, are a count-based estimator of probabilities. In other words, a probability of a subsequence of sym- bols, or pairs of symbols, is estimated by count- ing the number of its occurrences in a training corpus. This approach severely suffers from the issue of data sparsity, which is due to a large state space which grows exponentially w.r.t. the length of subsequences while growing only lin- early w.r.t. the corpus size. This poses a great chal- lenge to character-level modelling, as any subse- quence will be on average 4â 5 times longer when characters, instead of words, are used. Indeed, Vilar et al. (2007) reported worse performance when the character sequence was directly used by a phrase-based machine translation system. More 3For instance, â quitâ , â quiteâ and â quietâ
1603.06147#9
1603.06147#11
1603.06147
[ "1605.02688" ]
1603.06147#11
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
are one edit- distance away from each other but have distinct meanings. recently, Neubig et al. (2013) proposed a method to improve character-level translation with phrase- based translation systems, however, with only a limited success. (3) Vanishing Gradient Speciï¬ cally to neural machine translation, a major reason behind the wide adoption of word-level modelling is due to the difï¬ culty in modelling long-term dependen- cies with recurrent neural networks (Bengio et al., 1994; Hochreiter, 1998). As the lengths of the sentences on both sides grow when they are repre- sented in characters, it is easy to believe that there will be more long-term dependencies that must be captured by the recurrent neural network for suc- cessful translation. # 3.3 Why Character-Level Translation? Why not Word-Level Translation? The most pressing issue with word-level processing is that we do not have a perfect word segmentation al- gorithm for any one language. A perfect segmen- tation algorithm needs to be able to segment any given sentence into a sequence of lexemes and morphemes. This problem is however a difï¬ cult problem on its own and often requires decades of research (see, e.g., Creutz and Lagus (2005) for Finnish and other morphologically rich languages and Huang and Zhao (2007) for Chinese). There- fore, many opt to using either a rule-based tok- enization approach or a suboptimal, but still avail- able, learning based segmentation algorithm. The outcome of this naive, sub-optimal segmen- tation is that the vocabulary is often ï¬ lled with many similar words that share a lexeme but have different morphology. For instance, if we apply a simple tokenization script to an English corpus, â
1603.06147#10
1603.06147#12
1603.06147
[ "1605.02688" ]
1603.06147#12
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
runâ , â runsâ , â ranâ and â runningâ are all separate entries in the vocabulary, while they clearly share the same lexeme â runâ . This prevents any ma- chine translation system, in particular neural ma- chine translation, from modelling these morpho- logical variants efï¬ ciently. More speciï¬ cally in the case of neural machine translation, each of these morphological variantsâ â runâ , â runsâ , â ranâ and â runningâ â
1603.06147#11
1603.06147#13
1603.06147
[ "1605.02688" ]
1603.06147#13
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
will be as- signed a d-dimensional word vector, leading to four independent vectors, while it is clear that if we can segment those variants into a lexeme and other morphemes, we can model them more efï¬ - ciently. For instance, we can have a d-dimensional vector for the lexeme â runâ and much smaller vectors for â sâ andâ ingâ . Each of those variants will be then a composite of the lexeme vector (shared across these variants) and morpheme vec- tors (shared across words sharing the same sufï¬ x, for example) (Botha and Blunsom, 2014). This makes use of distributed representation, which generally yields better generalization, but seems to require an optimal segmentation, which is un- fortunately almost never available. In addition to inefï¬ ciency in modelling, there are two additional negative consequences from us- ing (unsegmented) words. First, the translation system cannot generalize well to novel words, which are often mapped to a token reserved for an unknown word. This effectively ignores any meaning or structure of the word to be incorpo- rated when translating. Second, even when a lex- eme is common and frequently observed in the training corpus, its morphological variant may not be. This implies that the model sees this speciï¬ c, rare morphological variant much less and will not be able to translate it well. However, if this rare morphological variant shares a large part of its spelling with other more common words, it is de- sirable for a machine translation system to exploit those common words when translating those rare variants. Why Character-Level Translation? All of these issues can be addressed to certain extent by directly modelling characters. Although the issue of data sparsity arises in character-level transla- tion, it is elegantly addressed by using a paramet- ric approach based on recurrent neural networks instead of a non-parametric count-based approach. Furthermore, in recent years, we have learned how to build and train a recurrent neural network that can well capture long-term dependencies by using more sophisticated activation functions, such as long short-term memory (LSTM) units (Hochre- iter and Schmidhuber, 1997) and gated recurrent units (Cho et al., 2014).
1603.06147#12
1603.06147#14
1603.06147
[ "1605.02688" ]
1603.06147#14
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
Kim et al. (2015) and Ling et al. (2015a) re- cently showed that by having a neural network that converts a character sequence into a word vector, we avoid the issues from having many morpho- logical variants appearing as separate entities in a vocabulary. This is made possible by sharing the character-to-word neural network across all the unique tokens. A similar approach was applied to machine translation by Ling et al. (2015b). These recent approaches, however, still rely on the availability of a good, if not optimal, segmen- tation algorithm. Ling et al. (2015b) indeed states that â [m]uch of the prior information regarding morphology, cognates and rare word translation among others, should be incorporatedâ . It however becomes unnecessary to consider these prior information, if we use a neural net- work, be it recurrent, convolution or their combi- nation, directly on the unsegmented character se- quence. The possibility of using a sequence of un- segmented characters has been studied over many years in the ï¬ eld of deep learning. For instance, Mikolov et al. (2012) and Sutskever et al. (2011) trained a recurrent neural network language model (RNN-LM) on character sequences. The latter showed that it is possible to generate sensible text sequences by simply sampling a character at a time from this model. More recently, Zhang et al. (2015) and Xiao and Cho (2016) successfully applied a convolutional net and a convolutional- recurrent net respectively to character-level docu- ment classiï¬ cation without any explicit segmenta- tion. Gillick et al. (2015) further showed that it is possible to train a recurrent neural network on unicode bytes, instead of characters or words, to perform part-of-speech tagging and named entity recognition. These previous works suggest the possibility of applying neural networks for the task of machine translation, which is often considered a substan- tially more difï¬
1603.06147#13
1603.06147#15
1603.06147
[ "1605.02688" ]
1603.06147#15
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
cult problem compared to docu- ment classiï¬ cation and language modelling. # 3.4 Challenges and Questions There are two overlapping sets of challenges for the source and target sides. On the source side, it is unclear how to build a neural network that learns a highly nonlinear mapping from a spelling to the meaning of a sentence. On the target side, there are two challenges. The ï¬ rst challenge is the same one from the source side, as the decoder neural network needs to sum- marize what has been translated. In addition to this, the character-level modelling on the target side is more challenging, as the decoder network must be able to generate a long, coherent sequence of characters. This is a great challenge, as the size of the state space grows exponentially w.r.t. the number of symbols, and in the case of characters, it is often 300-1000 symbols long. All these challenges should ï¬
1603.06147#14
1603.06147#16
1603.06147
[ "1605.02688" ]
1603.06147#16
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
rst be framed as wa (a) Gating units (b) One-step processing Ct Ct Figure 1: Bi-scale recurrent neural network questions; whether the current recurrent neural networks, which are already widely used in neu- ral machine translation, are able to address these challenges as they are. In this paper, we aim at an- swering these questions empirically and focus on the challenges on the target side (as the target side shows both of the challenges). # 4 Character-Level Translation In this paper, we try to answer the questions posed earlier by testing two different types of recurrent neural networks on the target side (decoder). First, we test an existing recurrent neural net- work with gated recurrent units (GRUs). We call this decoder a base decoder. Second, we build a novel two-layer recurrent neural network, inspired by the gated-feedback network from Chung et al. (2015), called a bi- scale recurrent neural network. We design this network to facilitate capturing two timescales, mo- tivated by the fact that characters and words may work at two separate timescales. We choose to test these two alternatives for the following purposes. Experiments with the base decoder will clearly answer whether the existing neural network is enough to handle character-level decoding, which has not been properly answered in the context of machine translation. The alterna- tive, the bi-scale decoder, is tested in order to see whether it is possible to design a better decoder, if the answer to the ï¬ rst question is positive. # 4.1 Bi-Scale Recurrent Neural Network In this proposed bi-scale recurrent neural network, there are two sets of hidden units, h1 and h2. They contain the same number of units, i.e., dim(h1) = dim(h2).
1603.06147#15
1603.06147#17
1603.06147
[ "1605.02688" ]
1603.06147#17
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
The ï¬ rst set h1 models a fast-changing timescale (thereby, a faster layer), and h2 a slower timescale (thereby, a slower layer). For each hid- den unit, there is an associated gating unit, to which we refer by g! and g?. For the descrip- tion below, we use y_1 and c, for the previous target symbol and the context vector (see Eq. (2)), respectively. Let us start with the faster layer. The faster layer outputs two sets of activations, a normal output hi}, and its gated version h}. The activation of the faster layer is computed by h}, = tanh (w" [ev(owâ 1); hi; h?; cv) ; where hi , and hh? , are the gated activations of the faster and slower layers respectively. These gated activations are computed by hi =(1â gL) Oh}, h? = gh Ohi. In other words, the faster layerâ s activation is based on the adaptive combination of the faster and slower layersâ activations from the previous time step. Whenever the faster layer determines that it needs to reset, i.e., gh = 1, the next activation will be determined based more on the slower layerâ s activation. The faster layerâ s gating unit is computed by Bi =o (w? lev(w a): hp sh?_;ev' ) , where Ï is a sigmoid function. The slower layer also outputs two sets of acti- vations, a normal output h?, and its gated version h?. These activations are computed as follows: h? = (1â h? = (1- gh) @h?_, +g) oh}, gi) Ohi, where h?, is a candidate activation. The slower layerâ s gating unit g?, is computed by gi =o (we [(gz © hy); bh? _4; cv) . This adaptive leaky integration based on the gat- ing unit from the faster layer has a consequence that the slower layer updates its activation only when the faster layer resets. This puts a soft con- straint that the faster layer runs at a faster rate by preventing the slower layer from updating while the faster layer is processing a current chunk. The candidate activation is then computed by h?, = tanh (w"â [(gt © hy); hey; cr]) . (5) BPE BPE â
1603.06147#16
1603.06147#18
1603.06147
[ "1605.02688" ]
1603.06147#18
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
© BPE char (base) ++ BPE Char (bi-scale) Source Sentence Length GiM(GFE BPE, BPE Chr (Br seae) dae 8PE, BPE Cnar (base 75 a Word Frequency BPE BPE â © BPE char (base) ++ BPE Char (bi-scale) GiM(GFE BPE, BPE Chr (Br seae) dae 8PE, BPE Cnar (base 75 a Source Sentence Length Word Frequency Figure 2: (left) The BLEU scores on En-Cs w.r.t. the length of source sentences. (right) The difference of word negative log-probabilities be- tween the subword-level decoder and either of the character-level base or bi-scale decoder. # Ë h2 h?_ , indicates the reset activation from the pre- vious time step, similarly to what happened in the faster layer, and cy is the input from the context. According to g}, ©h}, in Eq. (5), the faster layer influences the slower layer, only when the faster layer has finished processing the current chunk and is about to reset itself (gh = 1). In other words, the slower layer does not receive any in- put from the faster layer, until the faster layer has quickly processed the current chunk, thereby run- ning at a slower rate than the faster layer does. At each time step, the final output of the pro- posed bi-scale recurrent neural network is the con- catenation of the output vectors of the faster and slower layers, i.e., [h!; h?]. This concatenated vector is used to compute the probability distribu- ion over all the symbols in the vocabulary, as in Eq. (4). See Fig. 1 for graphical illustration. # 5 Experiment Settings
1603.06147#17
1603.06147#19
1603.06147
[ "1605.02688" ]
1603.06147#19
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
For evaluation, we represent a source sentence as a sequence of subword symbols extracted by byte- pair encoding (BPE, Sennrich et al. (2015)) and a target sentence either as a sequence of BPE-based symbols or as a sequence of characters. Corpora and Preprocessing We use all avail- able parallel corpora for four language pairs from WMTâ 15: En-Cs, En-De, En-Ru and En-Fi. They consist of 12.1M, 4.5M, 2.3M and 2M sentence pairs, respectively. We tokenize each corpus using a tokenization script included in Moses.4 We only use the sentence pairs, when the source side is up to 50 subword symbols long and the target side is either up to 100 subword symbols or 500 charac- ters. We do not use any monolingual corpus. 4Although tokenization is not necessary for character- level modelling, we tokenize the all target side corpora to make comparison against word-level modelling easier. e D - n E s C - n E u R - n E i F - n E (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) (m) (n) (o) (p) Attention h2 h1 D D D D D D D D D D c r S Trgt 1 2 2 2 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char Base Base Bi-S D D Base D Base D Bi-S 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char D D Base D Base D Bi-S 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char D D Base D Base D Bi-S 2 2 2 State-of-the-art Non-Neural Approachâ BPE E P B Char Development Single Ens 20.78 21.2621.45 20.62 21.5721.88 20.88 20.31 21.2921.43 21.13 20.78 20.08 â 23.49 23.14 â 23.05 â â â
1603.06147#18
1603.06147#20
1603.06147
[ "1605.02688" ]
1603.06147#20
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
16.1216.96 15.96 17.6817.78 17.39 17.6217.93 17.43 â 18.5618.70 18.26 18.5618.87 18.39 18.3018.54 17.88 â 9.6110.02 9.24 11.1911.55 11.09 10.7311.04 10.40 â 19.21 19.52 19.83 21.17 20.53 20.53 11.92 13.72 13.39 Test1 Single 19.98 20.4720.88 19.30 21.3321.56 19.82 19.70 21.2521.47 20.62 20.19 19.39 20.60(1) 17.1617.68 16.38 19.2519.55 18.89 19.2719.53 19.15 21.00(3) 25.3025.40 24.95 26.0026.07 25.04 25.5925.76 24.57 28.70(5) â â â â Ens â 23.10 23.11 â 23.04 â â 20.79 21.95 22.15 29.26 29.37 29.26 â â â Test2 Single 21.72 22.0222.21 21.35 23.4523.91 21.72 21.30 23.0623.47 22.85 22.26 20.94 24.00(2) 14.6315.09 14.26 16.9817.17 16.81 16.8617.10 16.68 18.20(4) 19.7220.29 19.02 21.1021.24 20.14 20.7321.02 19.97 24.30(6) 8.979.17 8.88 10.9311.56 10.11 10.2410.63 9.71 12.70(7) Ens â
1603.06147#19
1603.06147#21
1603.06147
[ "1605.02688" ]
1603.06147#21
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
24.83 25.24 â 25.44 â â 17.61 18.92 18.93 22.96 23.51 23.75 11.73 13.48 13.32 Table 1: BLEU scores of the subword-level, character-level base and character-level bi-scale decoders for both single models and ensembles. The best scores among the single models per language pair are bold-faced, and those among the ensembles are underlined. When available, we report the median value, and the minimum and maximum values as a subscript and a superscript, respectively. (â ) http: //matrix.statmt.org/ as of 11 March 2016 (constrained only). (1) Freitag et al. (2014). (2, 6) Williams et al. (2015). (3, 5) Durrani et al. (2014). (4) Haddow et al. (2015). (7) Rubino et al. (2015).
1603.06147#20
1603.06147#22
1603.06147
[ "1605.02688" ]
1603.06147#22
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
the pairs other than En-Fi, we use newstest-2013 as a development set, and newstest- 2014 (Test1) and newstest-2015 (Test2) as test sets. For En-Fi, we use newsdev-2015 and newstest- 2015 as development and test sets, respectively. given a source sentence. The beam widths are 5 and 15 respectively for the subword-level and character-level decoders. They were chosen based on the translation quality on the development set. The translations are evaluated using BLEU.5 Models and Training We test three models set- tings: (1) BPEâ BPE, (2) BPEâ Char (base) and (3) BPEâ Char (bi-scale).
1603.06147#21
1603.06147#23
1603.06147
[ "1605.02688" ]
1603.06147#23
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
The latter two differ by the type of recurrent neural network we use. We use GRUs for the encoder in all the settings. We used GRUs for the decoders in the ï¬ rst two set- tings, (1) and (2), while the proposed bi-scale re- current network was used in the last setting, (3). The encoder has 512 hidden units for each direc- tion (forward and reverse), and the decoder has 1024 hidden units per layer. Multilayer Decoder and Soft-Alignment Mech- anism When the decoder is a multilayer re- current neural network (including a stacked net- work as well as the proposed bi-scale network), the decoder outputs multiple hidden vectorsâ
1603.06147#22
1603.06147#24
1603.06147
[ "1605.02688" ]
1603.06147#24
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
{h',...,hâ } for L layers, at a time. This allows an extra degree of freedom in the soft-alignment mechanism (fscore in Eq. (3)). We evaluate using alternatives, including (1) using only hâ (slower layer) and (2) using all of them (concatenated). We train each model using stochastic gradient descent with Adam (Kingma and Ba, 2014). Each update is computed using a minibatch of 128 sen- tence pairs. The norm of the gradient is clipped with a threshold 1 (Pascanu et al., 2013). Ensembles We also evaluate an ensemble of neural machine translation models and compare its performance against the state-of-the-art phrase- based translation systems on all four language pairs. We decode from an ensemble by taking the average of the output probabilities at each step. Decoding and Evaluation We use beamsearch to approximately ï¬ nd the most likely translation 5We used the multi-bleu.perl script from Moses. Two sets| of| lights so close| to one| another| eos zwet Lichtersets so nah an elnander 208 of| lights. Zwei Cie eo ets: Two sets| of| lights so close| to one| another| eos zwet Lichtersets so nah an elnander 208 of| lights. Zwei Cie eo ets: Figure 3: Alignment matrix of a test example from En-De using the BPEâ Char (bi-scale) model. # 6 Quantitative Analysis
1603.06147#23
1603.06147#25
1603.06147
[ "1605.02688" ]
1603.06147#25
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
Slower Layer for Alignment On En-De, we test which layer of the decoder should be used for computing soft-alignments. In the case of subword-level decoder, we observed no difference between choosing any of the two layers of the de- coder against using the concatenation of all the layers (Table 1 (aâ b)) On the other hand, with the character-level decoder, we noticed an improve- ment when only the slower layer (h2) was used for the soft-alignment mechanism (Table 1 (câ g)). This suggests that the soft-alignment mechanism beneï¬ ts by aligning a larger chunk in the target with a subword unit in the source, and we use only the slower layer for all the other language pairs. Single Models In Table 1, we present a com- prehensive report of the translation qualities of (1) subword-level decoder, (2) character-level base decoder and (3) character-level bi-scale decoder, for all the language pairs. We see that the both types of character-level decoder outperform the subword-level decoder for En-Cs and En-Fi quite signiï¬
1603.06147#24
1603.06147#26
1603.06147
[ "1605.02688" ]
1603.06147#26
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
cantly. On En-De, the character-level base decoder outperforms both the subword-level de- coder and the character-level bi-scale decoder, validating the effectiveness of the character-level modelling. On En-Ru, among the single mod- els, the character-level decoders outperform the subword-level decoder, but in general, we observe that all the three alternatives work comparable to each other. These results clearly suggest that it is indeed possible to do character-level translation without explicit segmentation. In fact, what we observed is that character-level translation often surpasses the translation quality of word-level translation. Of course, we note once again that our experiment is restricted to using an unsegmented character se- quence at the decoder only, and a further explo- ration toward replacing the source sentence with an unsegmented character sequence is needed. Ensembles Each ensemble was built using eight independent models. The ï¬ rst observation we make is that in all the language pairs, neural ma- chine translation performs comparably to, or often better than, the state-of-the-art non-neural transla- tion system. Furthermore, the character-level de- coders outperform the subword-level decoder in all the cases. # 7 Qualitative Analysis (1) Can the character-level decoder generate a long, coherent sentence? The translation in in characters is dramatically longer than that words, likely making it more difï¬ cult for a recur- rent neural network to generate a coherent sen- tence in characters.
1603.06147#25
1603.06147#27
1603.06147
[ "1605.02688" ]
1603.06147#27
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
This belief turned out to be false. As shown in Fig. 2 (left), there is no sig- niï¬ cant difference between the subword-level and character-level decoders, even though the lengths of the generated translations are generally 5â 10 times longer in characters. (2) Does the character-level decoder help with rare words? One advantage of character-level modelling is that it can model the composition of any character sequence, thereby better modelling rare morphological variants. We empirically con- ï¬ rm this by observing the growing gap in the aver- age negative log-probability of words between the subword-level and character-level decoders as the frequency of the words decreases. This is shown in Fig. 2 (right) and explains one potential cause behind the success of character-level decoding in our experiments (we deï¬ ne diï¬ (x, y) = x â y).
1603.06147#26
1603.06147#28
1603.06147
[ "1605.02688" ]
1603.06147#28
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
(3) Can the character-level decoder soft-align between a source word and a target charac- ter? In Fig. 3 (left), we show an example soft- alignment of a source sentence, â Two sets of light It is clear that the so close to one anotherâ . character-level translation model well captured the alignment between the source subwords and tar- get characters. We observe that the character- level decoder correctly aligns to â lightsâ and â sets ofâ when generating a German compound word â Lichtersetsâ
1603.06147#27
1603.06147#29
1603.06147
[ "1605.02688" ]
1603.06147#29
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
(see Fig. 3 (right) for the zoomed- in version). This type of behaviour happens simi- larly between â one anotherâ and â einanderâ . Of course, this does not mean that there exists an alignment between a source word and a target character. Rather, this suggests that the internal state of the character-level decoder, the base or bi- scale, well captures the meaningful chunk of char- acters, allowing the model to map it to a larger chunk (subword) in the source. (4) How fast is the decoding speed of the character-level decoder? We evaluate the de- coding speed of subword-level base, character- level base and character-level bi-scale decoders on newstest-2013 corpus (En-De) with a single Titan X GPU. The subword-level base decoder gener- ates 31.9 words per second, and the character-level base decoder and character-level bi-scale decoder generate 27.5 words per second and 25.6 words per second, respectively. Note that this is evalu- ated in an online setting, performing consecutive translation, where only one sentence is translated at a time. Translating in a batch setting could dif- fer from these results.
1603.06147#28
1603.06147#30
1603.06147
[ "1605.02688" ]
1603.06147#30
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
# 8 Conclusion In this paper, we addressed a fundamental ques- tion on whether a recently proposed neural ma- chine translation system can directly handle trans- lation at the level of characters without any word segmentation. We focused on the target side, in which a decoder was asked to generate one char- acter at a time, while soft-aligning between a tar- get character and a source subword. Our extensive experiments, on four language pairsâ En-Cs, En- De, En-Ru and En-Fiâ strongly suggest that it is indeed possible for neural machine translation to translate at the level of characters, and that it actu- ally beneï¬
1603.06147#29
1603.06147#31
1603.06147
[ "1605.02688" ]
1603.06147#31
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
ts from doing so. Our result has one limitation that we used sub- word symbols in the source side. However, this has allowed us a more ï¬ ne-grained analysis, but in the future, a setting where the source side is also represented as a character sequence must be inves- tigated. # Acknowledgments The authors would like to thank the developers of Theano (Team et al., 2016). We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Qu´ebec, Compute Canada, the Canada Research Chairs, CIFAR and Samsung. KC thanks the sup- port by Facebook, Google (Google Faculty Award 2016) and NVIDIA (GPU Center of Excellence 2015-2016). JC thanks Orhan Firat for his con- structive feedbacks. # References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly In Proceedings of learning to align and translate. the International Conference on Learning Represen- tations (ICLR). Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradi- ent descent is difï¬
1603.06147#30
1603.06147#32
1603.06147
[ "1605.02688" ]
1603.06147#32
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation
cult. IEEE Transactions on Neu- ral Networks, 5(2):157â 166. Yoshua Bengio, R´ejean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In Ad- vances in Neural Information Processing Systems, pages 932â 938. Geert Booij. 2012. The grammar of words: An intro- duction to linguistic morphology. Oxford University Press. Jan A Botha and Phil Blunsom. 2014.
1603.06147#31
1603.06147#33
1603.06147
[ "1605.02688" ]