Memory Augmented Neural Networks with Wormhole Connections
# 2. TARDIS: A Memory Augmented Neural Network

Neural network architectures with an external memory represent the memory in matrix form, such that at each time step t the model can both read from and write to the external memory. The whole content of the external memory can be seen as a generalization of the hidden state vector of a recurrent neural network. Instead of storing all the information in a single hidden state vector, our model can store it in a matrix, which has a higher capacity and a more targeted ability to substantially change or use only a small subset of the memory at each time step. The neural Turing machine (NTM) (Graves et al., 2014) is one example of such a MANN, with both reading and writing into the memory.

# 2.1 Model Outline

In this subsection, we describe the basic structure of TARDIS^1 (Temporal Automatic Relation Discovery In Sequences). TARDIS is a MANN with an external memory matrix M_t ∈ R^{k×q}, where k is the number of memory cells and q is the dimensionality of each cell. The model has an RNN controller which can read from and write to the external memory at every time step. To read from the memory, the controller generates the read weights w^r_t ∈ R^{k×1}, and the reading operation is typically achieved by computing the dot product between the read weights w^r_t and the memory M_t, resulting in the content vector r_t ∈ R^{q×1}:

    r_t = (M_t)^T w^r_t.    (1)

TARDIS uses discrete addressing, hence w^r_t is a one-hot vector and the dot product selects one of the cells of the memory matrix (Zaremba and Sutskever, 2015; Gulcehre et al., 2016). To write into the memory, the controller generates the write weights w^w_t ∈ R^{1×k}, which is also a one-hot vector, again with discrete addressing. We omit biases from our equations for simplicity in the rest of the paper.

1. The name of the model is inspired by the time machine in the popular TV series Doctor Who.
Let i be the index of the non-zero entry of the one-hot vector w^w_t; then the controller writes a linear projection of the current hidden state to the memory location M_t[i]:

    M_t[i] = W_m h_t,    (2)

where W_m ∈ R^{d_m × d_h} is the projection matrix that maps the d_h-dimensional hidden state vector to a d_m-dimensional micro-state vector, with d_h > d_m. At every time step, the hidden state h_t of the controller is also conditioned on the content r_t read from the memory. The wormhole connections are created by conditioning h_t on r_t:

    h_t = φ(x_t, h_{t-1}, r_t).    (3)
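To make Equations 1-3 concrete, the following is a minimal NumPy sketch of one read, update and write step with discrete addressing. The plain tanh controller standing in for φ, the dimensions, and the parameter names are illustrative assumptions, not the paper's implementation (TARDIS uses the gated LSTM controller of Section 2.3).

```python
import numpy as np

rng = np.random.default_rng(0)
k, q, d_h, d_m, d_x = 16, 32, 120, 32, 50   # illustrative sizes (here q == d_m)

M   = np.zeros((k, q))                      # external memory M_t
W_m = rng.normal(0.0, 0.1, size=(d_m, d_h)) # micro-state projection of Eq. (2)
# parameters of a plain tanh controller standing in for phi in Eq. (3)
W_hh = rng.normal(0.0, 0.1, size=(d_h, d_h))
W_xh = rng.normal(0.0, 0.1, size=(d_h, d_x))
W_rh = rng.normal(0.0, 0.1, size=(d_h, q))

def read(M, w_r):
    """Eq. (1): r_t = M_t^T w_t^r; with a one-hot w_r this selects one row of M."""
    return M.T @ w_r

def write(M, i, h):
    """Eq. (2): overwrite cell i with the micro-state W_m h_t."""
    M[i] = W_m @ h
    return M

def controller(x, h_prev, r):
    """Eq. (3): condition the new hidden state on the content read from memory."""
    return np.tanh(W_hh @ h_prev + W_xh @ x + W_rh @ r)

# one illustrative step: read cell 3, update the controller, write back to cell 3
w_r = np.eye(k)[3]
r_t = read(M, w_r)
h_t = controller(rng.normal(size=d_x), np.zeros(d_h), r_t)
M   = write(M, 3, h_t)
```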
As each cell in the memory is a linear projection of one of the previous hidden states, conditioning the controller's hidden state on the content read from the memory can be interpreted as creating short-cut connections across time (from the time t' when h_{t'} was written to the time t when it is read through r_t), which helps the flow of gradients across time. This is possible because of the discrete addressing used for the read and write operations. However, the main challenge for the model is to learn proper read and write mechanisms, so that it writes the hidden states of previous time steps that will be useful for future predictions and reads them at the right time step. We call this the reader/writer synchronization problem.

Instead of designing complicated addressing mechanisms to mitigate the difficulty of learning how to properly address the external memory, TARDIS side-steps the reader/writer synchronization problem with the following heuristic. For the first k time steps, the model writes the micro-states into the k cells of the memory in sequential order. Once the memory is full, the most effective strategy for preserving the information stored in the memory is to replace the memory cell that has just been read with the micro-state generated from the hidden state of the controller after it has been conditioned on that cell. If the model needs to perfectly retain the memory cell that it has just overwritten, the controller can in principle learn to do so by copying its read input to its write output (into the same memory cell). The pseudocode and the details of the memory update algorithm of TARDIS are presented in Algorithm 1.

There are two missing pieces in Algorithm 1: how are the read weights generated, and what is the structure of the controller function φ? We answer these two questions in detail in the next two sub-sections.
# 2.2 Addressing mechanism

Similar to the D-NTM, the memory matrix M_t of TARDIS has a disjoint address section A_t ∈ R^{k×a} and content section C_t ∈ R^{k×c}, with M_t = [A_t; C_t] and M_t ∈ R^{k×q} for q = c + a. However, unlike the D-NTM, the address vectors are fixed to random sparse vectors. The controller reads both the address and the content parts of the memory, but it only writes into the content section.
Algorithm 1: Pseudocode for the controller and memory update mechanism of TARDIS.

    Initialize h_0 and M_0
    for t in {1, ..., T} do
        Compute the read weights \tilde{w}^r_t <- read(h_{t-1}, M_t, x_t)
        Sample from / discretize \tilde{w}^r_t to obtain the one-hot w^r_t
        Read from the memory: r_t <- (M_t)^T w^r_t
        Compute the new controller state: h_t <- φ(x_t, h_{t-1}, r_t)
        if t <= k then
            Write into the memory: M_t[t] <- W_m h_t
        else
            Select the memory location to write into: j <- argmax_j w^r_t[j]
            Write into the memory: M_t[j] <- W_m h_t
        end if
    end for

The read weights w^r_t are generated by an MLP which uses the information coming from h_{t-1}, x_t, M_t and the usage vector u_t (described below). The MLP is parametrized as follows:

    γ_t[i] = a^T tanh(W^γ_h h_{t-1} + W^γ_x x_t + W^γ_m M_t[i] + W^γ_u u_t),    (4)
    \tilde{w}^r_t = softmax(γ_t),    (5)

where {a, W^γ_h, W^γ_x, W^γ_m, W^γ_u} are learnable parameters. The one-hot vector w^r_t is obtained either by sampling from \tilde{w}^r_t or by taking the argmax over \tilde{w}^r_t.

u_t is the usage vector, which encodes the frequency of accesses to each cell of the memory. It is computed by summing the discrete address vectors w^r_i and normalizing them:

    u_t = norm( Σ_{i=1}^{t-1} w^r_i ).    (6)

The norm(·) in Equation 6 is a simple feature-wise centering and divisive variance normalization; this normalization step makes training with the usage vectors easier. The usage vector can help the attention mechanism choose between memory cells based on how frequently each cell has been accessed. For example, if a memory cell has rarely been accessed by the controller, at the next time step the controller can learn to assign more weight to it by looking at the usage vector. In this way, the controller can learn an LRU access mechanism (Santoro et al., 2016; Gulcehre et al., 2016).
Further, in order to prevent the model from learning deficient addressing mechanisms (for example, repeatedly reading the same memory cell, which would not increase the effective memory capacity of the model), we decrease the probability of the most recently read memory location by subtracting 100 from its logit in γ_t.
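The sketch below ties Sections 2.1 and 2.2 together: it computes read logits from the controller state, the input, the memory contents and the usage vector (Equations 4-6), masks the previously read cell by subtracting 100 from its logit, and then runs the write heuristic of Algorithm 1. All shapes, initializations and the plain tanh controller (again standing in for the gated LSTM of Section 2.3) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
k, q, d_h, d_x = 16, 32, 120, 50            # illustrative sizes

def mlp_logits(h_prev, x, M, u, params):
    # Eq. (4): gamma_t[i] = a^T tanh(W_h h_{t-1} + W_x x_t + W_m M_t[i] + W_u u_t)
    a, W_h, W_x, W_mem, W_u = params
    pre = np.tanh(W_h @ h_prev + W_x @ x + M @ W_mem.T + W_u @ u)
    return pre @ a                           # shape (k,)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def norm(v):
    # feature-wise centering and divisive variance normalization (Eq. 6)
    return (v - v.mean()) / (v.std() + 1e-6)

# parameters (random, for illustration only)
p = (rng.normal(0, .1, d_h),
     rng.normal(0, .1, (d_h, d_h)), rng.normal(0, .1, (d_h, d_x)),
     rng.normal(0, .1, (d_h, q)),  rng.normal(0, .1, (d_h, k)))
W_m = rng.normal(0, .1, (q, d_h))            # micro-state projection of Eq. (2)
W_hh, W_xh, W_rh = (rng.normal(0, .1, (d_h, d_h)),
                    rng.normal(0, .1, (d_h, d_x)), rng.normal(0, .1, (d_h, q)))
controller = lambda x, h, r: np.tanh(W_hh @ h + W_xh @ x + W_rh @ r)  # stand-in for phi

M, h = np.zeros((k, q)), np.zeros(d_h)
read_counts, last_read, T = np.zeros(k), None, 40

for t, x in enumerate(rng.normal(size=(T, d_x)), start=1):
    u = norm(read_counts)
    logits = mlp_logits(h, x, M, u, p)
    if last_read is not None:
        logits[last_read] -= 100.0           # discourage re-reading the same cell
    w_r = softmax(logits)
    j = int(np.argmax(w_r))                  # discrete read address (argmax variant)
    r = M[j]                                 # Eq. (1) with a one-hot weight
    h = controller(x, h, r)
    # Algorithm 1: fill cells sequentially for t <= k, then overwrite the read cell
    write_idx = t - 1 if t <= k else j
    M[write_idx] = W_m @ h                   # Eq. (2)
    read_counts[j] += 1.0
    last_read = j
```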
# 2.3 TARDIS Controller

We use an LSTM controller whose gates are modified to take into account the content r_t of the cell read from the memory:

    [f_t; i_t; o_t] = [sigm; sigm; sigm](W_h h_{t-1} + W_x x_t + W_r r_t),    (7)

where f_t, i_t, and o_t are the forget gate, input gate, and output gate respectively. α_t and β_t are scalar RESET gates which control the magnitude of the information flowing from the memory and from the previous hidden state into the LSTM cell c_t. By controlling the flow of information into the LSTM cell, these gates allow the model to store sub-sequences or chunks of the sequence in the memory instead of the entire context. We use the Gumbel sigmoid (Maddison et al., 2016; Jang et al., 2016) for α_t and β_t because its behavior is close to binary:

    [α_t; β_t] = [gumbel-sigmoid; gumbel-sigmoid](W'_h h_{t-1} + W'_x x_t + W'_r r_t).    (8)

Empirically, we find the gumbel-sigmoid of Equation 8 easier to train than the regular sigmoid. The temperature of the Gumbel sigmoid is fixed to 0.3 in all our experiments. The cell c_t of the LSTM controller is computed according to Equation 9, using the α_t and β_t RESET gates:

    \tilde{c}_t = tanh(β_t W^g_h h_{t-1} + W^g_x x_t + α_t W^g_r r_t),
    c_t = f_t c_{t-1} + i_t \tilde{c}_t.    (9)

The hidden state of the LSTM controller is computed as follows:

    h_t = o_t tanh(c_t).    (10)
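A minimal sketch of one controller step with the RESET gates is given below. It follows Equations 7-10 under assumed parameter shapes, and approximates the Gumbel sigmoid by adding logistic noise to the gate logits before a temperature-scaled sigmoid; the exact gate parametrization and noise handling of the actual model may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
d_h, d_x, q = 120, 50, 32
sigm = lambda z: 1.0 / (1.0 + np.exp(-z))

def gumbel_sigmoid(logits, temperature=0.3):
    # approximately binary gate: add logistic noise, then apply a sharp sigmoid
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(logits))
    noise = np.log(u) - np.log(1.0 - u)
    return sigm((logits + noise) / temperature)

def make(shape):
    return rng.normal(0.0, 0.1, size=shape)

# gate, reset-gate and cell parameters (illustrative)
W_h, W_x, W_r = make((3 * d_h, d_h)), make((3 * d_h, d_x)), make((3 * d_h, q))
Wp_h, Wp_x, Wp_r = make((2, d_h)), make((2, d_x)), make((2, q))
Wg_h, Wg_x, Wg_r = make((d_h, d_h)), make((d_h, d_x)), make((d_h, q))

def controller_step(x, h_prev, c_prev, r):
    gates = sigm(W_h @ h_prev + W_x @ x + W_r @ r)                     # Eq. (7)
    f, i, o = np.split(gates, 3)
    alpha, beta = gumbel_sigmoid(Wp_h @ h_prev + Wp_x @ x + Wp_r @ r)  # Eq. (8)
    c_tilde = np.tanh(beta * (Wg_h @ h_prev) + Wg_x @ x
                      + alpha * (Wg_r @ r))                            # Eq. (9)
    c = f * c_prev + i * c_tilde
    h = o * np.tanh(c)                                                 # Eq. (10)
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
h, c = controller_step(rng.normal(size=d_x), h, c, rng.normal(size=q))
```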
In Figure 1, we illustrate the interaction between the controller and the memory, with the various heads and components of the controller.

# 2.4 Micro-states and Long-term Dependencies

A micro-state of the LSTM for a particular time step is a summary of the information that has been stored in the LSTM controller up to that point. By attending over the cells of the memory that contain previous micro-states of the LSTM, the model can explicitly learn to restore information from its own past. The controller can learn to represent high-level temporal abstractions by creating wormhole connections through the memory, as illustrated in Figure 2.
Figure 1: At each time step, the controller takes x_t, the memory cell r_t that has been read, and the hidden state h_{t-1} of the previous timestep. It then generates α_t, which controls the contribution of r_t to the internal dynamics of the new controller state h_t (we omit β_t in this visualization). Once the memory M_t becomes full, the controller generates the discrete addressing weights w^r_t, which are used both to read from and to write into the memory. To predict the target y_t, the model uses both h_t and r_t. (The figure's legend distinguishes MLP outputs, read/write outputs, observed inputs, output predictions, the controller, and general, multiplicative, and affine connections.)
In the example of Figure 2, the model takes the token x_0 at the first timestep and stores its representation in the first memory cell, with address a_0. At the second timestep, the controller takes x_1 as input and writes into the second memory cell, with address a_1; furthermore, the β_1 gate blocks the connection from h_1 to h_2. At the third timestep, the controller starts reading: it receives x_2 as input and reads the first memory cell, where the micro-state of h_0 was stored. After reading, it computes the hidden state h_2 and writes the micro-state of h_2 into the first memory cell. The length of the path passing through the micro-states of h_0 and h_2 is 1; the wormhole connection from h_2 to h_0 thus skips a timestep.

A regular single-layer RNN has a fixed linear-chain graphical structure when considering only the connections through its recurrent states along the temporal axis. TARDIS is more flexible in this respect: it can learn directed graphs with more diverse structures using the wormhole connections and the RESET gates. The directed graph that TARDIS can learn through its recurrent states has degree at most 4 at each vertex (at most 2 incoming and 2 outgoing edges), and its structure depends on the number of cells k that can be stored in the memory.

In this work, we focus on a variation of TARDIS in which the controller maintains a fixed-size external memory.
However, as in (Cheng et al., 2016), it is possible to use a memory that grows with the length of the input sequence, but that would not scale and can be more difficult to train with discrete addressing.

Figure 2: TARDIS's controller can learn to represent the dependencies among the input tokens by choosing which cells to read and write, creating wormhole connections. x_t denotes the input to the controller at timestep t and h_t the hidden state of the controller RNN. (The figure shows the memory states M_1, ..., M_5, the read and write operations at each step, and the resulting dependencies among the input tokens.)

# 3. Training TARDIS

In this section, we explain how to train TARDIS as a language model. We use language modeling as an example application; however, we would like to highlight that TARDIS can also be applied to complex sequence-to-sequence learning tasks.

Consider N training examples, where each example is a sequence of length T. At every time-step t, the model receives the input x_t ∈ {0, 1}^{|V|}, a one-hot vector of size equal to the vocabulary size |V|, and should produce the output y_t ∈ {0, 1}^{|V|}, also a one-hot vector of size |V|. The output of the model for the i-th example and t-th time-step is computed as follows:

    o^{(i)}_t = softmax(W_o g(h^{(i)}_t, r^{(i)}_t)),    (11)

where W_o is a learnable parameter matrix and g(h_t, r_t) is a single-layer MLP which combines h_t and r_t, as in the deep fusion of (Pascanu et al., 2013a). The superscript i denotes that the variable corresponds to the i-th sample of the training set. The task loss is the categorical cross-entropy between the targets and the model outputs:
    L_model(θ) = - (1/N) Σ_{i=1}^{N} Σ_{t=1}^{T} Σ_{k=1}^{|V|} y^{(i)}_t[k] log(o^{(i)}_t[k]).    (12)

However, the discrete decisions taken for memory access at every time-step make the model non-differentiable, and hence we need to rely on approximate methods of computing gradients with respect to the discrete address vectors. In this paper we explore two such approaches: REINFORCE (Williams, 1992) and the straight-through estimator (Bengio et al., 2013).
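As a concrete reading of Equations 11 and 12, the snippet below computes the fused output distribution and the per-sequence cross-entropy loss. The single-layer MLP g and the parameter shapes are illustrative assumptions; only the overall structure (fuse h_t and r_t, apply a softmax, sum the negative log-likelihoods) is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
d_h, q, d_g, V, T = 120, 32, 100, 27, 10   # illustrative sizes (V = vocabulary size)

W_g = rng.normal(0, 0.1, (d_g, d_h + q))   # single-layer MLP g(h_t, r_t)
W_o = rng.normal(0, 0.1, (V, d_g))         # output projection of Eq. (11)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def output_dist(h, r):
    g = np.tanh(W_g @ np.concatenate([h, r]))   # deep-fusion style combination
    return softmax(W_o @ g)                     # Eq. (11)

# Eq. (12): categorical cross-entropy over one example sequence
hs = rng.normal(size=(T, d_h))
rs = rng.normal(size=(T, q))
targets = rng.integers(0, V, size=T)            # indices of the one-hot targets y_t
loss = -sum(np.log(output_dist(h, r)[k]) for h, r, k in zip(hs, rs, targets))
```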
# 3.1 Using REINFORCE

REINFORCE is a likelihood-ratio method which provides a convenient and simple way of estimating the gradients of stochastic actions. In this paper, we focus on the application of REINFORCE to sequential prediction tasks, such as language modelling. For example i, let R(w^{r(i)}_j) be the reward obtained for the read action taken at timestep j. We are interested in maximizing the expected return of the whole episode, defined below:

    J(θ) = E_{w^r}[ R(w^{r(i)}) ].    (13)

Ideally we would like to compute the gradients of Equation 13 exactly; however, computing the gradient of the expectation is not feasible. We therefore use a Monte-Carlo approximation and compute the gradients using REINFORCE for the sequential prediction task, which can be written as in Equation 14.
    ∇_θ J(θ) ≈ (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} (R(w^{r(i)}) - b_t) ∇_θ log \tilde{w}^{r(i)}_t,    (14)

where b_t is the reward baseline. However, we can further assume that future actions do not depend on the past rewards of the episode/trajectory, and further reduce the variance of REINFORCE as in Equation 15:

    ∇_θ J(θ) ≈ (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} Σ_{j=t}^{T} (R(w^{r(i)}_j) - b_j) ∇_θ log \tilde{w}^{r(i)}_t.    (15)
In our preliminary experiments, we found that training the model is easier with discounted returns, instead of the centered undiscounted return:

    ∇_θ J(θ) ≈ (1/N) Σ_{i=1}^{N} Σ_{t=0}^{T} Σ_{j=t}^{T} γ^{j-t} (R(w^{r(i)}_j) - b_j) ∇_θ log \tilde{w}^{r(i)}_t,    (16)

with discount factor γ.

Training REINFORCE with an Auxiliary Cost. Training models with REINFORCE can be difficult due to the variance it introduces into the gradients. In recent years, researchers have developed several tricks to mitigate the effect of this high variance. As proposed by (Mnih and Gregor, 2014), we also use variance normalization on the REINFORCE gradients. In this reward structure, R(w^{r(i)}_j) is the log-likelihood of the prediction at timestep j. Our initial experiments showed that REINFORCE with this reward often tends to under-utilize the memory and to rely mainly on the internal memory of the LSTM controller. In particular, at the beginning of training the model can decrease the loss just by relying on the controller's own memory, which can cause REINFORCE to increase the log-likelihood of essentially random actions. To deal with this issue, instead of using the log-likelihood of the model as the reward, we introduce an auxiliary cost to use as the reward R'.
R' is computed from predictions that are based only on the memory cell r_t read by the controller, and not on the hidden state of the controller:

    R'(w^{r(i)}_j) = Σ_k y^{(i)}_j[k] log(softmax(W^R r^{(i)}_j + W^x x^{(i)}_j)[k]),    (17)

with W^x ∈ R^{d_o × d_x}, where d_o is the dimensionality of the output and d_x is the dimensionality of the input of the model (for language modelling both d_o and d_x would be |V|). We do not backpropagate through r^{(i)}_j when computing this auxiliary cost.
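The following sketch shows how such a REINFORCE update can be assembled in practice: per-step rewards (for example the auxiliary log-likelihoods of Equation 17, supplied as plain numbers), a baseline, reward-to-go sums as in Equations 15-16, and the resulting scalar weights for the log-probabilities of the sampled read addresses. It is a schematic illustration under simplifying assumptions, not the exact training code; in particular the baseline and the variance normalization are simplified.

```python
import numpy as np

def reinforce_weights(rewards, baseline=None, gamma=1.0):
    """Return the scalar coefficient for each grad log p(action_t).

    rewards:  per-timestep rewards R(w^r_j), e.g. the auxiliary log-likelihoods
              of Eq. (17).
    baseline: per-timestep baseline b_j (defaults to the mean reward).
    gamma:    discount factor; gamma < 1 gives the discounted returns of Eq. (16).
    """
    rewards = np.asarray(rewards, dtype=float)
    T = len(rewards)
    if baseline is None:
        baseline = np.full(T, rewards.mean())
    centered = rewards - baseline
    # reward-to-go: coefficient for action t sums future centered rewards (Eq. 15/16)
    coeffs = np.array([sum(gamma ** (j - t) * centered[j] for j in range(t, T))
                       for t in range(T)])
    # simple variance normalization of the coefficients (in the spirit of
    # Mnih and Gregor, 2014)
    coeffs = coeffs / (coeffs.std() + 1e-8)
    return coeffs

# usage: multiply coeffs[t] with grad log p(w^r_t) before the parameter update
coeffs = reinforce_weights([-2.3, -1.7, -0.9, -1.2], gamma=0.95)
```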
# 3.2 Using Gumbel Softmax

Training with REINFORCE can be challenging due to the high variance of the gradients; the gumbel-softmax with a straight-through estimator provides a good alternative to REINFORCE that tackles the variance issue. Unlike (Maddison et al., 2016; Jang et al., 2016), instead of annealing the temperature or fixing it, our model learns the inverse temperature with an MLP τ(h_t) that has a single scalar output conditioned on the hidden state of the controller:

    τ(h_t) = softplus(w^τ h_t + b^τ) + 1,    (18)
    gumbel-softmax(γ_t[i]) = softmax((γ_t[i] + ξ) τ(h_t)),    (19)

where ξ denotes the sampled Gumbel noise.
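A compact sketch of this addressing variant is given below: Gumbel noise is added to the read logits, the learned inverse temperature sharpens the softmax, and a straight-through trick exposes a one-hot read vector in the forward pass while the soft weights carry the gradients. The softplus-based temperature head follows Equations 18-19; the shapes and the way the straight-through output would be wired into an autodiff framework are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d_h, k = 120, 16
w_tau, b_tau = rng.normal(0, 0.1, d_h), 0.0

def softplus(z):
    return np.log1p(np.exp(z))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gumbel_softmax_read(logits, h):
    tau_inv = softplus(w_tau @ h + b_tau) + 1.0            # Eq. (18)
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=k)))   # Gumbel(0, 1) noise
    soft = softmax((logits + g) * tau_inv)                 # Eq. (19)
    hard = np.eye(k)[soft.argmax()]                        # discrete one-hot address
    # straight-through: forward uses `hard`; in an autodiff framework one would
    # return hard + (soft - stop_gradient(soft)) so gradients flow through `soft`.
    return hard, soft

hard, soft = gumbel_softmax_read(rng.normal(size=k), rng.normal(size=d_h))
```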
We replace the softmax in Equation 5 with the gumbel-softmax defined above. During the forward pass we use the discrete, one-hot w^r_t, while the soft weights \tilde{w}^r_t are used for gradient computation, i.e. a straight-through estimator. Learning the temperature of the gumbel-softmax reduces the burden of performing an extensive hyper-parameter search for the temperature.

# 4. Related Work

The Neural Turing Machine (NTM) (Graves et al., 2014) is the class of architectures most closely related to our model. NTMs have proven successful at generalizing to sequences longer than those they were trained on, and they have been shown to be more effective than gated models such as LSTMs at solving algorithmic tasks. However, the NTM has limitations that stem from some of its design choices. Because the controller lacks precise knowledge of what is stored where, the contents of the memory can overlap. These memory augmented models are also known to be complicated, which makes them difficult to implement and train. Moreover, the controller has no information about the sequence of operations or statistics such as the frequency of read and write accesses to the memory.
TARDIS tries to address these issues. Gulcehre et al. (2016) proposed a variant of the NTM called the dynamic NTM (D-NTM), which has learnable location-based addressing. The D-NTM can be used with both continuous and discrete addressing. The discrete D-NTM is related to TARDIS in the sense that both models use discrete addressing for all memory operations. However, the discrete D-NTM expects the controller to learn to read and write and also to learn reader/writer synchronization.
TARDIS does not have this synchronization problem, since the reader and the writer are tied. Rae et al. (2016) proposed the sparse access memory (SAM) mechanism for NTMs, which can be seen as a hybrid of continuous and discrete addressing: SAM uses continuous addressing over a selected set of top-K relevant memory cells. Recently, Graves et al. (2016) proposed the differentiable neural computer (DNC), a successor of the NTM. Rocktäschel et al. (2015) and Cheng et al. (2016) proposed models that generate weights to attend over the previous hidden states of the RNN; however, since those models attend over the whole context, the computation of the attention can be inefficient.
Grefenstette et al. (2015) proposed a model that can store information in a data structure, such as a stack, a deque or a queue, in a differentiable manner. Grave et al. (2016) proposed a cache-based memory representation that stores the last k states of the RNN in the memory; similar to traditional cache-based models, the model learns to choose a state from the memory for the prediction in language modeling tasks (Kuhn and De Mori, 1990).
# 5. Gradient Flow through the External Memory

In this section, we analyze the flow of the gradients through the external memory and investigate its effectiveness in dealing with the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994). First, we describe the vanishing gradient problem in an RNN, and then describe how an external memory model can deal with it. For simplicity, we focus on vanilla RNNs throughout the analysis, but the same analysis can be extended to LSTMs. In our analysis, we also assume that the weights of the read/write heads are discrete. We will show that the rate at which gradients vanish through time for a memory-augmented recurrent neural network is much smaller than that of a regular vanilla recurrent neural network.

Consider an RNN which at each timestep t takes an input x_t ∈ R^d and produces an output y_t ∈ R^o.
The hidden state of the RNN can be written as

    z_t = W h_{t-1} + U x_t,    (20)
    h_t = f(z_t),    (21)

where W and U are the recurrent and input weights of the RNN respectively, and f(·) is a non-linear activation function. Let L = Σ_t L_t be the loss function that the RNN is trying to minimize. Given an input sequence of length T, we can write the derivative of the loss L with respect to the parameters θ as

    ∂L/∂θ = Σ_{1 ≤ t_1 ≤ T} ∂L_{t_1}/∂θ = Σ_{1 ≤ t_1 ≤ T} Σ_{1 ≤ t_0 ≤ t_1} (∂L_{t_1}/∂h_{t_1}) (∂h_{t_1}/∂h_{t_0}) (∂h_{t_0}/∂θ).    (22)
The multiplication of many Jacobians of the form ∂h_t/∂h_{t-1} to obtain ∂h_{t_1}/∂h_{t_0} is the main cause of vanishing and exploding gradients (Pascanu et al., 2013b):

    ∂h_{t_1}/∂h_{t_0} = Π_{t_0 < t ≤ t_1} ∂h_t/∂h_{t-1} = Π_{t_0 < t ≤ t_1} diag[f'(z_t)] W.    (23)

Let us assume that the singular values of a matrix M are ordered as σ_1(M) ≥ σ_2(M) ≥ · · · ≥ σ_n(M). Let α be an upper bound on the singular values of W, i.e. α ≥ σ_1(W); then the norm of the Jacobian satisfies (Zilly et al., 2016)

    ||∂h_t/∂h_{t-1}|| ≤ ||W|| ||diag[f'(z_t)]|| ≤ α σ_1(diag[f'(z_t)]).    (24)

Pascanu et al. (2013b) showed that for ||∂h_t/∂h_{t-1}|| ≤ σ_1(∂h_t/∂h_{t-1}) ≤ η < 1, the following inequality holds:

    || Π_{t_0 ≤ t ≤ t_1} ∂h_t/∂h_{t-1} || ≤ Π_{t_0 ≤ t ≤ t_1} η ≤ η^{t_1 - t_0}.    (25)

Since η < 1, this bound on the norm of the product of Jacobians decays exponentially in t_1 - t_0, and the norm of the gradients vanishes exponentially fast.

Now consider a MANN where the contents of the memory are linear projections of the previous hidden states, as described in Equation 2, and assume that both the read and the write operations use discrete addressing. Let the content read from the memory at time step t correspond to some memory location i:

    r_t = M_t[i] = A h_{i_t},    (26)

where h_{i_t} is the hidden state of the controller at some previous timestep i_t. The hidden state of the controller in the external memory model can then be written as

    z_t = W h_{t-1} + V r_t + U x_t,
    h_t = f(z_t).    (27)
If the controller reads M_t[i] at time step t and its memory content is A h_{i_t} as described above, then the Jacobian associated with Equation 27 can be written as

    ∂h_{t_1}/∂h_{t_0} = Π_{t_0 < t ≤ t_1} diag[f'(z_t)] W
                        + Σ_{k=t_0}^{t_1-1} ( Π_{k < t* ≤ t_1} diag[f'(z_{t*})] W ) diag[f'(z_k)] V A ∂h_{i_k}/∂h_{t_0}    (28)
                      = Q_{t_1 t_0} + R_{t_1 t_0},    (29)
where Q_{t_1 t_0} and R_{t_1 t_0} are defined as

    Q_{t_1 t_0} = Π_{t_0 < t ≤ t_1} diag[f'(z_t)] W,    (30)
    R_{t_1 t_0} = Σ_{k=t_0}^{t_1-1} ( Π_{k < t* ≤ t_1} diag[f'(z_{t*})] W ) diag[f'(z_k)] V A ∂h_{i_k}/∂h_{t_0}.    (31)

As shown in Equation 29, the Jacobian of the MANN can be rewritten as the sum of two matrices, Q_{t_1 t_0} and R_{t_1 t_0}. The gradients flowing through R_{t_1 t_0} do not necessarily vanish through time, because it is a sum of Jacobians computed over shorter paths. The norm of the Jacobian can be lower bounded as follows, using the Minkowski inequality:

    ||∂h_{t_1}/∂h_{t_0}|| = ||Q_{t_1 t_0} + R_{t_1 t_0}||    (32)
                          ≥ ||R_{t_1 t_0}|| - ||Q_{t_1 t_0}||.    (33)

Assuming the dependency is very long, ||Q_{t_1 t_0}|| vanishes to 0, and we then have

    ||Q_{t_1 t_0} + R_{t_1 t_0}|| ≥ ||R_{t_1 t_0}||.    (34)

As one can see, the rate at which the gradients vanish through time depends on the length of the paths passing through R_{t_1 t_0}, which is typically smaller than the length of the path passing through Q_{t_1 t_0}. Thus the gradients vanish at a lower rate than in an RNN. In particular, the rate strictly depends on the length of the shortest paths from t_1 to t_0, because for long enough dependencies the gradients through the longer paths would still vanish. We can also derive an upper bound for the norm of the Jacobian:

    ||∂h_{t_1}/∂h_{t_0}|| = ||Q_{t_1 t_0} + R_{t_1 t_0}||    (35)
                          ≤ σ_1(Q_{t_1 t_0} + R_{t_1 t_0}).    (36)

Using the result from (Loyka, 2015), we can lower bound σ_1(Q_{t_1 t_0} + R_{t_1 t_0}) as follows:
    σ_1(Q_{t_1 t_0} + R_{t_1 t_0}) ≥ |σ_1(Q_{t_1 t_0}) - σ_1(R_{t_1 t_0})|.    (37)

For long sequences we know that σ_1(Q_{t_1 t_0}) goes to 0 (see Equation 25). Hence,

    σ_1(Q_{t_1 t_0} + R_{t_1 t_0}) ≥ σ_1(R_{t_1 t_0}).    (38)

The rate at which σ_1(R_{t_1 t_0}) reaches zero is strictly smaller than the rate at which σ_1(Q_{t_1 t_0}) reaches zero, and with ideal memory access it does not reach zero at all. Hence, unlike for vanilla RNNs, Equation 38 states that the upper bound on the norm of the Jacobian does not go to zero for a MANN with ideal memory access.
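The qualitative effect of the decomposition above can be checked numerically on a toy example. The sketch below compares the spectral norm of the plain recurrent term Q_{t_1 t_0} with that of Q_{t_1 t_0} + R_{t_1 t_0} when a single wormhole connection stores a projection of h_{t_0} and is read a few steps before t_1; all matrices are random and contractive, so this only illustrates that the memory path decays with its own, much shorter, length, and says nothing about trained models.

```python
import numpy as np

rng = np.random.default_rng(5)
d, T = 20, 40                      # hidden size and path length t1 - t0

def contractive(scale=0.9):
    M = rng.normal(size=(d, d))
    return scale * M / np.linalg.norm(M, 2)   # spectral norm equals `scale`

W, V, A = contractive(0.9), contractive(0.9), contractive(0.9)
D = [np.diag(rng.uniform(0.2, 1.0, d)) for _ in range(T)]  # stand-ins for diag[f'(z_t)]

# Q_{t1 t0}: the full-length recurrent product (Eq. 30)
Q = np.eye(d)
for Dt in D:
    Q = Dt @ W @ Q

# a single wormhole term of R_{t1 t0}: the cell read 3 steps before t1 stores a
# projection of h_{t0}, so only 3 recurrent factors multiply V A (cf. Eq. 31)
R = np.eye(d)
for Dt in D[-3:]:
    R = Dt @ W @ R
R = R @ V @ A

print("sigma_1(Q)     =", np.linalg.norm(Q, 2))      # vanishes with T
print("sigma_1(Q + R) =", np.linalg.norm(Q + R, 2))  # dominated by the short path
```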
Theorem 1 Consider a memory augmented neural network with T memory cells for a sequence of length T, in which each hidden state of the controller is stored in a different cell of the memory. If the prediction at time step t_1 has a long-term dependency only on t_0, the prediction at t_1 is independent of the tokens appearing before t_0, and the memory reading mechanism is perfect, then the model will not suffer from vanishing gradients when we back-propagate from t_1 to t_0.^2

2. Let us note that, unlike a Markovian n-gram assumption, here we assume that the n can be different at each time step.

Proof: If the input sequence has its longest dependency from t_1 to t_0, we are only interested in the gradients propagating from t_1 to t_0, i.e. the Jacobian ∂h_{t_1}/∂h_{t_0}. If the controller has learned a perfect reading mechanism, at time step t_1 it reads the memory cell where the hidden state of the RNN at time step t_0 is stored. Thus, following the Jacobians defined in Equation 29, we can rewrite the Jacobian as

    ∂h_{t_1}/∂h_{t_0} = Π_{t_0 < t ≤ t_1} diag[f'(z_t)] W
                        + Σ_{k=t_0}^{t_1-1} ( Π_{k < t* ≤ t_1} diag[f'(z_{t*})] W ) diag[f'(z_k)] V A ∂h_{i_k}/∂h_{t_0}
                        + diag[f'(z_{t_1})] V A ∂h_{t_0}/∂h_{t_0}.    (39)

In Equation 39, the first two terms might vanish as t_1 - t_0 grows.
However, the singular values of the third term do not change as t_1 - t_0 grows. As a result, the gradients propagated from t_1 to t_0 will not necessarily vanish through time. Note that in order to obtain stable dynamics for the network, the initialization of the matrices V and A is important.

This analysis highlights the fact that an external memory model with an optimal read/write mechanism can handle long-range dependencies much better than an RNN. However, this applies only when discrete addressing is used for the read/write operations. Both the NTM and the D-NTM still have to learn how to read and write from scratch, which is a challenging optimization problem; for TARDIS, tying the read and write operations makes learning much simpler for the model. In particular, the result of Theorem 1 points to the importance of designing better attention mechanisms over the memory. The controller of a MANN may not learn to use the memory efficiently: for example, some cells of the memory may remain empty or may never be read, and the controller may overwrite memory cells that have not been read, in which case the information stored in those cells is lost completely.
TARDIS, however, avoids most of these issues by the construction of its algorithm.

# 6. On the Length of the Paths Through the Wormhole Connections

As we discussed in Section 5, the rate at which the gradients vanish for a MANN depends on the length of the paths passing along the wormhole connections.
In this section, we analyse those lengths for untrained models, which assign uniform probability to reading or writing each memory cell. This gives a better idea of how each untrained model uses the memory at the beginning of training.

In TARDIS, a wormhole connection is created by reading a memory cell and writing into the same cell. For example, in Figure 2, while the actual path from h_4 to h_0 has length 4, memory cell a_0 creates a shorter path of length 2 (h_0 → h_2 → h_4).
We call the length of the actual path T, and the length of the shorter path created by a wormhole connection T_mem. Consider a TARDIS model which has k cells in its memory. If TARDIS accesses each memory cell uniformly at random, then the probability of accessing a given cell i is p[i] = 1/k. The expected length of the shorter path created by wormhole connections (T_mem) is proportional to the number of reads and writes into a memory cell. For TARDIS with a reader choosing a memory cell uniformly at random, this would be T_mem = Σ_{t=1}^{T} p[i] - 1 = T/k - 1 at the end of the sequence. We verify this result by simulating the read and write heads of TARDIS, as in Figure 3 (a).
Figure 3: Expected path lengths in the memory cells for a sequence of length 200 and a memory of size 50, averaged over 100 simulations. (a) shows the results for TARDIS and (b) the simulation for a MANN with uniformly random read and write heads.

Now consider a MANN with separate read and write heads, each accessing the memory in a discrete and uniformly random fashion; let us call it uMANN. We will compute the expected length of the shorter path created by wormhole connections (T_mem) for uMANN. w^r_t and w^w_t are the read and write head weights, each sampled from a multinomial distribution with uniform probability over the memory cells. Let j_t be the index of the memory cell read at timestep t. For any memory cell i, len(·), defined below, is a recursive function that computes the length of the path created by wormhole connections in that cell:

    len(M_t[i], i, j_t) = { len(M_{t-1}[i], i, j_t) + 1   if w^w_t[i] = 1
                            len(M_{t-1}[i], i, j_t)        if w^w_t[i] = 0.    (40)

By induction, E_{i,j_t}[len(M_t[i], i, j_t)] will be T/k - 1 for every memory cell (the proof assumes that for t ≤ k, while the memory is being filled, the length of every path stored in the memory is 0).
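The expectation above is easy to check with a short Monte Carlo simulation in the spirit of Figure 3. The sketch below counts, for uniformly random reads in a TARDIS-style model (where after the initial sequential fill the write goes to the cell that was just read), how many hops accumulate in each cell; the averaged count approaches T/k - 1. The exact bookkeeping of the paper's simulations may differ; this is only an illustrative reproduction of the trend.

```python
import numpy as np

def tardis_wormhole_lengths(T=200, k=50, n_sim=100, seed=0):
    rng = np.random.default_rng(seed)
    lengths = np.zeros((n_sim, k))
    for s in range(n_sim):
        hops = np.zeros(k)               # wormhole path length stored in each cell
        for t in range(1, T + 1):
            j = rng.integers(k)          # uniformly random read address
            if t <= k:
                hops[t - 1] = 0.0        # initial sequential fill, no wormhole yet
            else:
                hops[j] += 1.0           # read cell j, then overwrite it: one more hop
        lengths[s] = hops
    return lengths.mean()                # expected wormhole path length per cell

print(tardis_wormhole_lengths())         # roughly T/k - 1 = 3 for T=200, k=50
```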
We also ran simulations to compute the expected path length in a memory cell of uMANN, as shown in Figure 3 (b). This analysis shows that while TARDIS with a uniform read head maintains the same expected length of the shorter paths created by wormhole connections as uMANN, it completely avoids the reader/writer synchronization problem.

In expectation, σ_1(R_{t_1 t_0}) will decay proportionally to T_mem, whereas σ_1(Q_{t_1 t_0}) will decay proportionally^3 to T. With ideal memory access, the rate at which σ_1(R_{t_1 t_0}) reaches zero is strictly smaller than the rate at which σ_1(Q_{t_1 t_0}) reaches zero. Hence, as per Equation 38, the upper bound on the norm of the Jacobian vanishes at a much smaller rate. However, this result assumes that the dependencies on which the prediction relies are accessible through the memory cell that the controller reads.

3. Exponentially, when Equation 25 holds.

Figure 4: Assuming that the prediction at t_1 depends on t_0, a wormhole connection can shorten the path by creating a connection from t_1 - m to t_0 + n. A wormhole connection may not directly create a connection from t_1 to t_0, but it can create shorter paths along which the gradients can flow without vanishing. In this figure, we consider the case where a wormhole connection is created from t_1 - m to t_0 + n; this connection skips all the tokens between t_1 - m and t_0 + n.
In the more general case, consider a MANN with k ≥ T. The writer simply fills in the memory cells in sequential order and the reader chooses a memory cell uniformly at random; let us call this model urMANN. Assume there is a dependency between two timesteps t_0 and t_1, as shown in Figure 4. If t_0 is drawn uniformly between 0 and t_1 - 1, then with probability 0.5 the read address invoked at time t_1 is greater than or equal to t_0 (by symmetry). In that case, the expected shortest path length through that wormhole connection would be (t_1 - t_0)/2, which still does not scale well. If the reader is very well trained, it could pick exactly t_0 and the path length would be 1.

Let us consider all paths of length at most k + 1 of the form shown in Figure 4, and let n ≤ k/2 and m ≤ k/2. Then the shortest path from t_0 to t_1 has length n + m + 1 ≤ k + 1, using a wormhole connection that connects the state at t_0 + n with the state at t_1 - m. There are O(k^2) such paths that can be realized, but we leave the distribution of the length of the shortest path as an open question. However, the probability of hitting a very short path (of length at most k + 1) increases exponentially with k. Let p be the probability that the read at t_1 - m hits the interval (t_0, t_0 + k/2). Then the probability that one of the shorter paths over the last k/2 reads hits that interval is 1 - (1 - p)^{k/2}, where p is on the order of k/t_1.
On the other hand, the probability of not hitting that interval approaches 0 exponentially fast in k. Figure 4 illustrates how wormhole connections can create shorter paths. In Figure 5 (b), we show that the expected length of the path travelled outside the wormhole connections, obtained from the simulations, decreases as the size of the memory increases; in particular, for urMANN and TARDIS the trend is very close to exponential. As shown in Figure 5 (a), this also influences the total length of the paths travelled from timestep 50 to timestep 5. Writing into the memory using weights sampled with uniform probability over all memory cells does not use the memory as efficiently as the other approaches we compare to; in particular, fixing the writing mechanism seems to be useful. Even if the reader does not manage to learn where to read, there are many "short paths" which can considerably reduce the effect of vanishing gradients.
Figure 5: Simulations for TARDIS, a MANN with uniform read and write mechanisms (uMANN), and a MANN with a uniform read head and a write head fixed by a heuristic (urMANN). In our simulations, we assume there is a dependency from timestep 50 to timestep 5, and we run 200 simulations for each model with different memory sizes. Plot (a) shows the expected length of the shortest path from timestep 50 to 5; as the size of the memory gets larger, the length of the shortest path decreases dramatically for all models. Plot (b) shows the expected length of the shortest path travelled outside the wormhole connections for different memory sizes. TARDIS appears to use the memory more efficiently than the other models, in particular when the memory is small, by creating shorter paths.
# 7. On Generalization over Longer Sequences

Graves et al. (2014) have shown that LSTMs cannot generalize well to sequences longer than the ones seen during training, whereas a MANN such as an NTM or a D-NTM has been shown to generalize to longer sequences on a set of toy tasks. We believe that the main reason LSTMs typically do not generalize to sequences longer than those seen during training is that the hidden state of an LSTM network utilizes an unbounded history of the input sequence, and as a result its parameters are optimized under the maximum likelihood criterion to fit sequences of the lengths present in the training examples.
However, an n-gram language model or an HMM does not suffer from this issue: an n-gram LM uses an input context with a fixed window size, and an HMM has the Markov property in its latent space. As argued below, we claim that during training a MANN can also acquire the ability to generalize to sequences longer than the ones appearing in the training set, by modifying the contents of the memory and reading from it.

A regular RNN minimizes the negative log-likelihood of the targets y_t by using the unbounded history represented in its hidden state; it models the parametrized conditional distribution p(y_t|h_t; θ) for the prediction at timestep t, while a MANN learns p(y_t|h_t, r_t; θ). If we assume that r_t captures all the dependencies in the input sequence that y_t depends on, we have p(y_t|h_t, r_t; θ) ≈ p(y_t|r_t, x_t; θ), where r_t represents the dependencies in a limited context window that contains only paths shorter than the sequences seen in the training set. Due to this property, we claim that MANNs such as the NTM, the D-NTM or TARDIS can generalize to longer sequences more easily. In our experiments on Penn Treebank, we show that for a TARDIS language model trained to minimize the log-likelihood of p(y_t|h_t, r_t; θ), evaluating both p(y_t|h_t, r_t; θ) and p(y_t|r_t, x_t; θ) on the test set yields very close results. Moreover, the fact that the best results on the bAbI dataset in (Gulcehre et al., 2016) were obtained with a feedforward controller, and that (Graves et al., 2014) similarly used a feedforward controller to solve some of the toy tasks, also supports our hypothesis.
As a result, what has been written into the memory and what has been read becomes very important for generalizing to longer sequences.

# 8. Experiments

# 8.1 Character-level Language Modeling on PTB

As a preliminary study of the performance of our model, we consider character-level language modelling. We evaluate our models on the Penn Treebank (PTB) corpus (Marcus et al., 1993), using the train, validation and test splits of (Mikolov et al., 2012). On this task, we use layer normalization (Ba et al., 2016) and recurrent dropout (Semeniuta et al., 2016), as these are also used by the state-of-the-art results on this task. Using layer normalization and recurrent dropout improves performance significantly and reduces overfitting.
We train our models with Adam (Kingma and Ba, 2014) over sequences of length 150 and show our results in Table 1. In addition to the regular char-LM experiments, in order to confirm our hypothesis about the ability of MANNs to generalize to sequences longer than the ones seen during training, we trained a language model which learns p(y_t|h_t, r_t; θ) using a softmax layer as described in Equation 11. To measure the performance of p(y_t|r_t, x_t; θ) on the test set, we used the softmax layer of the auxiliary cost defined for REINFORCE in Equation 17, for a model trained with REINFORCE and the auxiliary cost. As shown in Table 1, the model's performance using p(y_t|h_t, r_t; θ) is 1.26, while using p(y_t|r_t, x_t; θ) it is 1.28. This gap is small enough to support our assumption that p(y_t|h_t, r_t; θ) ≈ p(y_t|r_t, x_t; θ).
Model
CW-RNN (Koutnik et al., 2014)
HF-MRNN (Sutskever et al., 2011)
ME n-gram (Mikolov et al., 2012)
BatchNorm LSTM (Cooijmans et al., 2016)
Zoneout RNN (Krueger et al., 2016)
LayerNorm LSTM (Ha et al., 2016)
LayerNorm HyperNetworks (Ha et al., 2016)
LayerNorm HM-LSTM & Step Fn. & Slope Annealing (Chung et al., 2016)
Our LSTM + Layer Norm + Dropout
TARDIS + REINFORCE + R
TARDIS + REINFORCE + Auxiliary Cost
TARDIS + REINFORCE + Auxiliary Cost + R
TARDIS + Gumbel Softmax + ST + R
Table 1: Character-level language modelling results on the Penn Treebank dataset. TARDIS with Gumbel softmax and the straight-through (ST) estimator performs better than REINFORCE and is competitive with the state of the art on this task. "+ R" denotes the use of the RESET gates α and β.

# 8.2 Sequential Stroke Multi-digit MNIST Task

In this subsection, we introduce a new pen-stroke based sequential multi-digit MNIST prediction task as a benchmark for long-term dependency modelling, and we benchmark the performance of LSTM and TARDIS on this challenging task.

# 8.2.1 Task and Dataset

Recently, de Jong (2016) introduced an MNIST pen-stroke classification task, together with a dataset of pen-stroke sequences representing the skeleton of the digits in the MNIST dataset. Each MNIST digit image I is represented as a sequence of quadruples {dx_i, dy_i, eos_i, eod_i}_{i=1}^{T}, where T is the number of pen strokes defining the digit, (dx_i, dy_i) is the pen offset from the previous to the current stroke (each component can be 1, -1 or 0), eos_i is a binary feature denoting end of stroke, and eod_i is a binary feature denoting end of digit. In the original dataset, the first quadruple contains the absolute position (x, y) instead of the offsets (dx, dy); without loss of generality, we set the starting position (x, y) to (0, 0) in our experiments.
Each digit is represented by about 40 strokes on average, and the task is to predict the digit at the end of the stroke sequence. While this dataset was proposed for incremental sequence learning in (de Jong, 2016), we consider a multi-digit version of it to benchmark models that can handle long-term dependencies. Specifically, given a sequence of pen-stroke sequences, the task is to predict the sequence of digits corresponding to each pen-stroke sequence, in the given order. This is challenging because it requires the model to learn to predict each digit from its pen-stroke sequence, to count the number of digits, and to remember and generate them in the same order after seeing all the strokes. In our experiments we consider three versions of this task, with 5-, 10-, and 15-digit sequences respectively.
We generated 200,000 training data points by randomly sampling digits from the MNIST training set, and similarly 20,000 validation and 20,000 test data points by randomly sampling digits from the MNIST validation and test sets respectively. The average lengths of the stroke sequences in these tasks are 199, 399, and 599 respectively.

Figure 6: An illustration of the sequential MNIST strokes task with multiple digits. The network is first given the sequence of stroke (location) information for each MNIST digit as input; during prediction, the network tries to predict the MNIST digits it has just seen. While predicting, the predictions from the previous time steps are fed back into the network, and at the first prediction step the model receives a special <bos> token.
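The construction of a multi-digit example can be sketched as follows. We assume each digit is available as a list of (dx, dy, eos, eod) quadruples as described above; the function name, the dummy data, and the exact padding and target conventions are illustrative assumptions, not the paper's preprocessing code.

```python
import numpy as np

def make_example(digit_strokes, rng, n_digits=5, bos_token=10):
    """Concatenate stroke sequences of randomly chosen digits into one example.

    digit_strokes: list where digit_strokes[c] is a list of arrays of shape (T_i, 4)
                   holding the (dx, dy, eos, eod) quadruples of examples of class c.
    Returns the input stroke sequence and the target digit sequence; at prediction
    time the model is first fed `bos_token` and then its own previous predictions.
    """
    labels = rng.integers(0, 10, size=n_digits)
    strokes = []
    for c in labels:
        candidates = digit_strokes[c]
        strokes.append(candidates[rng.integers(len(candidates))])
    x = np.concatenate(strokes, axis=0)          # (sum_i T_i, 4) input sequence
    y = np.asarray(labels)                       # digits to emit after the strokes
    return x, y, bos_token

# usage with dummy data: two fake stroke sequences per class
rng = np.random.default_rng(0)
dummy = [[rng.integers(-1, 2, size=(40, 4)).astype(float) for _ in range(2)]
         for _ in range(10)]
x, y, bos = make_example(dummy, rng, n_digits=5)
```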
# 8.2.2 Results

We benchmark the performance of LSTM and TARDIS on this new task. Both models receive the sequence of pen strokes and, at the end of the sequence, are expected to generate the corresponding sequence of digits, prompted by the special <bos> token. The task is illustrated in Figure 6. We evaluate the models by per-digit error rate, and we also compare the performance of TARDIS with REINFORCE against TARDIS with gumbel softmax. All models were trained for the same number of updates, with early stopping based on the per-digit error rate on the validation set. Results for all three versions of the task are reported in Table 2. From the table, we can see that TARDIS performs better than LSTM in all three versions of the task. Also, TARDIS with gumbel softmax performs slightly better than TARDIS with REINFORCE, which is consistent with our other experiments.

Model | 5-digits | 10-digits | 15-digits
LSTM | 3.54% | 3.00% | 8.81%
TARDIS with REINFORCE | 2.56% | 2.23% | 3.67%
TARDIS with gumbel softmax | 1.89% | 2.09% | 3.09%

Table 2: Per-digit test error on the sequential stroke multi-digit MNIST task with 5, 10, and 15 digits.

We also compare the learning curves of the three models in Figure 7.
From the figure we can see that TARDIS learns to solve the task faster than LSTM by effectively utilizing the given memory slots. Also, TARDIS with gumbel softmax converges faster than TARDIS with REINFORCE.

Figure 7: Learning curves (validation error rate versus epochs) for LSTM, TARDIS with REINFORCE, and TARDIS with Gumbel softmax on the sequential stroke multi-digit MNIST task with 5, 10, and 15 digits respectively.

# 8.3 NTM Tasks

Graves et al. (2014) proposed the associative recall and copy tasks to evaluate a model's ability to learn simple algorithms and to generalize to sequences longer than the ones seen during training. We trained a TARDIS model with 4 features for the address part and 32 features for the memory content part, a hidden state of size 120, and a memory of size 16. We train the model with Adam and a learning rate of 3e-3. We show the results in Table 3: TARDIS was able to solve both tasks, both with Gumbel softmax and with REINFORCE.

Model | Copy Task | Associative Recall
D-NTM cont. (Gulcehre et al., 2016) | Success | Success
D-NTM discrete (Gulcehre et al., 2016) | Success | Failure
NTM (Graves et al., 2014) | Success | Success
TARDIS + Gumbel Softmax + ST | Success | Success
TARDIS + REINFORCE + Auxiliary Cost | Success | Success

Table 3: We consider a model successful on copy or associative recall if its validation cost (binary cross-entropy) is lower than 0.02 over the sequences of maximum length seen during training. The threshold of 0.02 follows (Gulcehre et al., 2016).

# 8.4 Stanford Natural Language Inference

Bowman et al. (2015) proposed a task to test a machine learning algorithm's ability to infer whether two given sentences entail each other, contradict each other, or are neutral (semantically independent).
This task can be considered a long-term dependency task if the premise and the hypothesis are presented to the model in sequential order, as also explored by Rocktäschel et al. (2015), because the model must learn the dependency relationship between the hypothesis and the premise. Our model first reads the premise, then the hypothesis, and at the end of the hypothesis it predicts whether the premise and the hypothesis entail or contradict each other. The model proposed by Rocktäschel et al. (2015) attends over its previous hidden states over the premise while it reads the hypothesis; in that sense, their model can still be considered to include a task-specific architectural design choice. TARDIS and our baseline LSTM models do not include any task-specific architectural design choices. In Table 4, we compare the results of the different models: our model performs significantly better than the others. We note, however, that it has recently been shown that with architectural tweaks it is possible to design a model specifically for this task and achieve 88.2% test accuracy (Chen et al., 2016).
Model | Test Accuracy
Word by Word Attention (Rocktäschel et al., 2015) | 83.5
Word by Word Attention two-way (Rocktäschel et al., 2015) | 83.2
LSTM + LayerNorm + Dropout | 81.7
TARDIS + REINFORCE + Auxiliary Cost | 82.4
TARDIS + Gumbel Softmax + ST | 84.3

Table 4: Comparison of different baselines on the SNLI task.

# 9. Conclusion

In this paper, we propose a simple and efficient memory augmented neural network model which performs well both on algorithmic tasks and on more realistic tasks. Unlike previous approaches, we show better performance on real-world NLP tasks, such as language modelling and SNLI. We also propose a new task for measuring the performance of models that deal with long-term dependencies. We provide a detailed analysis of the effect of an external memory on the gradients, and justify why MANNs generalize better to sequences longer than the ones seen in the training set. We also show that, when an external memory is used, the gradients vanish at a much slower rate (if they vanish at all). Our theoretical results should encourage further studies in the direction of developing better attention mechanisms that can create wormhole connections efficiently.
# Acknowledgments

We thank Chinnadhurai Sankar for suggesting the phrase "wormhole connections" and for proof-reading the paper. We would like to thank Dzmitry Bahdanau for comments and feedback on an earlier version of this paper. We also thank the developers of Theano^4 for developing such a powerful tool for scientific computing (Theano Development Team, 2016). We acknowledge the support of the following organizations for research funding and computing support: NSERC, Samsung, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. SC is supported by a FQRNT-PBEEE scholarship.
4. http://deeplearning.net/software/theano/

# References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.

Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.

Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Enhancing and combining sequential and tree LSTM for natural language inference. arXiv preprint arXiv:1609.06038, 2016.

Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

Edwin D. de Jong. Incremental sequence learning. arXiv preprint arXiv:1611.03068, 2016.
Incremental sequence learning. arXiv preprint arXiv:1611.03068, 2016.

Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio G. Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià P. Badia, Karl M. Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis.
Hybrid computing using a neural network with dynamic external memory. Nature, advance online publication, October 2016. ISSN 0028-0836. doi: 10.1038/nature20101. URL http://dx.doi.org/10.1038/nature20101.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015.
Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural Turing machine with soft and hard addressing schemes. arXiv preprint arXiv:1607.00036, 2016.

David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.

Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, page 91, 1991.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Eric Jang, Shixiang Gu, and Ben Poole.
Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144, 2016.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.

Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.

Diederik Kingma and Jimmy Ba.
Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.

David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al.
Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Roland Kuhn and Renato De Mori. A cache-based natural language model for speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(6):570–583, 1990.

Sergey Loyka. On singular value inequalities for the sum of two matrices. arXiv preprint arXiv:1507.06630, 2015.

Chris J. Maddison, Andriy Mnih, and Yee Whye Teh.
The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.

Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J. Cernocky. Subword language modeling with neural networks. Preprint (http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf), 2012.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.

Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013a.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio.
On the difficulty of training recurrent neural networks. ICML (3), 28:1310–1318, 2013b.

Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew W. Senior, Greg Wayne, Alex Graves, and Timothy P. Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. CoRR, abs/1610.09027, 2016.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.

Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.

Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.

Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1017–1024, 2011.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv preprint arXiv:1606.02270, 2016.

Endel Tulving. Chronesthesia: Conscious awareness of subjective time. 2002.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015. In press.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio.
Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. CoRR, abs/1505.00521, 2015.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
Under review as a conference paper at ICLR 2017

# OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER

Noam Shazeer¹, Azalia Mirhoseini∗†¹, Krzysztof Maziarz∗², Andy Davis¹, Quoc Le¹, Geoffrey Hinton¹ and Jeff Dean¹

¹Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com
²Jagiellonian University, Cracow, [email protected]

# ABSTRACT
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
# 1 INTRODUCTION AND RELATED WORK

1.1 CONDITIONAL COMPUTATION

Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.

Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.
∗Equally major contributors. †Work done as a member of the Google Brain Residency program (g.co/brainresidency).

Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.

While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality.
We blame this on a combination of the following challenges:

• Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.

• Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.

• Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth.
To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.

• Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing.
• Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions, of parameters.

In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.

1.2 OUR APPROACH: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER

Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation.
While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E Table 9). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.

1.3 RELATED WORK ON MIXTURES OF EXPERTS

Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.

The works above concern top-level mixtures of experts, where the mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation. Our work builds on this use of MoEs as a general purpose neural network component.
While Eigen et al. (2013) uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.

# 2 THE STRUCTURE OF THE MIXTURE-OF-EXPERTS LAYER

The Mixture-of-Experts (MoE) layer consists of a set of n "expert networks" E_1, ..., E_n, and a "gating network" G whose output is a sparse n-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same-sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.

Let us denote by G(x) and E_i(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module can be written as follows:

y = \sum_{i=1}^{n} G(x)_i E_i(x)    (1)

We save computation based on the sparsity of the output of G(x). Wherever G(x)_i = 0, we need not compute E_i(x). In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.

Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014). A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers.
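To make the sparse combination in Equation (1) concrete, here is a minimal NumPy sketch of a MoE forward pass for a single input. It is an illustration only: the expert class, the layer sizes, and the fixed gate vector are invented for the example and are not the paper's implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

class ToyExpert:
    """A one-hidden-layer feed-forward expert (illustrative, made-up sizes)."""
    def __init__(self, d_in, d_hidden, d_out, rng):
        self.w1 = rng.standard_normal((d_in, d_hidden)) * 0.01
        self.w2 = rng.standard_normal((d_hidden, d_out)) * 0.01

    def __call__(self, x):
        return relu(x @ self.w1) @ self.w2

def moe_forward(x, gate, experts):
    """y = sum_i G(x)_i * E_i(x); experts with a zero gate value are never evaluated."""
    y = np.zeros(experts[0].w2.shape[1])
    for i, g in enumerate(gate):
        if g != 0.0:               # sparsity: skip inactive experts entirely
            y += g * experts[i](x)
    return y

rng = np.random.default_rng(0)
experts = [ToyExpert(8, 16, 8, rng) for _ in range(4)]
x = rng.standard_normal(8)
gate = np.array([0.7, 0.0, 0.3, 0.0])     # only experts 0 and 2 are active
print(moe_forward(x, gate, experts).shape)  # (8,)
```

Because experts with a zero gate value are skipped entirely, the cost of the layer scales with the number of active experts rather than with n.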
2.1 GATING NETWORK

Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix W_g and then apply the Softmax function:

G_σ(x) = Softmax(x · W_g)    (2)

Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to -∞
(which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix W_noise.

G(x) = Softmax(KeepTopK(H(x), k))    (3)

H(x)_i = (x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i)    (4)
KeepTopK(v, k)_i = v_i if v_i is in the top k elements of v, and -∞ otherwise.    (5)
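A rough NumPy sketch of Equations (2)–(5), assuming a single input vector and invented weight shapes; numerical details such as tie-breaking are ignored, so this is a sketch rather than a reference implementation.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def softplus(z):
    return np.log1p(np.exp(z))

def noisy_top_k_gate(x, w_g, w_noise, k, rng):
    """Noisy top-k gating: add noise, keep the k largest logits, softmax the rest to zero."""
    clean = x @ w_g                                         # (x . W_g)_i
    noise = rng.standard_normal(clean.shape) * softplus(x @ w_noise)
    h = clean + noise                                       # Eq. (4)
    top_k = np.argsort(h)[-k:]                              # indices of the k largest entries
    masked = np.full_like(h, -np.inf)
    masked[top_k] = h[top_k]                                # Eq. (5): KeepTopK
    return softmax(masked)                                  # Eq. (3); -inf entries become 0

rng = np.random.default_rng(0)
n_experts, d = 8, 16
w_g = rng.standard_normal((d, n_experts)) * 0.1
w_noise = rng.standard_normal((d, n_experts)) * 0.1
x = rng.standard_normal(d)
gate = noisy_top_k_gate(x, w_g, w_noise, k=2, rng=rng)
print(np.count_nonzero(gate))   # 2: only two experts receive nonzero gate values
```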
Training the Gating Network: We train the gating network by simple back-propagation, along with the rest of the model. If we choose k > 1, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015) who use boolean gates and a REINFORCE-style approach to train the gating network.

# 3 ADDRESSING PERFORMANCE CHALLENGES

3.1 THE SHRINKING BATCH PROBLEM

On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n << b examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:

Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples. Thus, we achieve a factor of d improvement in expert batch size.

In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.
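A back-of-the-envelope illustration of the batch-size argument, with all numbers made up: on a single device each expert sees roughly kb/n examples, while sharing one copy of each expert across d synchronized data-parallel devices raises this to roughly kbd/n.

```python
# Hypothetical numbers, chosen only to illustrate the scaling argument.
b = 1024        # per-device batch size (examples)
k = 4           # experts selected per example
n = 512         # total number of experts
d = 32          # number of data-parallel devices

per_expert_naive = k * b / n          # one device, experts replicated everywhere
per_expert_shared = k * b * d / n     # d synchronized devices, one shared copy per expert

print(per_expert_naive)    # 8.0   -> far too small to use the hardware efficiently
print(per_expert_shared)   # 256.0 -> a factor of d improvement
```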
This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.

Taking Advantage of Convolutionality: In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.

Increasing Batch Size for a Recurrent MoE: We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of an LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. Gruslys et al. (2016) describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.
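A minimal sketch of the convolutional trick described above, assuming the previous layer produces a [batch, time, features] array: folding the time dimension into the batch dimension lets a single MoE call see every position at once. The array shapes are invented for illustration.

```python
import numpy as np

batch, time_steps, features = 32, 20, 512
prev_layer_out = np.zeros((batch, time_steps, features))   # placeholder activations

# Fold the time dimension into the batch dimension so one MoE call covers all positions.
moe_input = prev_layer_out.reshape(batch * time_steps, features)
print(moe_input.shape[0])   # 640 examples per MoE call instead of 32

# After the MoE has produced outputs with the same leading dimension,
# the result can be reshaped back to [batch, time, features]:
moe_output = moe_input.reshape(batch, time_steps, features)   # stand-in for the real MoE output
```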
3.2 NETWORK BANDWIDTH

Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.

# 4 BALANCING EXPERT UTILIZATION

We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.¹ We take a soft constraint approach.
We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss L_importance, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor w_importance. This additional loss encourages all experts to have equal importance.

Importance(X) = \sum_{x \in X} G(x)    (6)

L_importance(X) = w_importance · CV(Importance(X))^2    (7)
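A small NumPy sketch of Equations (6) and (7); the gate matrix and the scaling factor below are invented for the example.

```python
import numpy as np

def importance_loss(gates, w_importance):
    """CV^2 penalty on per-expert importance (batchwise sum of gate values)."""
    importance = gates.sum(axis=0)                  # Eq. (6): one value per expert
    cv = importance.std() / importance.mean()       # coefficient of variation
    return w_importance * cv ** 2                   # Eq. (7)

# Toy batch: 4 examples, 3 experts, rows sum to 1.
gates = np.array([[0.9, 0.1, 0.0],
                  [0.8, 0.2, 0.0],
                  [0.7, 0.0, 0.3],
                  [0.6, 0.4, 0.0]])
print(importance_loss(gates, w_importance=0.1))      # nonzero: expert 0 dominates the batch

balanced = np.full((4, 3), 1.0 / 3.0)
print(importance_loss(balanced, w_importance=0.1))   # 0.0: perfectly balanced importance
```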
¹ Bengio et al. (2015) also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of k. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.

While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, L_load, which ensures balanced loads.
Appendix A contains the definition of this function, along with experimental results.

# 5 EXPERIMENTS

5.1 1 BILLION WORD LANGUAGE MODELING BENCHMARK

Dataset: This dataset, introduced by (Chelba et al., 2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.

Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models varies from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.

MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.

Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.

The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.
Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.

Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2.
Details 6 # Under review as a conference paper at ICLR 2017 Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C. Best Published Results Low-Budget MoE Model Medium-Budget MoE Model High-Budget MoE Model Test Test #Parameters Perplexity Perplexity excluding embedding 10 epochs 100 epochs and softmax layers 151 million 4303 million 4313 million 4371 million 34.7 34.1 31.3 28.0 30.6 Training Time 10 epochs 59 hours, 32 k40s 151 million 8.9 million 15 hours, 16 k40s 33.8 million 17 hours, 32 k40s 142.7 million 47 hours, 32 k40s ops/timestep TFLOPS /GPU 1.09 0.74 1.22 1.56 can be found in Appendix C.2. Results of these three models form the bottom line of Figure 2-right. Table 1 compares the results of these models to the best previously-published result on this dataset . Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation. Computational Efï¬ ciency:
Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.

For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.
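As an illustration of this bookkeeping, with all numbers invented: TFLOPS/GPU is simply the floating point operations per training batch divided by the observed step time and by the number of GPUs.

```python
# Invented example values, only to show the arithmetic.
flops_per_batch = 3.2e13   # forward + backward + softmax training, multiply and add counted separately
step_time_s = 1.25         # observed wall-clock time for one training step
num_gpus = 32

tflops_per_gpu = flops_per_batch / step_time_s / num_gpus / 1e12
print(round(tflops_per_gpu, 2))   # 0.8
```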
5.2 100 BILLION WORD GOOGLE NEWS CORPUS

Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep). (The figure plots test perplexity against model parameters excluding embedding and softmax, after training on 10B words and after training on 100B words.)

On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements. We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words.
Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 7 # Under review as a conference paper at ICLR 2017 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D. Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves signiï¬ cantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets. Even at 65536 experts (99.994% layer sparsity), computational efï¬ ciency for the model stays at a respectable 0.72 TFLOPS/GPU. 5.3 MACHINE TRANSLATION (SINGLE LANGUAGE PAIR) Model Architecture: Our model was a modiï¬ ed version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix E. Datasets: We benchmarked our method on the WMTâ
14 Enâ Fr and Enâ De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental proto- cols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combina- tion of newstest2012 and newstest2013 was used as the development set. We also tested the same model on a Googleâ
Table 2: Results on WMT'14 En→Fr newstest2014 (bold values represent best results).

| Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|
| MoE with 2048 Experts | 2.69 | 40.35 | 85M | 8.7B | 3 days/64 k40s |
| MoE with 2048 Experts (longer training) | 2.63 | 40.56 | 85M | 8.7B | 6 days/64 k40s |
| GNMT (Wu et al., 2016) | 2.79 | 39.22 | 214M | 278M | 6 days/96 k80s |
| GNMT+RL (Wu et al., 2016) | 2.96 | 39.92 | 214M | 278M | 6 days/96 k80s |
| PBMT (Durrani et al., 2014) | | 37.0 | | | |
| LSTM (6-layer) (Luong et al., 2015b) | | 31.5 | | | |
| LSTM (6-layer+PosUnk) (Luong et al., 2015b) | | 33.1 | | | |
| DeepAtt (Zhou et al., 2016) | | 37.7 | | | |
| DeepAtt+PosUnk (Zhou et al., 2016) | | 39.2 | | | |

Table 3: Results on WMT'14 En→De newstest2014 (bold values represent best results).

| Model | Test BLEU |
|---|---|
| MoE with 2048 Experts | 26.03 |
| GNMT (Wu et al., 2016) | 24.91 |
| GNMT+RL (Wu et al., 2016) | 24.66 |
| PBMT (Durrani et al., 2014) | 20.7 |
| DeepAtt (Zhou et al., 2016) | 20.6 |

Table 4: Results on the Google Production En→Fr dataset (bold values represent best results).

| Model | Eval Perplexity | Eval BLEU | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|---|---|
| MoE with 2048 Experts | 2.60 | 37.27 | 2.69 | 36.57 | 85M | 8.7B | 1 day/64 k40s |
| GNMT (Wu et al., 2016) | 2.78 | 35.80 | 2.87 | 35.56 | 214M | 278M | 6 days/96 k80s |
Results: Tables 2, 3, and 4 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En→Fr and En→De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity scores are also better.² On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.

5.4 MULTILINGUAL MACHINE TRANSLATION

Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix E for details on model architecture. We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.
Results: Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.
Table 5: Multilingual Machine Translation (bold values represent best results).

| | GNMT-Mono | GNMT-Multi | MoE-Multi | MoE-Multi vs. GNMT-Multi |
|---|---|---|---|---|
| Parameters | 278M / model | 278M | 8.7B | |
| ops/timestep | 212M | 212M | 102M | |
| training time, hardware | various | 21 days, 96 k20s | 12 days, 64 k40s | |
| Perplexity (dev) | | 4.14 | 3.35 | -19% |
| French → English Test BLEU | 36.47 | 34.40 | 37.46 | +3.06 |
| German → English Test BLEU | 31.77 | 31.17 | 34.80 | +3.63 |
| Japanese → English Test BLEU | 23.41 | 21.62 | 25.91 | +4.29 |
| Korean → English Test BLEU | 25.42 | 22.87 | 28.71 | +5.84 |
| Portuguese → English Test BLEU | 44.40 | 42.53 | 46.13 | +3.60 |
| Spanish → English Test BLEU | 38.00 | 36.04 | 39.39 | +3.35 |
| English → French Test BLEU | 35.37 | 34.00 | 36.59 | +2.59 |
| English → German Test BLEU | 26.43 | 23.15 | 24.53 | +1.38 |
| English → Japanese Test BLEU | 23.66 | 21.10 | 22.78 | +1.68 |
| English → Korean Test BLEU | 19.75 | 18.41 | 16.62 | -1.79 |
| English → Portuguese Test BLEU | 38.40 | 37.35 | 37.90 | +0.55 |
| English → Spanish Test BLEU | 34.50 | 34.25 | 36.21 | +1.96 |

# 6 CONCLUSION
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.

ACKNOWLEDGMENTS

We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better.

² Reported perplexities relative to the tokenization used by both our models and GNMT.
# REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng.
TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. CoRR, abs/1611.06194, 2016. URL http://arxiv.org/abs/1611.06194.

A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. Courville.
Dynamic Capacity Networks. ArXiv e-prints, November 2015.

Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu.
Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.

K. Cho and Y. Bengio. Exponentially Increasing the Capacity-to-Computation Ratio for Conditional Computation in Deep Learning. ArXiv e-prints, June 2014.

Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. Neural Computing, 2002.