Columns: id, title, content, prechunk_id, postchunk_id, arxiv_id, references
1701.00299#34
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
cation. The idea is to first classify images to coarse categories and then to fine categories. This idea has been explored by numerous prior works [24, 6, 10], but here we show that the same idea can be implemented via a D2NN trained end to end. We use ILSVRC-10, a subset of the ILSVRC-65 [9]. In ILSVRC-10, 10 classes are organized into a 3-layer hierarchy: 2 superclasses, 5 coarse classes and 10 leaf classes. Each class has 500 training images, 50 validation images, and 150 test images. As in Fig. 3d), the hierarchy in this D2NN mirrors the semantic hierarchy in ILSVRC-10.
1701.00299#33
1701.00299#35
1701.00299
[ "1511.06297" ]
1701.00299#35
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
An image first goes through the root N1. Then Q1 decides whether to descend the left branch (N2 and its children), and Q2 decides whether to descend the right branch (N3 and its children). The leaf nodes N4-N8 are each responsible for classifying two fine-grained leaf classes. It is important to note that an input image can go down parallel paths in the hierarchy, e.g. descending both the left branch and the right branch, because Q1 and Q2 make separate decisions. This "multi-threading"
1701.00299#34
1701.00299#36
1701.00299
[ "1511.06297" ]
1701.00299#36
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
allows the network to avoid committing to a single path prematurely if an input image is ambiguous. Fig. 2d) plots the accuracy-cost curve of our hierarchical D2NN. The accuracy is measured as the proportion of correctly classified test examples. The cost is measured as the number of multiplications, normalized by the cost of a conventional DNN consisting only of the regular nodes (denoted as NN in the figure). We can see that the hierarchical D2NN can match the accuracy of the full network with about half of the computational cost. Fig. 4 (right) plots, for the hierarchical D2NN, the distribution of examples going through execution sequences with different numbers of nodes activated. Due to the parallelism of D2NN, there can be many different execution sequences. We also see that as λ increases, accuracy is given more weight and more nodes are activated. Comparison with Dynamic Capacity Networks: In this experiment we empirically compare our approach to closely related prior work. Here we compare D2NNs with Dynamic Capacity Networks (DCN) [2], for which the effi-
1701.00299#35
1701.00299#37
1701.00299
[ "1511.06297" ]
1701.00299#37
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
ciency measurement is the absolute number of multiplications. Given an image, a DCN applies an additional high-capacity subnetwork to a set of image patches, selected using a hand-designed saliency-based policy. The idea is that more intensive processing is only necessary for certain image regions. To compare, we evaluate with the same multiclass classification task on Cluttered MNIST [25], which consists of MNIST digits randomly placed on a background cluttered with fragments of other digits. We train a chain D2NN of length 4, which implements the same idea of choosing a high-capacity alternative subnetwork for certain inputs. Fig. 6 plots the accuracy-cost curve of our D2NN as well as the accuracy-cost point achieved by the DCN in [2]: an accuracy of 0.9861 and a cost of 2.77 ×
1701.00299#36
1701.00299#38
1701.00299
[ "1511.06297" ]
1701.00299#38
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
10^7. The closest point on our curve has a slightly lower accuracy of 0.9698 but slightly better efficiency (a cost of 2.66 × 10^7). Note that although our accuracy of 0.9698 is lower, it compares favorably to those of other state-of-the-art methods such as DRAW [16]: 0.9664 and RAM [25]: 0.9189. Visualization of Examples in Different Paths: In Fig. 5 (left), we show face examples in the high-low D2NN for λ=0.4. Examples in the low-capacity path are generally easier (e.g. more frontal) than examples in the high-capacity path. In Fig. 5 (right), we show car examples in the hierarchical D2NN with 1) a single path executed and 2) the full graph executed (for λ=1). They match our intuition that examples with a single path executed should be easier (e.g. less occlusion) to classify than examples with the full graph executed. CIFAR-10 Results: We train a Cascade D2NN on CIFAR-
1701.00299#37
1701.00299#39
1701.00299
[ "1511.06297" ]
1701.00299#39
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Figure 4. Distribution of examples going through different execution paths (plotted for λ = 0.525, λ = 0.8 and λ = 1). Skipped nodes are in grey.
1701.00299#38
1701.00299#40
1701.00299
[ "1511.06297" ]
1701.00299#40
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
The hyperparameter λ controls the trade-off between accuracy and efficiency; a bigger λ values accuracy more. Left: for the high-low capacity D2NN. Right: for the hierarchical D2NN. The X-axis is the number of nodes activated. Figure 5. Examples with different paths in a high-low D2NN (left) and a hierarchical D2NN (right).
1701.00299#39
1701.00299#41
1701.00299
[ "1511.06297" ]
1701.00299#41
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
10, where the corresponding DNN baseline is ResNet-110. We initialize this D2NN with pre-trained ResNet-110 weights, apply cross-entropy losses on regular nodes, and tune the mixed-loss weight as explained in Sec. 4. We see a 30% reduction of cost with a 2% (relative) loss of accuracy, and a 62% reduction of cost with a 7% (relative) loss of accuracy. The D2NN's ability to improve efficiency relies on the assumption that not all inputs require the same amount of computation. In CIFAR-10, all images are low resolution (32 × 32), and it is likely that few images are significantly easier to classify than others. As a result, the efficiency improvement is modest compared to other datasets. Figure 6. Accuracy-cost curve for a chain D2NN on the CMNIST task compared to DCN [2]. # 7. Acknowledgments This work is partially supported by the National Science Foundation under Grant No. 1539011 and gifts from Intel. # Appendix # A. Implementation Details
1701.00299#40
1701.00299#42
1701.00299
[ "1511.06297" ]
1701.00299#42
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
We implement the D2NN framework in Torch [1]. Torch already provides implementations of conventional neural network modules (nodes), so a user can specify the subnetwork architecture inside a control node or a regular node using existing Torch functionalities. Our framework then handles the communication between the user-defined nodes in the forward and backward pass. To handle parallel paths, default-valued nodes and nodes with multiple data parents, we need to keep track of an example's execution status (which nodes are activated by this example) and output status (which nodes have output for this example). An example's output status is different from its execution status if some nodes are not activated but have default values. For runtime efficiency, we implement the tracking of examples at the mini-batch level. That is, we perform forward and backward passes for a mini-batch of examples as a regular DNN does. Each mini-batch consists of several mini-bags of images (see the code sketch below).
1701.00299#41
1701.00299#43
1701.00299
[ "1511.06297" ]
1701.00299#43
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# 6. Conclusion We have introduced Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network that allows selective execution. Extensive experiments have demonstrated that D2NNs are flexible and effective for optimizing accuracy-efficiency trade-offs. We describe the implementation of the D2NN learning procedure as two steps. First, the preprocessing step: when a user-defined D2NN model is fed into our framework, we first perform a breadth-first search to get the DAG order of the nodes while performing structural error checks, constructing the data and control relationships between nodes, and calculating the cost (number of multiplications) of each node (see the code sketch below).
1701.00299#42
1701.00299#44
1701.00299
[ "1511.06297" ]
1701.00299#44
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
After the preprocessing, the training step is similar to a regular DNN: a forward pass and a backward pass. All nodes are visited according to a topological ordering in the forward pass and the reverse ordering in the backward pass. For each function node, the forward pass has three steps: fetch inputs, forward inside the node, and send data or control signals to children nodes. When dealing with multiple data inputs and multiple control signals, the D2NN will filter out examples with more than one null input or with all-negative control signals. When a default value has been set for a node, all examples have to send out data; if the node is not activated for a particular example, the output will take the default value. A backward pass has similar logic: fetch gradients from children, perform the backward pass inside, and send out gradients to parents. It is worth noting that when a default value is used in a node, the gradients can be blocked by this node because it is not actually executed (see the code sketch below).
1701.00299#43
1701.00299#45
1701.00299
[ "1511.06297" ]
1701.00299#45
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# B. ILSVRC-10 Semantic Hierarchy The ILSVRC-10 dataset is a subset of the ILSVRC-65 dataset [9]. In our ILSVRC-10, there are 10 classes organized into a 3-layer hierarchy: 2 superclasses, 5 coarse classes and 10 leaf classes, as in Fig. 7. Each class has 500 training images, 50 validation images, and 150 test images. # C. Configurations
1701.00299#44
1701.00299#46
1701.00299
[ "1511.06297" ]
1701.00299#46
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
High-Low Capacity D2NN: The high-low capacity D2NN consists of a single control node (Q1) and three regular nodes (N1, N2, N3) as illustrated in Fig. 3a) (see the code sketch below). • Node N1: a convolutional layer with a 3×3 filter size, 8 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2. • Node N2: a convolutional layer with a 3×3 filter size and 16 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 512 neurons followed by another fully connected layer with the 2-class output.
1701.00299#45
1701.00299#47
1701.00299
[ "1511.06297" ]
1701.00299#47
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N3: three 3×3 max-pooling layers, each with a stride of 2, followed by two fully connected layers with 32 neurons and the 2-class output. • Node Q1: a convolutional layer with a 3×3 filter size and 2 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 128 neurons followed by another fully connected layer with the 2-action output. Cascade D2NN: The cascade D2NN consists of a sequence of seven regular nodes (N1 to N7) and three control nodes (Q1-Q3) as in Fig. 3b).
1701.00299#46
1701.00299#48
1701.00299
[ "1511.06297" ]
1701.00299#48
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N1: a convolutional layer with a 3×3 filter size, 2 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2. • Node N2: three 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output. • Node N3: two convolutional layers with 3×3 filter sizes and 2 and 8 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2. • Node N4: two 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output.
1701.00299#47
1701.00299#49
1701.00299
[ "1511.06297" ]
1701.00299#49
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N5: two convolutional layers with 3×3 filter sizes and 4 and 16 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2. • Node N6: two 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output. • Node N7:
1701.00299#48
1701.00299#50
1701.00299
[ "1511.06297" ]
1701.00299#50
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
five convolutional layers, all with 3×3 filter sizes and 2, 8, 32, 32, 64 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2 except for the third and fifth layers. The output is reshaped and fed into a fully connected layer with 512 neurons followed by another fully connected layer with the 2-class output.
1701.00299#49
1701.00299#51
1701.00299
[ "1511.06297" ]
1701.00299#51
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node Q1, Q2, Q3: the input is reshaped and fed into a fully connected layer with the 2-action output. Chain D2NN: The Chain D2NN is shaped as a chain, where each link consists of a control node selecting between two regular nodes. In the experiments on the LFW-B dataset, we use a 3-stage Chain D2NN as in Fig. 3c). • Node N1: a convolutional layer with a 3×3 filter size, 2 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2.
1701.00299#50
1701.00299#52
1701.00299
[ "1511.06297" ]
1701.00299#52
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N2: a convolutional layer with a 1×1 filter size and 16 filters. • Node N3: a convolutional layer with a 3×3 filter size and 16 filters. • Node N4: a 3×3 max-pooling layer with a stride of 2. • Node N5: a convolutional layer with a 1×1 filter size and 32 filters.
1701.00299#51
1701.00299#53
1701.00299
[ "1511.06297" ]
1701.00299#53
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N6: two convolutional layers with 3×3 filter sizes and 32, 32 filters respectively. Figure 7. The semantic class hierarchy of the ILSVRC-10 dataset: Object splits into Vehicle and Animal; the coarse classes are Boat, Car, Cat, Dog and Bird; the leaf classes include Fireboat, Gondola, Ambulance, Jeep, Egyptian cat, Persian cat, Deerhound, Dane, Ptarmigan and grouse.
1701.00299#52
1701.00299#54
1701.00299
[ "1511.06297" ]
1701.00299#54
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N7: a 3×3 max-pooling layer with a stride of 2. • Node N8: a convolutional layer with a 1×1 filter size and 32 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 256 neurons. • Node N9: a convolutional layer with a 3×3 filter size and 64 filters.
1701.00299#53
1701.00299#55
1701.00299
[ "1511.06297" ]
1701.00299#55
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
The output is reshaped and fed into a fully connected layer with 256 neurons. 2, and three fully connected layers with 2048 neurons, 2048 neurons and the 2 fine-class output respectively. • Node Q1 and Q2: two convolutional layers with 5×5 and 3×3 filter sizes and 16 and 32 filters respectively (the former has a 2×2 padding), each followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into three fully connected layers with 1024 neurons, 1024 neurons and the 2-action output respectively.
1701.00299#54
1701.00299#56
1701.00299
[ "1511.06297" ]
1701.00299#56
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N10: a fully connected layer with the 2-class output. • Node Q1: a convolutional layer with a 3×3 filter size and 8 filters, with a 3×3 max-pooling layer with a stride of 2 before it and a 3×3 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively.
1701.00299#55
1701.00299#57
1701.00299
[ "1511.06297" ]
1701.00299#57
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node Q3-Q7: two convolutional layers with 5×5 and 3×3 filter sizes and 16 and 32 filters respectively (the former has a 2×2 padding), each followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into three fully connected layers with 1024 neurons, 1024 neurons and the 2-action output respectively.
1701.00299#56
1701.00299#58
1701.00299
[ "1511.06297" ]
1701.00299#58
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node Q2: a 3×3 max-pooling layer with a stride of 2 followed by a convolutional layer with a 3×3 filter size and 4 filters. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively. Comparison with Dynamic Capacity Networks: We train a chain D2NN of length 4 similar to Fig. 3c). • Node N1: a convolutional layer with a 3×3 filter size and 24 filters.
1701.00299#57
1701.00299#59
1701.00299
[ "1511.06297" ]
1701.00299#59
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node Q3: a convolutional layer with a 3×3 filter size and 2 filters. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively. • Node N3: a convolutional layer with a 3×3 filter size and 24 filters. • Node N4: a 2×2 max-pooling layer with a stride of 2. Hierarchical D2NN: Fig. 3d) illustrates the design of our hierarchical D2NN.
1701.00299#58
1701.00299#60
1701.00299
[ "1511.06297" ]
1701.00299#60
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N1: a convolutional layer with an 11×11 filter size, 64 filters, a stride of 4 and a 2×2 padding, followed by a 3×3 max-pooling layer with a stride of 2. • Node N6: a convolutional layer with a 3×3 filter size and 24 filters. • Node N7: an identity layer which directly uses its inputs as outputs. • Node N9: a convolutional layer with a 3×3 filter size and 24 filters.
1701.00299#59
1701.00299#61
1701.00299
[ "1511.06297" ]
1701.00299#61
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N2 and N3: a convolutional layer with a 5×5 filter size, 96 filters and a 2×2 padding. • Node N10: a 2×2 max-pooling layer with a stride of 2. • Node N4-N8: a 3×3 max-pooling layer with a stride of 2 followed by three convolutional layers with 3×3 filter sizes and 160, 128, 128 filters respectively. The output is fed into a 3×3 max-pooling layer with a stride of
1701.00299#60
1701.00299#62
1701.00299
[ "1511.06297" ]
1701.00299#62
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node N12: a convolutional layer with a 3×3 filter size and 24 filters. • Node N2, N5, N8, N11: an identity layer. • Node N13: a convolutional layer with a 4×4 filter size, 96 filters, a stride of 2 and no padding, followed by an 11×11 max-pooling layer. The output is reshaped and fed into a fully connected layer with the 10-class output.
1701.00299#61
1701.00299#63
1701.00299
[ "1511.06297" ]
1701.00299#63
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node Q1: a convolutional layer with a 3×3 filter size and 8 filters, with two 2×2 max-pooling layers with strides of 2 before it and one 2×2 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively. • Node Q2: a convolutional layer with a 3×3 filter size and 8 filters, with a 2×2 max-pooling layer with a stride of 2 before it and a 2×2 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.
1701.00299#62
1701.00299#64
1701.00299
[ "1511.06297" ]
1701.00299#64
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
• Node Q3: a convolutional layer with a 3×3 filter size and 8 filters, with a 2×2 max-pooling layer with a stride of 2 before it and a 2×2 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively. • Node Q4: a convolutional layer with a 3×3 filter size and 8 filters, followed by a 2×2 max-pooling layer with a stride of 2. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively. For all 5 D2NNs, all convolutional layers use 1×1 padding and each is followed by a ReLU layer unless specified individually. Each fully connected layer except the output layers is followed by a ReLU layer.
1701.00299#63
1701.00299#65
1701.00299
[ "1511.06297" ]
1701.00299#65
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
# References [1] Torch. http://torch.ch/. 8 [2] A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. C. Courville. Dynamic capacity networks. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2549–2558, 2016. 1, 2, 6, 7, 8 [3] J. M. Alvarez and M. Salzmann.
1701.00299#64
1701.00299#66
1701.00299
[ "1511.06297" ]
1701.00299#66
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Learning the number of neurons in deep networks. In Advances in Neural Information Processing Systems, pages 2270–2278, 2016. 2 [4] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014. 2, 4 [5] E. Bengio, P.-L. Bacon, J. Pineau, and D. Precup.
1701.00299#65
1701.00299#67
1701.00299
[ "1511.06297" ]
1701.00299#67
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015. 2 [6] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems, pages 163–171, 2010. 2, 7 [7] Y. Bengio, N. Léonard, and A. Courville.
1701.00299#66
1701.00299#68
1701.00299
[ "1511.06297" ]
1701.00299#68
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. 2 [8] Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, et al. Dadiannao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pages 609–622. IEEE, 2014. 2 [9] J. Deng, J. Krause, A. C. Berg, and L.
1701.00299#67
1701.00299#69
1701.00299
[ "1511.06297" ]
1701.00299#69
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Fei-Fei. Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3450–3457. IEEE, 2012. 7, 9 [10] J. Deng, S. Satheesh, A. C. Berg, and F. Li. Fast and balanced: Effi-
1701.00299#68
1701.00299#70
1701.00299
[ "1511.06297" ]
1701.00299#70
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
cient label tree learning for large scale object recognition. In Advances in Neural Information Processing Systems, pages 567–575, 2011. 2, 7 [11] M. Denil, L. Bazzani, H. Larochelle, and N. de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151–2184, 2012. 2 [12] L. Denoyer and P. Gallinari.
1701.00299#69
1701.00299#71
1701.00299
[ "1511.06297" ]
1701.00299#71
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014. 1, 3 [13] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pages 1269–1277, 2014. 2 [14] D. Eigen, M. Ranzato, and I. Sutskever.
1701.00299#70
1701.00299#72
1701.00299
[ "1511.06297" ]
1701.00299#72
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013. 3 [15] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Cascade object detection with deformable part models. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2241–2248. IEEE, 2010. 2 [16] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. Draw:
1701.00299#71
1701.00299#73
1701.00299
[ "1511.06297" ]
1701.00299#73
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). Springer, JMLR Workshop and Conference Proceedings, 2015. 2, 7 [17] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1737–
1701.00299#72
1701.00299#74
1701.00299
[ "1511.06297" ]
1701.00299#74
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
1746, 2015. 2 [18] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015. 2 [19] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments.
1701.00299#73
1701.00299#75
1701.00299
[ "1511.06297" ]
1701.00299#75
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Technical Report 07-49, University of Massachusetts, Amherst, October 2007. 5 [20] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991. 3 [21] M. I. Jordan and R. A. Jacobs.
1701.00299#74
1701.00299#76
1701.00299
[ "1511.06297" ]
1701.00299#76
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994. 3 [22] G. B. H. E. Learned-Miller. Labeled faces in the wild: Updates and new reporting procedures. Technical Report UM-CS-2014-003, University of Massachusetts, Amherst, May 2014. 5 [23] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua.
1701.00299#75
1701.00299#77
1701.00299
[ "1511.06297" ]
1701.00299#77
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
A convolutional neural network cascade for face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5325–5334, 2015. 1, 2 [24] B. Liu, F. Sadeghi, M. Tappen, O. Shamir, and C. Liu. Probabilistic label trees for efficient large scale image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 843–850, 2013. 7 [25] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pages 2204–2212, 2014. 2, 4, 7 [26] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller.
1701.00299#76
1701.00299#78
1701.00299
[ "1511.06297" ]
1701.00299#78
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. 4 [27] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017. 1, 2, 6
1701.00299#77
1701.00299#79
1701.00299
[ "1511.06297" ]
1701.00299#79
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
[28] M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber. Deep networks with internal selective attention through feedback connections. In Advances in Neural Information Processing Systems, pages 3545–3553, 2014. 2 [29] Y. Sun, X. Wang, and X. Tang. Deep convolutional network cascade for facial point detection. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3476–
1701.00299#78
1701.00299#80
1701.00299
[ "1511.06297" ]
1701.00299#80
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
3483. IEEE, 2013. 1, 2 [30] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, 1998. 4 [31] P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004. 2 [32] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li.
1701.00299#79
1701.00299#81
1701.00299
[ "1511.06297" ]
1701.00299#81
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016. 2 [33] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10):1943–1955, 2016. 2
1701.00299#80
1701.00299
[ "1511.06297" ]
1612.08083#0
Language Modeling with Gated Convolutional Networks
arXiv:1612.08083v3 [cs.CL] 8 Sep 2017 # Language Modeling with Gated Convolutional Networks # Yann N. Dauphin 1 Angela Fan 1 Michael Auli 1 David Grangier 1 # Abstract The predominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms Oord et al. (2016b) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline.
1612.08083#1
1612.08083
[ "1511.06909" ]
1612.08083#1
Language Modeling with Gated Convolutional Networks
To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks. outperform classical n-gram language models (Kneser & Ney, 1995; Chen & Goodman, 1996). These classical models suffer from data sparsity, which makes it difficult to represent large contexts and thus, long-range dependencies. Neural language models tackle this issue by embedding words in continuous space over which a neural network is applied. The current state of the art for language modeling is based on long short term memory networks (LSTM; Hochreiter et al., 1997) which can theoretically model arbitrarily long dependencies. In this paper, we introduce new gated convolutional networks and apply them to language modeling. Convolutional networks can be stacked to represent large context sizes and extract hierarchical features over larger and larger contexts with more abstractive features (LeCun & Bengio, 1995). This allows them to model long-term dependencies by applying O(N/k) operations over a context of size N and kernel width k. In contrast, recurrent networks view the input as a chain structure and therefore require a linear number O(N) of operations.
1612.08083#0
1612.08083#2
1612.08083
[ "1511.06909" ]
1612.08083#2
Language Modeling with Gated Convolutional Networks
# 1. Introduction Statistical language models estimate the probability distribution of a sequence of words by modeling the probability of the next word given the preceding words, i.e. P(w_0, ..., w_N) = P(w_0) ∏_{i=1}^{N} P(w_i | w_0, ..., w_{i-1}), where w_i are discrete word indices in a vocabulary. Language models are a critical part of systems for speech recognition (Yu & Deng, 2014) and machine translation (Koehn, 2010). Analyzing the input hierarchically bears resemblance to classical grammar formalisms which build syntactic tree structures of increasing granularity, e.g., sentences consist of noun phrases and verb phrases each comprising further internal structure (Manning & Schütze, 1999; Steedman, 2002). Hierarchical structure also eases learning since the number of non-linearities for a given context size is reduced compared to a chain structure, thereby mitigating the vanishing gradient problem (Glorot & Bengio, 2010). Modern hardware is well suited to models that are highly parallelizable. In recurrent networks, the next output depends on the previous hidden state which does not enable parallelization over the elements of a sequence. Convolutional networks, however, are very amenable to this computing paradigm since the computation of all input words can be performed simultaneously (§2). Recently, neural networks (Bengio et al., 2003; Mikolov et al., 2010; Jozefowicz et al., 2016) have been shown to 1Facebook AI Research.
1612.08083#1
1612.08083#3
1612.08083
[ "1511.06909" ]
1612.08083#3
Language Modeling with Gated Convolutional Networks
Correspondence to: Yann N. Dauphin <[email protected]>. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). Gating has been shown to be essential for recurrent neural networks to reach state-of-the-art performance (Jozefowicz et al., 2016). Our gated linear units reduce the vanishing gradient problem for deep architectures by providing a linear path for the gradients while retaining non-linear capabilities (§5.2). We show that gated convolutional networks outperform other recently published language models such as LSTMs trained in a similar setting on the Google Billion Word Benchmark (Chelba et al., 2013). We also evaluate the ability of our models to deal with long-range dependencies on the WikiText-103 benchmark, for which the model is conditioned on an entire paragraph rather than a single sentence, and we achieve a new state-of-the-art on this dataset (Merity et al., 2016). Finally, we show that gated linear units achieve higher accuracy and converge faster than the LSTM-style gating of Oord et al. (2016; §4, §5).
1612.08083#2
1612.08083#4
1612.08083
[ "1511.06909" ]
1612.08083#4
Language Modeling with Gated Convolutional Networks
# 2. Approach In this paper we introduce a new neural language model that replaces the recurrent connections typically used in recurrent networks with gated temporal convolutions. Neural language models (Bengio et al., 2003) produce a representation H = [h_0, . . . , h_N] of the context for each word w_0, . . . , w_N to predict the next word P(w_i | h_i). Recurrent neural networks f compute H through a recurrent function h_i = f(h_{i-1}, w_{i-1}), which is an inherently sequential process that cannot be parallelized over i.¹ Our proposed approach convolves the inputs with a function f to obtain H = f ∗ w and therefore has no temporal dependencies, so it is easier to parallelize over the individual words of a sentence. This process will compute each context as a function of a number of preceding words. Compared to recurrent networks, the context size is fi-
1612.08083#3
1612.08083#5
1612.08083
[ "1511.06909" ]
1612.08083#5
Language Modeling with Gated Convolutional Networks
nite, but we will demonstrate both that infinite contexts are not necessary and that our models can represent large enough contexts to perform well in practice (§5). [Figure 1 sketch: input sentence ("The cat sat on the mat") → lookup table E = D_w → convolution → gating H_0 = A ⊗ σ(B) → a stack of L−1 further convolution+gating blocks → softmax Y = softmax(W H_L).] Figure 1 illustrates the model architecture. Words are represented by a vector embedding stored in a lookup table D^{|V|×e} where |V| is the number of words in the vocabulary and e is the embedding size. The input to our model is a sequence of words w_0, . . . , w_N which are represented by word embeddings E = [D_{w_0}, . . . , D_{w_N}]. We compute the hidden layers h_0, . . . , h_L as
1612.08083#4
1612.08083#6
1612.08083
[ "1511.06909" ]
1612.08083#6
Language Modeling with Gated Convolutional Networks
Figure 1. Architecture of the gated convolutional network for language modeling. h_l(X) = (X ∗ W + b) ⊗ σ(X ∗ V + c)   (1) where m, n are respectively the number of input and output feature maps and k is the patch size, X ∈ R^{N×m} is the input of layer h_l (either word embeddings or the outputs of previous layers), W ∈ R^{k×m×n}, b ∈ R^n, V ∈ R^{k×m×n}, c ∈ R^n are learned parameters, σ is the sigmoid function and ⊗ is the element-wise product between matrices. When convolving inputs, we take care that h_i does not contain information from future words. We address this by shifting the convolutional inputs to prevent the kernels from seeing future context (Oord et al., 2016a). (¹Parallelization is usually done over multiple sequences instead.) Specifi-
1612.08083#5
1612.08083#7
1612.08083
[ "1511.06909" ]
1612.08083#7
Language Modeling with Gated Convolutional Networks
cally, we zero-pad the beginning of the sequence with k − 1 elements, assuming the first input element is the beginning-of-sequence marker, which we do not predict, and k is the width of the kernel. The output of each layer is a linear projection X ∗ W + b modulated by the gates σ(X ∗ V + c). Similar to LSTMs, these gates multiply each element of the matrix X ∗ W + b and control the information passed on in the hierarchy. We dub this gating mechanism Gated Linear Units (GLU); see the code sketch below. Stacking multiple layers on top of the input E gives a representation of the context for each word H = h_L ∘
1612.08083#6
1612.08083#8
1612.08083
[ "1511.06909" ]
1612.08083#8
Language Modeling with Gated Convolutional Networks
has a path â X â Ï (X) without downscaling for the ac- tivated gating units in Ï (X). This can be thought of as a multiplicative skip connection which helps gradients ï¬ ow through the layers. We compare the different gating schemes experimentally in Section §5.2 and we ï¬ nd gated linear units allow for faster convergence to better perplexi- ties. # 4. Experimental Setup # 4.1. Datasets # 3. Gating Mechanisms Gating mechanisms control the path through which infor- mation ï¬ ows in the network and have proven to be use- ful for recurrent neural networks (Hochreiter & Schmidhu- ber, 1997). LSTMs enable long-term memory via a sep- arate cell controlled by input and forget gates. This al- lows information to ï¬ ow unimpeded through potentially many timesteps. Without these gates, information could easily vanish through the transformations of each timestep. In contrast, convolutional networks do not suffer from the same kind of vanishing gradient and we ï¬ nd experimentally that they do not require forget gates. Therefore, we consider models possessing solely output gates, which allow the network to control what informa- tion should be propagated through the hierarchy of lay- ers. We show this mechanism to be useful for language modeling as it allows the model to select which words or features are relevant for predicting the next word. Par- allel to our work, Oord et al. (2016b) have shown the effectiveness of an LSTM-style mechanism of the form tanh(Xâ W+b)â Ï (Xâ V+c) for the convolutional mod- eling of images. Later, Kalchbrenner et al. (2016) extended this mechanism with additional gates for use in translation and character-level language modeling. We report results on two public large-scale language mod- eling datasets. First, the Google Billion Word dataset (Chelba et al., 2013) is considered one of the largest lan- guage modeling datasets with almost one billion tokens and a vocabulary of over 800K words. In this dataset, words appearing less than 3 times are replaced with a special un- known symbol. The data is based on an English corpus of 30, 301, 028 sentences whose order has been shufï¬
1612.08083#7
1612.08083#9
1612.08083
[ "1511.06909" ]
1612.08083#9
Language Modeling with Gated Convolutional Networks
ed. Second, WikiText-103 is a smaller dataset of over 100M tokens with a vocabulary of about 200K words (Merity et al., 2016). Different from GBW, the sentences are con- secutive which allows models to condition on larger con- texts rather than single sentences. For both datasets, we add a beginning of sequence marker <S > at the start of each line and an end of sequence marker </S> at the end of each line. On the Google Billion Word corpus each sequence is a single sentence, while on WikiText-103 a sequence is an entire paragraph. The model sees <S> and </S > as input but only predicts the end of sequence marker </S>. We evaluate models by computing the per- i â log p(wi|...,wiâ 1) on the standard held out plexity e test portion of each dataset. # 4.2. Training Gated linear units are a simpliï¬ ed gating mechanism based on the work of Dauphin & Grangier (2015) for non- deterministic gates that reduce the vanishing gradient prob- lem by having linear units coupled to the gates. This retains the non-linear capabilities of the layer while allowing the gradient to propagate through the linear unit without scal- ing. The gradient of the LSTM-style gating of which we dub gated tanh unit (GTU) is We implement our models in Torch (Collobert et al., 2011) and train on Tesla M40 GPUs. The majority of our models are trained on single GPU, as we focused on identifying compact architectures with good generalization and efï¬ - cient computation at test time. We trained larger models with an 8-GPU setup by copying the model onto each GPU and dividing the batch such that each worker computes 1/8th of the gradients. The gradients are then summed us- ing Nvidia NCCL. The multi-GPU setup allowed us to train models with larger hidden units.
1612.08083#8
1612.08083#10
1612.08083
[ "1511.06909" ]
1612.08083#10
Language Modeling with Gated Convolutional Networks
V[tanh(X) @ o(X)] = tanhâ (X)VX ® o(X) +o'(X)VX @ tanh(X). Notice that it gradually vanishes as we stack layers because of the downscaling factors tanhâ (X) and o/(X). In con- (2) We train using Nesterovâ s momentum (Sutskever et al., 2013). While the cost in terms of memory is storing an- other vector of the size of the parameters, it increases the speed of convergence signiï¬ cantly with minimal additional Language Modeling with Gated Convolutional Networks Name GCNN-13 GCNN-14B GCNN-9 GCNN-8B GCNN-8 GCNN-14 Dataset Google Billion Word wikitext-103 Lookup 128 280 Conv1 [4, 1268] x 1 (5, 512] x 1 [4,807] x 1 [1,512] x 1 [4,900] x 1 6, 850] x 3 1,14 1,128 Conv2.x : ee | x 12 5,128 | x3 ; soy | x4 5,128 | x3 | [4,900] x 7 1,850] x 1 > 1,512 , 1,512 1,512 1, 256 Conv3.x 5,512 | x3 5,256 | x3 5,850] x 4 1, 1024 1,512 1, 1024 1, 1024 Conv4.x 5,1024 | x6 1,1024 | x1 1,850] x 1 1, 2048 1, 2048 1, 1024 Conv5.x 5,1024 | x1 4,850] x 3 1, 4096 Conv6.x [4, 1024] x 1 Conv7.x [4, 2048] x 1 AdaSoftmax 10k,40k,200k 4k,40k,200k 2k,10k,50k | 10k,20k,200k
1612.08083#9
1612.08083#11
1612.08083
[ "1511.06909" ]
1612.08083#11
Language Modeling with Gated Convolutional Networks
Table 1. Architectures for the models. The residual building blocks are shown in brackets with the format [k, n]. â Bâ denotes bottleneck architectures. computation compared to standard stochastic gradient de- scent. The speed of convergence was further increased with gradient clipping (Pascanu et al., 2013) and weight normal- ization (Salimans & Kingma, 2016). Pascanu et al. (2013) argue for gradient clipping because it prevents the gradient explosion problem that characterizes RNNs. However, gradient clipping is not tied to RNNs, as it can be derived from the general concept of trust region methods. Gradient clipping is found using a spherical trust region
1612.08083#10
1612.08083#12
1612.08083
[ "1511.06909" ]
1612.08083#12
Language Modeling with Gated Convolutional Networks
In general, ï¬ nding a good architecture was simple and the rule of thumb is that the larger the model, the better the per- formance. In terms of optimization, we initialize the lay- ers of the model with the Kaiming initialization (He et al., 2015b), with the learning rate sampled uniformly in the interval [1., 2.], the momentum set to 0.99, and clipping set to 0.1. Good hyper-parameters for the optimizer are quite straightforward to ï¬ nd and the optimal values do not change much between datasets.
1612.08083#11
1612.08083#13
1612.08083
[ "1511.06909" ]
1612.08083#13
Language Modeling with Gated Convolutional Networks
# 5. Results Aé* = argmin f(0)+Vf7Ae@ s.t. ||Adl| <e . Vif = â max(||Vfl|,e) =. 4 max(| VFO e «a Empirically, our experiments converge signiï¬ cantly faster with the use of gradient clipping even though we do not use a recurrent architecture. In combination, these methods led to stable and fast con- vergence with comparatively large learning rates such as 1.
1612.08083#12
1612.08083#14
1612.08083
[ "1511.06909" ]
1612.08083#14
Language Modeling with Gated Convolutional Networks
# 4.3. Hyper-parameters LSTMs and recurrent networks are able to capture long term dependencies and are fast becoming cornerstones in natural language processing. In this section, we compare strong LSTM and RNN models from the literature to our gated convolutional approach on two datasets. We ï¬ nd the GCNN outperforms the comparable LSTM re- sults on Google billion words. To accurately compare these approaches, we control for the same number of GPUs and the adaptive softmax output model (Grave et al., 2016a), as these variables have a signiï¬
1612.08083#13
1612.08083#15
1612.08083
[ "1511.06909" ]
1612.08083#15
Language Modeling with Gated Convolutional Networks
cant inï¬ uence on performance. In this setting, the GCNN reaches 38.1 test perplexity while the comparable LSTM has 39.8 perplexity (Table 2). We found good hyper-parameter conï¬ gurations by cross- validating with random search on a validation set. For the number of residual model architecture, we select blocks between {1, . . . , 10}, the size of the embed- dings with {128, . . . , 256}, the number of units between {128, . . . , 2048}, and the kernel width between {3, . . . , 5}. Further, the GCNN obtains strong performance with much greater computational efï¬
1612.08083#14
1612.08083#16
1612.08083
[ "1511.06909" ]
1612.08083#16
Language Modeling with Gated Convolutional Networks
ciency. Figure 2 shows that our approach closes the previously signiï¬ cant gap between models that use the full softmax and models with the usu- ally less accurate hierarchical softmax. Thanks to the adap- Language Modeling with Gated Convolutional Networks Model Sigmoid-RNN-2048 (Ji et al., 2015) Interpolated KN 5-Gram (Chelba et al., 2013) Sparse Non-Negative Matrix LM (Shazeer et al., 2014) RNN-1024 + MaxEnt 9 Gram Features (Chelba et al., 2013) LSTM-2048-512 (Jozefowicz et al., 2016) 2-layer LSTM-8192-1024 (Jozefowicz et al., 2016) BIG GLSTM-G4 (Kuchaiev & Ginsburg, 2017) LSTM-2048 (Grave et al., 2016a) 2-layer LSTM-2048 (Grave et al., 2016a) GCNN-13 GCNN-14 Bottleneck Test PPL Hardware 1 CPU 100 CPUs - 24 GPUs 32 GPUs 32 GPUs 8 GPUs 1 GPU 1 GPU 1 GPU 8 GPUs 68.3 67.6 52.9 51.3 43.7 30.6 23.3â
1612.08083#15
1612.08083#17
1612.08083
[ "1511.06909" ]
1612.08083#17
Language Modeling with Gated Convolutional Networks
43.9 39.8 38.1 31.9 Table 2. Results on the Google Billion Word test set. The GCNN outperforms the LSTMs with the same output approximation. 55 eâ - LSTM+Softmax 50 eâ - GCNN+AdaSoftmax| B 545 2 © & % 40 2 35 30 0 200 400 600 800 1000 MFlops Figure 2. In comparison to the state-of-the-art (Jozefowicz et al., 2016) which uses the full softmax, the adaptive softmax approxi- mation greatly reduces the number of operations required to reach a given perplexity. Model LSTM-1024 (Grave et al., 2016b) GCNN-8 GCNN-14 Test PPL Hardware 1 GPU 1 GPU 4 GPUs 48.7 44.9 37.2 Table 3. Results for single models on the WikiText-103 dataset. lion Word, the average sentence length is quite short â only 20 words. We evaluate on WikiText-103 to determine if the model can perform well on a dataset where much larger contexts are available. On WikiText-103, an input se- quence is an entire Wikipedia article instead of an individ- ual sentence - increasing the average length to 4000 words. However, the GCNN outperforms LSTMs on this problem as well (Table 3). The GCNN-8 model has 8 layers with 800 units each and the LSTM has 1024 units. These results show that GCNNs can model enough context to achieve strong results. tive softmax, the GCNN only requires a fraction of the op- erations to reach the same perplexity values. The GCNN outperforms other single model state-of-the-art approaches except the much larger LSTM of Jozefowicz et al. (2016), a model which requires more GPUs and the much more computationally expensive full softmax. In comparison, the largest model we have trained reaches 31.9 test per- plexity compared to the 30.6 of that approach, but only re- quires training for 2 weeks on 8 GPUs compared to 3 weeks of training on 32 GPUs for the LSTM.
1612.08083#16
1612.08083#18
1612.08083
[ "1511.06909" ]
1612.08083#18
Language Modeling with Gated Convolutional Networks
Note that these re- sults can be improved by either using mixtures of experts (Shazeer et al., 2017) or ensembles of these models. We evaluated on the Gigaword dataset following Chen et al. (2016) to compare with fully connected models. We found that the fully connected and convolutional network reach respectively 55.6 and 29.4 perplexity. We also ran pre- liminary experiments on the much smaller Penn tree bank dataset. When we score the sentences independently, the GCNN and LSTM have comparable test perplexity with 108.7 and 109.3 respectively. However, it is possible to achieve better results by conditioning on previous sen- tences. Unlike the LSTM, we found that the GCNN over- ï¬ ts on this quite small dataset and so we note the model is better suited to larger scale problems.
1612.08083#17
1612.08083#19
1612.08083
[ "1511.06909" ]
1612.08083#19
Language Modeling with Gated Convolutional Networks
# 5.1. Computational Efï¬ ciency Another relevant concern is if the GCNNâ s ï¬ xed context size can thoroughly model long sequences. On Google Bil- â appeared after submission Computational cost is an important consideration for lan- guage models. Depending on the application, there are a number of metrics to consider. We measure the throughput Language Modeling with Gated Convolutional Networks 80 70 =< ReLU| 75) 65 â GTU 70| â GLU 2 2 60 3S fy 265 a g @ 55 = 60 bal 3 ® 59 Lad 55) Lad 50 45 435 5 10 15 20 25 30 35 405 50 100 Epochs Hours Figure 3. Learning curves on WikiText-103 (left) and Google Billion Word (right) for models with different activation mechanisms. Models with gated linear units (GLU) converge faster and to a lower perplexity. LSTM-2048 GCNN-9 GCNN-8 Bottleneck Throughput (CPU) 169 121 179 (GPU) 45,622 29,116 45,878 Responsiveness (GPU) 2,282 29,116 45,878
1612.08083#18
1612.08083#20
1612.08083
[ "1511.06909" ]
1612.08083#20
Language Modeling with Gated Convolutional Networks
Table 4. Processing speed in tokens/s at test time for an LSTM with 2048 units and GCNNs achieving 43.9 perplexity on Google Billion Word. The GCNN with bottlenecks improves the respon- siveness by 20 times while maintaining high throughput. of a model as the number of tokens that can be processed per second. Throughput can be maximized by processing many sentences in parallel to amortize sequential opera- tions. In contrast, responsiveness is the speed of process- ing the input sequentially, one token at a time. Through- put is important because it indicates the time required to process a corpus of text and responsiveness is an indicator of the time to ï¬ nish processing a sentence. A model can have low responsiveness but high throughput by evaluating many sentences simultaneously through batching. In this case, such a model is slow in ï¬ nishing processing individ- ual sentences, but can process many sentences at a good rate. We evaluate the throughput and responsiveness for mod- els that reach approximately 43.9 perplexity on the Google Billion Word benchmark. We consider the LSTM with 2048 units in Table 2, a GCNN-8Bottleneck with 7 Resnet blocks that have a bottleneck structure as described by (He et al., 2015a) and a GCNN-8 without bottlenecks. A bot- tleneck block wedges a k > 1 convolution between two k = 1 layers. This designs reduces computational cost by reducing and increasing dimensionality with the k = 1 lay- ers so that the convolution operates in a lower dimensional space. Our results show that the use of bottleneck blocks is important to maintaining computational efï¬ ciency. The throughput of the LSTM is measured by using a large batch of 750 sequences of length 20, resulting in 15, 000 to- kens per batch. The responsiveness is the average speed to process a sequence of 15, 000 contiguous tokens. Table 4 shows that the throughput for the LSTM and the GCNN are similar. The LSTM performs very well on GPU be- cause the large batch size of 750 enables high paralleliza- tion over different sentences. This is because the LSTM implementation has been thoroughly optimized and uses cuDNN, whereas the cuDNN implementation of convolu- tions is not been optimized for the 1-D convolutions we use in our model.
1612.08083#19
1612.08083#21
1612.08083
[ "1511.06909" ]
1612.08083#21
Language Modeling with Gated Convolutional Networks
We believe much better performance can be achieved by a more efficient 1-D cuDNN convolution. Unlike the LSTM, the GCNN can be parallelized both over sequences as well as across the tokens of each sequence, allowing the GCNN to have 20x higher responsiveness.

# 5.2. Gating Mechanisms

In this section, we compare the gated linear unit with other mechanisms as well as to models without gating. We consider the LSTM-style gating mechanism (GTU) $\tanh(X * W + b) \otimes \sigma(X * V + c)$
1612.08083#20
1612.08083#22
1612.08083
[ "1511.06909" ]
1612.08083#22
Language Modeling with Gated Convolutional Networks
of (Oord et al., 2016b) and networks that use regular ReLU or Tanh activations. Gating units add parameters, so for a fair comparison, we carefully cross-validate models with a comparable number of parameters.

Figure 3 (left) shows that GLU networks converge to a lower perplexity than the other approaches on WikiText-103. Similar to gated linear units, the ReLU has a linear path that lets the gradients easily pass through the active units. This translates to much faster convergence for both the ReLU and the GLU. On the other hand, neither Tanh nor GTU have this linear path, and thus suffer from the vanishing gradient problem. In the GTU, both the inputs as well as the gating units can cut the gradient when the units saturate.

Figure 4. Test perplexity as a function of context for Google Billion Word (left) and Wiki-103 (right). We observe that models with bigger context achieve better results but the results start diminishing quickly after a context of 20.

Comparing the GTU and Tanh models allows us to measure the effect of gating, since the Tanh model can be thought of as a GTU network with the sigmoid gating units removed. The results (Figure 3, left) show that the gating units make a vast difference and provide useful modeling capabilities, as there is a large difference in the performance between GTU and Tanh units. Similarly, while the ReLU unit is not an exact ablation of the gating units in the GLU, it can be seen as a simplification $\mathrm{ReLU}(X) = X \otimes (X > 0)$ where the gates become active depending on the sign of the input. Also in this case, GLU units lead to lower perplexity. In Figure 3 (right) we repeat the same experiment on the larger Google Billion Word dataset.
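The four activation schemes being compared can be written in a few lines. In this sketch, A and B stand in for the two convolution outputs X*W + b and X*V + c; random tensors are used purely for illustration.

```python
import torch

A, B = torch.randn(4, 128), torch.randn(4, 128)

glu  = A * torch.sigmoid(B)              # gated linear unit: linear path for A
gtu  = torch.tanh(A) * torch.sigmoid(B)  # LSTM-style gate: both paths can saturate
relu = torch.relu(A)                     # ~ A * (A > 0): gate tied to A's own sign
tanh = torch.tanh(A)                     # no gate, saturating non-linearity
```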
1612.08083#21
1612.08083#23
1612.08083
[ "1511.06909" ]
1612.08083#23
Language Modeling with Gated Convolutional Networks
We consider a fixed time budget of 100 hours because of the considerable training time required for this task. Similar to WikiText-103, the gated linear units achieve the best results on this problem. There is a gap of about 5 perplexity points between the GLU and ReLU, which is similar to the difference between the LSTM and RNN models measured by Jozefowicz et al. (2016) on the same dataset.

Figure 5. Learning curves on Google Billion Word for models with varying degrees of non-linearity.

# 5.3. Non-linear Modeling

The experiments so far have shown that the gated linear unit benefits from the linear path the unit provides compared to other non-linearities. Next, we compare networks with GLUs to purely linear networks and networks with bilinear layers in order to measure the impact of the non-linear path provided by the gates of the GLU. One motivation for this experiment is the success of linear models on many natural language processing tasks (Manning & Schütze, 1999). We consider deep linear convolutional networks where the layers lack the gating units of the GLU and take the form $h_l(X) = X * W + b$. Stacking several layers on top of each other is simply a factorization of the model which remains linear up to the softmax, at which point it becomes log-linear. Another variation of GLUs are bilinear layers (Mnih & Hinton, 2007), which take the form $h_l(X) = (X * W + b) \otimes (X * V + c)$.

Figure 5 shows that GLUs perform best, followed by bilinear layers and then linear layers. Bilinear layers improve over linear ones by more than 40 perplexity points, and the GLU improves another 20 perplexity points over the bilinear model. The linear model performs very poorly at perplexity 115, even compared to 67.6 of a Kneser-Ney 5-gram model, even though the former has access to more context.
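The three layer families can be sketched directly, with 1-D convolutions standing in for X*W + b and X*V + c; the sizes, kernel width, and padding below are illustrative only.

```python
import torch
import torch.nn as nn

conv_w = nn.Conv1d(128, 128, kernel_size=3, padding=2)   # X*W + b
conv_v = nn.Conv1d(128, 128, kernel_size=3, padding=2)   # X*V + c
X = torch.randn(2, 128, 50)
T = X.size(2)                                             # keep only causal outputs

linear   = conv_w(X)[:, :, :T]                                        # X*W + b
bilinear = conv_w(X)[:, :, :T] * conv_v(X)[:, :, :T]                  # (X*W+b) (x) (X*V+c)
glu      = conv_w(X)[:, :, :T] * torch.sigmoid(conv_v(X)[:, :, :T])   # gated linear unit
```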
1612.08083#22
1612.08083#24
1612.08083
[ "1511.06909" ]
1612.08083#24
Language Modeling with Gated Convolutional Networks
Surprisingly, the introduction of the bilinear units is enough to reach 61 perplexity on Google Billion Word, which surpasses both Kneser-Ney 5-gram models and the non-linear neural model of Ji et al. (2015).

# 5.4. Context Size

Figure 4 shows the impact of context size for the gated CNN. We tried different combinations of network depth and kernel widths for each context size and chose the best performing one for each size. Generally, larger contexts improve accuracy but returns diminish drastically with windows larger than 40 words, even for WikiText-103 where we may condition on an entire Wikipedia article. This means that the unlimited context offered by recurrent models is not strictly necessary for language modeling.
1612.08083#23
1612.08083#25
1612.08083
[ "1511.06909" ]
1612.08083#25
Language Modeling with Gated Convolutional Networks
Furthermore, this finding is also congruent with the fact that good performance with recurrent networks can be obtained by truncating gradients after only 40 timesteps using truncated backpropagation through time. Figure 4 also shows that WikiText-103 benefits much more from larger context sizes than Google Billion Word, as the performance degrades more sharply with smaller contexts. WikiText-103 provides much more context than Google Billion Word, where the average sentence size is 20. However, while the average size of the documents is close to 4,000 tokens, we find that strong performance can be achieved with a context size as low as 30 tokens.

# 6. Conclusion
1612.08083#24
1612.08083#26
1612.08083
[ "1511.06909" ]
1612.08083#26
Language Modeling with Gated Convolutional Networks
We introduce a convolutional neural network for language modeling with a novel gating mechanism. Compared to recurrent neural networks, our approach builds a hierarchical representation of the input words that makes it easier to capture long-range dependencies, similar in spirit to the tree-structured analysis of linguistic grammar formalisms. The same property eases learning since features are passed through a fixed number of layers and non-linearities, unlike for recurrent networks where the number of processing steps differs depending on the position of the word in the input. The results show that our gated convolutional network achieves a new state of the art on WikiText-103. On the Google Billion Word benchmark, we show competitive results can be achieved with significantly fewer resources.
1612.08083#25
1612.08083#27
1612.08083
[ "1511.06909" ]
1612.08083#27
Language Modeling with Gated Convolutional Networks
# 5.5. Training

In this section, we perform an ablation study of the impact of weight normalization and gradient clipping. We separately cross-validate the hyper-parameters of each configuration to make the comparison fair. Due to the high cost of each of these experiments, we only consider a single iteration over the training data. Figure 6 shows that both methods significantly speed up convergence. Weight normalization in particular improves the speed by over two times. This speedup is partly due to the ability to use much larger learning rates (1 instead of 0.01) than would otherwise be possible. Both clipping and weight normalization add computational overhead, but it is minor compared to the large gains in convergence speed.
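A minimal sketch of how the two techniques enter a training step follows. The layer size, learning rate, momentum, and clipping threshold are placeholders, not the cross-validated values from the ablation.

```python
import torch
import torch.nn as nn

layer = nn.utils.weight_norm(nn.Linear(512, 512))      # reparameterize W = g * v/||v||
opt = torch.optim.SGD(layer.parameters(), lr=1.0, momentum=0.99)

x, target = torch.randn(32, 512), torch.randn(32, 512)
loss = ((layer(x) - target) ** 2).mean()
opt.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(layer.parameters(), max_norm=0.1)   # gradient clipping
opt.step()
```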
1612.08083#26
1612.08083#28
1612.08083
[ "1511.06909" ]
1612.08083#28
Language Modeling with Gated Convolutional Networks
# Acknowledgments

We would like to thank Ben Graham, Jonas Gehring, Edouard Grave, Armand Joulin and Ronan Collobert for helpful discussions.

# References

Bengio, Yoshua, Ducharme, Réjean, Vincent, Pascal, and Jauvin, Christian. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003.

Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.

Chen, Stanley F and Goodman, Joshua. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pp. 310–318. Association for Computational Linguistics, 1996.
1612.08083#27
1612.08083#29
1612.08083
[ "1511.06909" ]
1612.08083#29
Language Modeling with Gated Convolutional Networks
Chen, Wenlin, Grangier, David, and Auli, Michael. Strategies for training large vocabulary neural language models. CoRR, abs/1512.04906, 2016.

Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clement. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop, 2011. URL http://torch.ch.

Dauphin, Yann N and Grangier, David. Predicting distributions with linearizing belief networks. arXiv preprint arXiv:1511.05622, 2015.

Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
1612.08083#28
1612.08083#30
1612.08083
[ "1511.06909" ]
1612.08083#30
Language Modeling with Gated Convolutional Networks
Figure 6. Effect of weight normalization and gradient clipping on Google Billion Word (curves: without clipping, without weight normalization, and with both; x-axis: training updates).

Grave, E., Joulin, A., Cissé, M., Grangier, D., and Jégou, H. Efficient softmax approximation for GPUs. ArXiv e-prints, September 2016a.

Grave, E., Joulin, A., and Usunier, N. Improving Neural Language Models with a Continuous Cache. ArXiv e-prints, December 2016b.

Gutmann, Michael and Hyvärinen, Aapo. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010.

Oord, Aaron van den, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015a.

Oord, Aaron van den, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016b.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015b.

Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pp. 1310–1318, 2013.

Hochreiter, Sepp and Schmidhuber, Jürgen.
1612.08083#29
1612.08083#31
1612.08083
[ "1511.06909" ]
1612.08083#31
Language Modeling with Gated Convolutional Networks
Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Ji, Shihao, Vishwanathan, S. V. N., Satish, Nadathur, Anderson, Michael J, and Dubey, Pradeep.
1612.08083#30
1612.08083#32
1612.08083
[ "1511.06909" ]
1612.08083#32
Language Modeling with Gated Convolutional Networks
Blackout: Speeding up recurrent neural network language models with very large vocabularies. arXiv preprint arXiv:1511.06909, 2015.

Shazeer, Noam, Pelemans, Joris, and Chelba, Ciprian. Skip-gram language modeling using sparse non-negative matrix probability estimation. arXiv preprint arXiv:1412.1454, 2014.

Jozefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Shazeer, Noam, Mirhoseini, Azalia, Maziarz, Krzysztof, Davis, Andy, Le, Quoc V., Hinton, Geoffrey E., and Dean, Jeff. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. CoRR, abs/1701.06538, 2017. URL http://arxiv.org/abs/1701.06538.

Kalchbrenner, Nal, Espeholt, Lasse, Simonyan, Karen, van den Oord, Aaron, Graves, Alex, and Kavukcuoglu, Koray. Neural Machine Translation in Linear Time. arXiv, 2016.
1612.08083#31
1612.08083#33
1612.08083
[ "1511.06909" ]
1612.08083#33
Language Modeling with Gated Convolutional Networks
Steedman, Mark. The syntactic process. 2002.

Kneser, Reinhard and Ney, Hermann. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995 (ICASSP-95), 1995 International Conference on, volume 1, pp. 181–184. IEEE, 1995.

Koehn, Philipp. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition, 2010. ISBN 0521874157, 9780521874151.

Sutskever, Ilya, Martens, James, Dahl, George E, and Hinton, Geoffrey E.
1612.08083#32
1612.08083#34
1612.08083
[ "1511.06909" ]
1612.08083#34
Language Modeling with Gated Convolutional Networks
On the importance of initialization and momentum in deep learning. 2013.

Wang, Mingxuan, Lu, Zhengdong, Li, Hang, Jiang, Wenbin, and Liu, Qun. genCNN: A convolutional architecture for word sequence prediction. CoRR, abs/1503.05034, 2015. URL http://arxiv.org/abs/1503.05034.

Kuchaiev, Oleksii and Ginsburg, Boris. Factorization tricks for LSTM networks. CoRR, abs/1703.10722, 2017. URL http://arxiv.org/abs/1703.10722.

Yu, Dong and Deng, Li. Automatic Speech Recognition: A Deep Learning Approach.
1612.08083#33
1612.08083#35
1612.08083
[ "1511.06909" ]
1612.08083#35
Language Modeling with Gated Convolutional Networks
Springer Publishing Company, Incorporated, 2014. ISBN 1447157788, 9781447157786.

LeCun, Yann and Bengio, Yoshua. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10):1995, 1995.

Manning, Christopher D and Schütze, Hinrich. Foundations of Statistical Natural Language Processing, 1999.

Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer Sentinel Mixture Models. ArXiv e-prints, September 2016.

Mikolov, Tomáš, Karafiát, Martin, Burget, Lukáš, Černocký, Jan, and Khudanpur, Sanjeev.
1612.08083#34
1612.08083#36
1612.08083
[ "1511.06909" ]
1612.08083#36
Language Modeling with Gated Convolutional Networks
Recurrent Neural Network based Language Model. In Proc. of INTERSPEECH, pp. 1045–1048, 2010.

Mnih, Andriy and Hinton, Geoffrey. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine Learning, pp. 641–648. ACM, 2007.

Morin, Frederic and Bengio, Yoshua. Hierarchical probabilistic neural network language model. In AISTATS, volume 5, pp. 246–252. Citeseer, 2005.
1612.08083#35
1612.08083
[ "1511.06909" ]
1612.07837#0
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Published as a conference paper at ICLR 2017

SAMPLERNN: AN UNCONDITIONAL END-TO-END NEURAL AUDIO GENERATION MODEL

Soroush Mehri (University of Montreal), Kundan Kumar (IIT Kanpur), Ishaan Gulrajani (University of Montreal), Shubham Jain (IIT Kanpur), Jose Sotelo (University of Montreal), Aaron Courville (University of Montreal, CIFAR Fellow), Yoshua Bengio (University of Montreal, CIFAR Senior Fellow)

# ABSTRACT
1612.07837#1
1612.07837
[ "1602.07868" ]
1612.07837#1
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variation in the temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
1612.07837#0
1612.07837#2
1612.07837
[ "1602.07868" ]
1612.07837#2
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
# 1 INTRODUCTION

Audio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16kHz, on average we will have 6,000 samples per word generated.¹

Traditionally, the high dimensionality of the raw audio signal is dealt with by first compressing it into spectral or hand-engineered features and defining the generative model over these features. However, when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in complicated signal processing pipelines that are hard to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems.

In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are well suited as they have been designed for, and are proven solutions to, such sequential modeling tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practice it is a known problem that these models do not scale well to the high temporal resolution found when generating acoustic signals one sample at a time, e.g., 16,000 times per second. This is one of the reasons that Oord et al. (2016) profits from other neural modules, such as the one presented by Yu & Koltun (2015), to show extremely good performance.

In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented while keeping all the computations tractable.² Since our model has different modules operating at different clock-rates (which is in contrast to WaveNet), we have the flexibility to allocate the amount of computational resources used to model different levels of abstraction. In particular, we can potentially allocate very limited resources to the module responsible for sample level alignments
1612.07837#1
1612.07837#3
1612.07837
[ "1602.07868" ]
1612.07837#3
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
¹ Statistics based on the average speaking rate of a set of TED talk speakers: http://sixminutes.dlugan.com/speaking-rate/
² Code: https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples: https://soundcloud.com/samplernn/sets

operating at a clock-rate equivalent to the sample rate of the audio, while allocating more resources to modeling dependencies which vary very slowly in audio, for example the identity of the phoneme being spoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies at multiple levels of abstraction. Hence, our contribution is threefold:
1612.07837#2
1612.07837#4
1612.07837
[ "1602.07868" ]
1612.07837#4
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
1. We present a novel method that utilizes RNNs at different scales to model longer-term dependencies in audio waveforms while training on short sequences, which results in memory efficiency during training.

2. We extensively explore and compare variants of models achieving the above effect.

3. We study and empirically evaluate the impact of different components of our model on three audio datasets. Human evaluation has also been conducted to test these generative models.

# 2 SAMPLERNN MODEL

In this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = {x1, x2, . . . , xT} (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:
1612.07837#3
1612.07837#5
1612.07837
[ "1602.07868" ]
1612.07837#5
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
$$p(X) = \prod_{i=0}^{T-1} p(x_{i+1} \mid x_1, \ldots, x_i) \qquad (1)$$

RNNs are commonly used to model sequential data, which can be formulated as:

$$h_t = \mathcal{H}(h_{t-1}, x_{i=t}) \qquad (2)$$
$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(h_t)) \qquad (3)$$

with $\mathcal{H}$ being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014), Long Short Term Memory units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep variations (Section 3). However, raw audio signals are challenging to model because they contain structure at very different scales: correlations exist between neighboring samples as well as between samples thousands of timesteps apart.

SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher module operates on an increasingly longer timescale and a lower temporal resolution. Each module conditions the module below it, with the lowest module outputting sample-level predictions. The entire hierarchy is trained jointly end-to-end by backpropagation.

2.1 FRAME-LEVEL MODULES

Rather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of $FS^{(k)}$ ("frame size") samples at the $k$-th level up in the hierarchy at a time (frames denoted by $f^{(k)}$). Each frame-level module is a deep RNN which summarizes the history of its inputs into a conditioning vector for the next module downward. The variable number of frames we condition upon up to timestep $t-1$ is expressed by a fixed-length hidden state or memory $h_t^{(k)}$, where $t$ is related to the clock rate at that tier. The RNN makes a memory update at timestep $t$ as a function of the previous memory $h_{t-1}^{(k)}$ and an input $inp_t^{(k)}$. This input for the top tier $k = K$ is simply the input frame. For intermediate tiers ($1 < k < K$) this input is a linear combination of the conditioning vector from the higher tier and the current input frame.
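As a concrete reference point, the baseline formulation of Eqs. (2)-(3) can be sketched as follows; this is the flat sample-level RNN that SampleRNN's hierarchy improves upon, and all sizes here are illustrative.

```python
import torch
import torch.nn as nn

q, hidden = 256, 1024
cell = nn.GRUCell(input_size=q, hidden_size=hidden)         # H in Eq. (2)
mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, q))                    # MLP in Eq. (3)

x = torch.randint(0, q, (8, 100))                            # batch of quantized audio
h = torch.zeros(8, hidden)
log_probs = []
for t in range(x.size(1)):
    h = cell(nn.functional.one_hot(x[:, t], q).float(), h)   # h_t = H(h_{t-1}, x_t)
    log_probs.append(torch.log_softmax(mlp(h), dim=-1))      # p(x_{t+1} | x_1..x_t)
```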
1612.07837#4
1612.07837#6
1612.07837
[ "1602.07868" ]
1612.07837#6
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
See Eqs. 4–5. Because different modules operate at different temporal resolutions, we need to upsample each vector $c$ at the output of a module into a series of $r^{(k)}$ vectors (where $r^{(k)}$ is the ratio between the temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq. 6). We do this with a set of $r^{(k)}$ separate linear projections.

Figure 1: Snapshot of the unrolled model at timestep i with K = 3 tiers. As a simplification only one RNN and up-sampling ratio r = 4 is used for all tiers.

Here we are formalizing the frame-level module in tier $k$. Note that the following equations are exclusive to tier $k$ and timestep $t$ for that specific tier. To increase readability, unless necessary, the superscript $(k)$ is not shown for $t$, $inp^{(k)}$, and $W^{(k)}$.

$$inp_t = \begin{cases} W_x f_t^{(k)} + c_t^{(k+1)}; & 1 < k < K \\ f_t^{(k=K)}; & k = K \end{cases} \qquad (4)$$

$$h_t = \mathcal{H}(h_{t-1}, inp_t) \qquad (5)$$

$$c_{(t-1) \cdot r + j}^{(k)} = W_j h_t; \quad 1 \leq j \leq r \qquad (6)$$
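The equations above can be sketched as a small module. This is illustrative only: the $r$ separate projections of Eq. (6) are implemented here as one wide linear layer (which is equivalent), and all dimensions are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class FrameLevelTier(nn.Module):
    def __init__(self, frame_size=4, dim=1024, r=4):
        super().__init__()
        self.input_proj = nn.Linear(frame_size, dim)    # W_x f_t          (Eq. 4)
        self.rnn = nn.GRU(dim, dim, batch_first=True)   # h_t = H(h_{t-1}, inp_t) (Eq. 5)
        self.upsample = nn.Linear(dim, r * dim)         # r linear projections    (Eq. 6)
        self.dim = dim

    def forward(self, frames, cond=None, h0=None):      # frames: (B, T, frame_size)
        inp = self.input_proj(frames)
        if cond is not None:                            # conditioning from the tier above
            inp = inp + cond
        h, hT = self.rnn(inp, h0)
        c = self.upsample(h)                            # (B, T, r*dim)
        return c.view(frames.size(0), -1, self.dim), hT # (B, T*r, dim) for the tier below

tier = FrameLevelTier()
c, _ = tier(torch.randn(2, 10, 4))                      # -> conditioning of shape (2, 40, 1024)
```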
1612.07837#5
1612.07837#7
1612.07837
[ "1602.07868" ]
1612.07837#7
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Our approach of upsampling with $r^{(k)}$ linear projections is exactly equivalent to upsampling by adding zeros and then applying a linear convolution. This is sometimes called "perforated" upsampling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.

2.2 SAMPLE-LEVEL MODULE

The lowest module (tier $k = 1$; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution over a sample $x_{i+1}$, conditioned on the $FS^{(1)}$ preceding samples as well as a vector $c^{(k=2)}$ from the next higher module which encodes information about the sequence prior to that frame. As $FS^{(1)}$ is usually a small value and correlations in nearby samples are easy to model with a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than an RNN, which slightly speeds up the training. Assuming $e_i$ represents $x_i$ after passing through the embedding layer (Section 2.2.1), the conditional distribution in Eq. 1 can be achieved by the following; for further clarity, two consecutive sample-level frames are shown. In addition, $W_x$ in Eq. 8 is simply used to linearly combine a frame and the conditioning vector from above.

$$f_{i-1}^{(1)} = \mathrm{flatten}([e_{i-FS^{(1)}}, \ldots, e_{i-1}])$$
$$f_i^{(1)} = \mathrm{flatten}([e_{i-FS^{(1)}+1}, \ldots, e_i]) \qquad (7)$$

$$inp_i^{(1)} = W_x^{(1)} f_i^{(1)} + c_i^{(2)} \qquad (8)$$

$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(inp_i^{(1)})) \qquad (9)$$

We use a Softmax because we found that better results were obtained by discretizing the audio signals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution, rather than using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing each window of $FS^{(1)}$ samples and predicting the next sample. At generation time, the MLP is run repeatedly to generate one sample at a time.
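Eqs. (7)-(9) translate into a short sketch; the dimensions below are illustrative, and the 256-way softmax corresponds to the quantization discussed in Section 2.2.1.

```python
import torch
import torch.nn as nn

q, emb, fs, dim = 256, 256, 2, 1024
embed = nn.Embedding(q, emb)
combine = nn.Linear(fs * emb, dim)                   # W_x^(1) in Eq. (8)
mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                    nn.Linear(dim, dim), nn.ReLU(),
                    nn.Linear(dim, q))               # MLP in Eq. (9)

prev = torch.randint(0, q, (8, fs))                  # the last FS(1) quantized samples
c = torch.randn(8, dim)                              # conditioning c^(2) from the tier above
f = embed(prev).view(8, -1)                          # flatten([e_{i-FS+1}, ..., e_i])
p_next = torch.softmax(mlp(combine(f) + c), dim=-1)  # p(x_{i+1} | x_1, ..., x_i)
```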
1612.07837#6
1612.07837#8
1612.07837
[ "1602.07868" ]
1612.07837#8
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Table 1 shows a considerable gap between the baseline RNN model and this model, suggesting that the proposed hierarchically structured architecture of SampleRNN makes a big difference.

2.2.1 OUTPUT QUANTIZATION

The sample-level module models its output as a q-way discrete distribution over possible quantized values of $x_i$ (that is, the output layer of the MLP is a q-way Softmax). To demonstrate the importance of a discrete output distribution, we apply the same architecture to real-valued data by replacing the q-way Softmax with a Gaussian mixture model (GMM) output distribution. Table 2 shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable from random noise.

In this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8. Unintuitively, we realized that even linearly decreasing the bit depth (the resolution of each audio sample) from 16 to 8 can ease the optimization procedure, while generated samples still have reasonable quality and are artifact-free.

In addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see Table 4). The embedding step maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.

2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS

To demonstrate the importance of a sample-level autoregressive module, we try replacing it with "Multi-Softmax" (see Table 4), where the prediction of each sample $x_i$ depends only on the conditioning vector $c$ from Eq. 9. In this configuration, the model outputs an entire frame of $FS^{(1)}$ samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores significantly worse in terms of log-likelihood and fails to generate convincing samples.
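The linear quantization described in Section 2.2.1 above amounts to mapping each sample to one of q = 256 evenly spaced bins. A sketch, assuming waveforms already scaled to [-1, 1]:

```python
import numpy as np

def quantize(x: np.ndarray, q: int = 256) -> np.ndarray:
    # Map real-valued samples in [-1, 1] to integer bin indices in [0, q-1].
    bins = np.clip(((x + 1.0) / 2.0 * (q - 1)).round(), 0, q - 1)
    return bins.astype(np.int64)

def dequantize(bins: np.ndarray, q: int = 256) -> np.ndarray:
    # Map bin indices back to real values in [-1, 1].
    return bins.astype(np.float32) / (q - 1) * 2.0 - 1.0

samples = np.sin(np.linspace(0, 2 * np.pi, 16000))     # one second of a test tone
error = np.abs(dequantize(quantize(samples)) - samples).max()
assert error <= 1.0 / 255 + 1e-9                        # at most half a quantization step
```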
1612.07837#7
1612.07837#9
1612.07837
[ "1602.07868" ]
1612.07837#9
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
This suggests that modeling the joint distribution of the acoustic samples inside each frame is very important in order to obtain good acoustic generation. We found this to be true even when the frame size is reduced, with best results always obtained with a frame size of 1, i.e., generating only one acoustic sample at a time.

# 2.3 TRUNCATED BPTT

Training recurrent neural networks on long sequences can be very computationally expensive. Oord et al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent connections. However, when they can be trained efficiently, recurrent networks have been shown to be very powerful and expressive sequence models. We enable efficient training of our recurrent model using truncated backpropagation through time, splitting each sequence into short subsequences and propagating gradients only to the beginning of each subsequence. We experiment with different subsequence lengths and demonstrate that we are able to train our networks, which model very long-term dependencies, despite backpropagating through relatively short subsequences.

Table 3 shows that increasing the subsequence length substantially improves performance, alongside increased train-time memory usage and convergence time. Yet it is noteworthy that our best models have been trained on subsequences of length 512, which corresponds to 32 milliseconds, a small fraction of the length of a single phoneme of human speech, while generated samples exhibit longer word-like structures. Despite this, the generative model can mimic the existing long-term structure of the data, which results in more natural and coherent samples that are preferred by human listeners. (More on this in Sections 3.2–3.3.) This is due to the fast updates from TBPTT and the specialized frame-level modules (Section 2.1), with top tiers designed to model a lower resolution of the signal while leaving the process of filling in the details to lower tiers.
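A sketch of the truncated BPTT loop described above, with a stand-in GRU in place of SampleRNN: each long sequence is processed in subsequences of length 512, the hidden state is carried over, and detaching it cuts the gradient at the boundary. The model, readout, and data here are placeholders for illustration only.

```python
import torch
import torch.nn as nn

model = nn.GRU(input_size=1, hidden_size=64, batch_first=True)
readout = nn.Linear(64, 256)
opt = torch.optim.Adam(list(model.parameters()) + list(readout.parameters()))

T, sub = 8 * 16000, 512                              # 8 seconds of 16 kHz audio, 512-step splits
wave = torch.randint(0, 256, (4, T + 1))             # fake quantized waveforms
h = None
for start in range(0, T, sub):
    x = (wave[:, start:start + sub, None].float() / 255) * 2 - 1   # inputs in [-1, 1]
    y = wave[:, start + 1:start + sub + 1]                          # next-sample targets
    out, h = model(x, h)
    loss = nn.functional.cross_entropy(readout(out).reshape(-1, 256), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    h = h.detach()                                   # truncate the gradient here
```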
1612.07837#8
1612.07837#10
1612.07837
[ "1602.07868" ]
1612.07837#10
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
# 3 EXPERIMENTS AND RESULTS

In this section we introduce the three datasets which have been chosen to evaluate the proposed architecture for modeling raw acoustic sequences. The description of each dataset and its preprocessing is as follows:

Blizzard, a dataset presented by Prahallad et al. (2013) for the speech synthesis task, contains 315 hours of a single female voice actor in English; however, for our experiments we use only 20.5 hours. The training/validation/test split is 86%-7%-7%.

Onomatopoeia³, a relatively small dataset with 6,738 sequences adding up to 3.5 hours, consists of human vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. The diversity of sound types and the fact that these sounds were recorded from 51 actors and many categories makes it a challenging task. To add to that, this data is extremely unbalanced. The training/validation/test split is 92%-4%-4%.
1612.07837#9
1612.07837#11
1612.07837
[ "1602.07868" ]
1612.07837#11
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Music, the collection of all 32 of Beethoven's piano sonatas publicly available on https://archive.org/, amounting to 10 hours of non-vocal audio. The training/validation/test split is 88%-6%-6%.

See Fig. 2 for a visual demonstration of examples from the datasets and generated samples. For all the datasets we use a 16 kHz sample rate and 16 bit depth. For the Blizzard and Music datasets, preprocessing simply amounts to chunking the long audio files into 8-second-long sequences on which we perform truncated backpropagation through time. Each sequence in the Onomatopoeia dataset is a few seconds long, ranging from 1 to 11 seconds. To train the models on this dataset, zero-padding has been applied to make all the sequences in a mini-batch have the same length, and the corresponding cost values (for the predictions over the added 0s) are ignored when computing the gradients.

We particularly explored two gated variants of RNNs: GRUs and LSTMs. For the case of LSTMs, the forget gate bias is initialized with a large positive value of 3, as recommended by Zaremba (2015) and Gers (2001), which has been shown to be beneficial for learning long-term dependencies.

As for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4 components), normalization is applied per audio sample with the global mean and standard deviation obtained from the train split. For most of our experiments where the model demands discrete input, binning was applied per audio sample.

All the models have been trained with teacher forcing and stochastic gradient descent (mini-batch size 128) to minimize the negative log-likelihood (NLL) in bits per dimension (per audio sample). Gradients were hard-clipped to remain in the [-1, 1] range. Update rules from the Adam optimizer (Kingma & Ba, 2014) (β1 = 0.9, β2 = 0.999, and ε = 1e-8) with an initial learning rate of 0.001 were used to adjust the parameters. For training each model, random search over hyper-parameter values (Bergstra & Bengio, 2012) was conducted.
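The optimization settings described above can be sketched as follows; the model is a placeholder GRU, and the dummy loss is only there to make the snippet self-contained.

```python
import torch
import torch.nn as nn

model = nn.GRU(input_size=256, hidden_size=1024)
opt = torch.optim.Adam(model.parameters(), lr=0.001,
                       betas=(0.9, 0.999), eps=1e-8)   # Adam hyper-parameters as reported

def train_step(loss):
    opt.zero_grad()
    loss.backward()
    for p in model.parameters():
        if p.grad is not None:
            p.grad.clamp_(-1.0, 1.0)                   # hard-clip each gradient entry to [-1, 1]
    opt.step()

x = torch.randn(100, 8, 256)                           # (time, batch, features)
out, _ = model(x)
train_step(out.pow(2).mean())                          # dummy loss for illustration
```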
1612.07837#10
1612.07837#12
1612.07837
[ "1602.07868" ]
1612.07837#12
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
The initial RNN state of all the RNN-based models was always learnable. Weight normalization (Salimans & Kingma, 2016) has been used for all the linear layers in the model (except for the embedding layer) to accelerate the training procedure. The size of the embedding layer was 256, initialized from a standard normal distribution. Orthogonal weight matrices were used for hidden-to-hidden connections, and other weight matrices were initialized similarly to He et al. (2015). In the final model, we found the GRU to work best (slightly better than the LSTM). 1024 was the number of hidden units for all GRUs (1 layer per tier for the 3-tier model and 3 layers for the 2-tier model) and MLPs (3 fully connected layers with ReLU activation, with output dimension being 1024 for the first two layers and 256 for the final layer before the softmax).
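Collected as a plain configuration sketch (the field names are ours, not from the released code; the frame sizes are the values reported in the next paragraph):

```python
# Hyper-parameters of the reported best 3-tier configuration, gathered for reference.
best_3tier = {
    "rnn": "GRU",
    "rnn_hidden_units": 1024,                 # 1 GRU layer per frame-level tier
    "mlp_layer_sizes": [1024, 1024, 256],     # ReLU layers before the 256-way softmax
    "embedding_size": 256,
    "quantization_levels": 256,               # 8-bit linear quantization
    "tier_frame_sizes": {1: 2, 2: 2, 3: 8},   # FS(1) = FS(2) = 2, FS(3) = 8
}
```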
1612.07837#11
1612.07837#13
1612.07837
[ "1602.07868" ]
1612.07837#13
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Also, $FS^{(1)} = FS^{(2)} = 2$ and $FS^{(3)} = 8$ were found to result in the lowest NLL.

3.1 WAVENET RE-IMPLEMENTATION

We implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we would have liked to replicate their model exactly, but owing to missing details of architecture and hyper-parameters, as well as the limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds,
1612.07837#12
1612.07837#14
1612.07837
[ "1602.07868" ]
1612.07837#14
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
³ Courtesy of Ubisoft

Figure 2: Examples from the datasets compared to samples from our models (columns: Blizzard, Onomatopoeia, Music). In the first 3 rows, 2 seconds of audio are shown. In the bottom 3 rows, 100 milliseconds of audio are shown. Rows 1 and 4 are ground truth, from which one can see how the datasets look different and have complex structure at low resolution, which the frame-level component of SampleRNN is designed to capture. Samples also to some extent mimic the same global structure. At the same time, zoomed-in samples of our model show that it can perfectly resemble the high-resolution structure present in the data as well.

Table 1: Test NLL in bits for the three presented datasets.

Model                 Blizzard   Onomatopoeia   Music
RNN (Eq. 2)           1.434      2.034          1.410
WaveNet (re-impl.)    1.480      2.285          1.464
SampleRNN (2-tier)    1.392      2.026          1.076
SampleRNN (3-tier)    1.387      1.990          1.159

Table 2: Average NLL on Blizzard test set for real-valued models.

Model                     Average Test NLL
RNN-GMM                   -2.415
SampleRNN-GMM (2-tier)    -2.782
1612.07837#13
1612.07837#15
1612.07837
[ "1602.07868" ]