To handle parallel paths, default-valued nodes and nodes with multiple data parents, we need to keep track of an example's execution status (which nodes are activated by this example) and output status (which nodes have output for this example). An example's output status is different from its execution status if some nodes are not activated but have default values. For runtime efficiency, we implement the tracking of examples at the mini-batch level. That is, we perform forward and backward passes for a mini-batch of examples as a regular DNN does. Each mini-batch consists of several mini-bags of images.
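A minimal sketch of this bookkeeping, assuming a (B,)-shaped boolean mask per node (the function and tensor names are ours, not the paper's; the actual implementation is in Torch and operates on mini-batches of mini-bags):

```python
import torch

def output_status(exec_mask: torch.Tensor, has_default: bool) -> torch.Tensor:
    """Derive a node's output-status mask from its execution-status mask.

    exec_mask: (B,) bool tensor, True where a mini-batch example activated this
    node.  A node with a default value emits an output for every example, so
    its output status can differ from its execution status.
    """
    if has_default:
        return torch.ones_like(exec_mask)   # every example carries an output
    return exec_mask.clone()                # only activated examples do

# Example: four images in a mini-batch, two of which route through this node.
exec_mask = torch.tensor([True, False, True, False])
assert output_status(exec_mask, has_default=True).all()
assert torch.equal(output_status(exec_mask, has_default=False), exec_mask)
```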
# 6. Conclusion
We have introduced Dynamic Deep Neural Networks (D2NN), a new type of feed-forward deep neural network that allows selective execution. Extensive experiments have demonstrated that D2NNs are flexible and effective for optimizing accuracy-efficiency trade-offs.
We describe the implementation of the D2NN learning procedure in two steps. First, the preprocessing step: when a user-defined D2NN model is fed into our framework, we first perform a breadth-first search to obtain a DAG ordering of the nodes while performing structural error checks, constructing the data and control relationships between nodes, and calculating the cost (number of multiplications) of each node.
After the preprocessing, the training step is similar to that of a regular DNN: a forward pass and a backward pass. All nodes are visited according to a topological ordering in the forward pass and the reverse ordering in the backward pass.
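A minimal sketch of these two steps, assuming node objects that expose `forward`, `backward` and a cost query (the method names are ours, not the paper's):

```python
from collections import deque

def preprocess(nodes, edges):
    """Derive a topological (DAG) order of the user-defined graph and attach a
    cost to every node.  `nodes` maps name -> node object, `edges` is a list of
    (parent, child) name pairs; `num_multiplications()` is an assumed method."""
    children = {n: [] for n in nodes}
    indegree = {n: 0 for n in nodes}
    for parent, child in edges:
        children[parent].append(child)
        indegree[child] += 1

    # Breadth-first (Kahn's) traversal; failing to visit every node signals a
    # structural error such as a cycle.
    order, queue = [], deque(n for n in nodes if indegree[n] == 0)
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in children[n]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    if len(order) != len(nodes):
        raise ValueError("the node graph is not a DAG")

    costs = {n: nodes[n].num_multiplications() for n in order}
    return order, costs

def train_step(order, nodes, batch):
    """Visit nodes in topological order for the forward pass, reverse for backward."""
    for name in order:
        nodes[name].forward(batch)
    for name in reversed(order):
        nodes[name].backward(batch)
```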
For each function node, the forward pass has three steps: fetch the inputs, run the forward pass inside the node, and send data or control signals to the children nodes. When dealing with multiple data inputs and multiple control signals, the D2NN filters out examples with more than one null input or with all-negative control signals. When a default value has been set for a node, all examples have to send out data: if the node is not activated for a particular example, the output takes the default value. The backward pass has similar logic: fetch gradients from the children, perform the backward pass inside the node, and send gradients out to the parents. It is worth noting that when a default value is used in a node, gradients can be blocked by this node because it is not actually executed.
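The per-example logic described above can be sketched as follows (shown per example for clarity; the real implementation applies it to whole mini-batches, and the helper and argument names are ours):

```python
def forward_function_node(node, data_inputs, control_signals, default=None):
    """Per-example sketch of the filtering and default-value rules above.

    data_inputs:     values from the data parents (None marks a missing input)
    control_signals: booleans from the control parents (False = "do not run")
    """
    missing = sum(x is None for x in data_inputs)
    activated = missing <= 1 and any(control_signals)

    if activated:
        output = node.forward([x for x in data_inputs if x is not None])
        blocks_gradient = False
    elif default is not None:
        # Not executed, but the children still receive a value; since the node
        # did not actually run, no gradient can flow through it for this example.
        output = default
        blocks_gradient = True
    else:
        output = None            # the example is filtered out downstream
        blocks_gradient = True
    return output, blocks_gradient
```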
# B. ILSVRC-10 Semantic Hierarchy
The ILSVRC-10 dataset is a subset of the ILSVRC-65 dataset [9]. In our ILSVRC-10, there are 10 classes organized into a 3-layer hierarchy: 2 superclasses, 5 coarse classes, and 10 leaf classes, as in Fig. 7. Each class has 500 training images, 50 validation images, and 150 test images.
# C. Configurations
High-Low Capacity D2NN. The high-low capacity D2NN consists of a single control node (Q1) and three regular nodes (N1, N2, N3), as illustrated in Fig. 3a).
• Node N1: a convolutional layer with a 3×3 filter size, 8 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2.
• Node N2: a convolutional layer with a 3×3 filter size and 16 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 512 neurons, followed by another fully connected layer with the 2-class output.
• Node N3: three 3×3 max-pooling layers, each with a stride of 2, followed by two fully connected layers with 32 neurons and the 2-class output.
• Node Q1: a convolutional layer with a 3×3 filter size and 2 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 128 neurons, followed by another fully connected layer with the 2-action output.
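As an illustration, nodes N1 and Q1 above could be written as follows in PyTorch-style code (the paper's implementation uses Torch; the input channel count and the flattened size `fc_in` depend on the dataset resolution and on where Q1 sits in the graph, so they are assumptions here):

```python
import torch.nn as nn

# Node N1: 3x3 convolution (8 filters, stride 2) followed by 3x3 max-pooling
# with stride 2.  Padding 1 and the trailing ReLU follow the blanket convention
# stated at the end of this appendix; RGB input (3 channels) is an assumption.
n1 = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
)

# Control node Q1: 3x3 convolution (2 filters) + 3x3 max-pooling (stride 2),
# then fully connected layers of 128 and 2 units (one logit per action).
def make_q1(in_channels, fc_in):
    return nn.Sequential(
        nn.Conv2d(in_channels, 2, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Flatten(),
        nn.Linear(fc_in, 128),
        nn.ReLU(),
        nn.Linear(128, 2),
    )
```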
Cascade D2NN. The cascade D2NN consists of a sequence of seven regular nodes (N1 to N7) and three control nodes (Q1 to Q3), as in Fig. 3b).
• Node N1: a convolutional layer with a 3×3 filter size, 2 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2.
• Node N2: three 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output.
• Node N3: two convolutional layers with 3×3 filter sizes and 2 and 8 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2.
• Node N4: two 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output.
• Node N5: two convolutional layers with 3×3 filter sizes and 4 and 16 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2.
• Node N6: two 3×3 max-pooling layers with strides of 2. The output is reshaped and fed into a fully connected layer with the 2-class output.
• Node N7: five convolutional layers, all with 3×3 filter sizes and 2, 8, 32, 32 and 64 filters respectively, each followed by a 3×3 max-pooling layer with a stride of 2 except for the third and fifth layers. The output is reshaped and fed into a fully connected layer with 512 neurons, followed by another fully connected layer with the 2-class output.
• Nodes Q1, Q2, Q3: the input is reshaped and fed into a fully connected layer with the 2-action output.
Chain D2NN. The Chain D2NN is shaped as a chain, where each link consists of a control node selecting between two regular nodes. In the experiments on the LFW-B dataset, we use a 3-stage Chain D2NN, as in Fig. 3c).
• Node N1: a convolutional layer with a 3×3 filter size, 2 filters and a stride of 2, followed by a 3×3 max-pooling layer with a stride of 2.
• Node N2: a convolutional layer with a 1×1 filter size and 16 filters.
• Node N3: a convolutional layer with a 3×3 filter size and 16 filters.
• Node N4: a 3×3 max-pooling layer with a stride of 2.
• Node N5: a convolutional layer with a 1×1 filter size and 32 filters.
• Node N6: two convolutional layers, both with 3×3 filter sizes and 32 filters each.
Figure 7. The semantic class hierarchy of the ILSVRC-10 dataset.
• Node N7: a 3×3 max-pooling layer with a stride of 2.
• Node N8: a convolutional layer with a 1×1 filter size and 32 filters, followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into a fully connected layer with 256 neurons.
• Node N9: a convolutional layer with a 3×3 filter size and 64 filters. The output is reshaped and fed into a fully connected layer with 256 neurons.
• Node Q1 and Q2: two convolutional layers with 5×5 and 3×3 filter sizes and 16 and 32 filters respectively (the former with a 2×2 padding), each followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into three fully connected layers with 1024 neurons, 1024 neurons and the 2-action output respectively.
• Node N10: a fully connected layer with the 2-class output.
• Node Q1: a convolutional layer with a 3×3 filter size and 8 filters, with a 3×3 max-pooling layer with a stride of 2 before it and a 3×3 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively.
• Node Q3–Q7: two convolutional layers with 5×5 and 3×3 filter sizes and 16 and 32 filters respectively (the former with a 2×2 padding), each followed by a 3×3 max-pooling layer with a stride of 2. The output is reshaped and fed into three fully connected layers with 1024 neurons, 1024 neurons and the 2-action output respectively.
• Node Q2: a 3×3 max-pooling layer with a stride of 2, followed by a convolutional layer with a 3×3 filter size and 4 filters. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively.
Comparison with Dynamic Capacity Networks. We train a chain D2NN of length 4, similar to Fig. 3c).
• Node N1: a convolutional layer with a 3×3 filter size and 24 filters.
• Node Q3: a convolutional layer with a 3×3 filter size and 2 filters. The output is reshaped and fed into two fully connected layers with 64 neurons and the 2-action output respectively.
• Node N3: a convolutional layer with a 3×3 filter size and 24 filters.
• Node N4: a 2×2 max-pooling layer with a stride of 2.
Hierarchical D2NN. Fig. 3d) illustrates the design of our hierarchical D2NN.
• Node N1: a convolutional layer with an 11×11 filter size, 64 filters, a stride of 4 and a 2×2 padding, followed by a 3×3 max-pooling layer with a stride of 2.
• Node N6: a convolutional layer with a 3×3 filter size and 24 filters.
• Node N7: an identity layer which directly uses its inputs as outputs.
• Node N9: a convolutional layer with a 3×3 filter size and 24 filters.
• Node N2 and N3: a convolutional layer with a 5×5 filter size, 96 filters and a 2×2 padding.
• Node N10: a 2×2 max-pooling layer with a stride of 2.
• Node N4–N8: a 3×3 max-pooling layer with a stride of 2, followed by three convolutional layers with 3×3 filter sizes and 160, 128 and 128 filters respectively. The output is fed into a 3×3 max-pooling layer with a stride of 2 and three fully connected layers with 2048 neurons, 2048 neurons and the 2 fine-class output respectively.
• Node N12: a convolutional layer with a 3×3 filter size and 24 filters.
• Node N2, N5, N8, N11: an identity layer.
• Node N13: a convolutional layer with a 4×4 filter size, 96 filters, a stride of 2 and no padding, followed by an 11×11 max-pooling layer. The output is reshaped and fed into a fully connected layer with the 10-class output.
• Node Q1: a convolutional layer with a 3×3 filter size and 8 filters, with two 2×2 max-pooling layers with strides of 2 before it and one 2×2 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.
• Node Q2: a convolutional layer with a 3×3 filter size and 8 filters, with a 2×2 max-pooling layer with a stride of 2 before it and a 2×2 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.
• Node Q3: a convolutional layer with a 3×3 filter size and 8 filters, with a 2×2 max-pooling layer with a stride of 2 before it and a 2×2 max-pooling layer with a stride of 2 after it. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.
• Node Q4: a convolutional layer with a 3×3 filter size and 8 filters, followed by a 2×2 max-pooling layer with a stride of 2. The output is reshaped and fed into two fully connected layers with 256 neurons and the 2-action output respectively.
For all five D2NNs, all convolutional layers use a 1×1 padding and each is followed by a ReLU layer unless specified individually. Each fully connected layer except the output layers is followed by a ReLU layer.
# References
[1] Torch. http://torch.ch/.
[2] A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. C. Courville. Dynamic capacity networks. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2549–2558, 2016.
[3] J. M. Alvarez and M. Salzmann. Learning the number of neurons in deep networks. In Advances in Neural Information Processing Systems, pages 2270–2278, 2016.
[4] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
[5] E. Bengio, P.-L. Bacon, J. Pineau, and D. Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.
[6] S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems, pages 163–171, 2010.
[7] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[8] Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, et al. DaDianNao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pages 609–622. IEEE, 2014.
[9] J. Deng, J. Krause, A. C. Berg, and L. Fei-Fei. Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3450–3457. IEEE, 2012.
[10] J. Deng, S. Satheesh, A. C. Berg, and F. Li. Fast and balanced: Efficient label tree learning for large scale object recognition. In Advances in Neural Information Processing Systems, pages 567–575, 2011.
[11] M. Denil, L. Bazzani, H. Larochelle, and N. de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151–2184, 2012.
[12] L. Denoyer and P. Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.
[13] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pages 1269–1277, 2014.
[14] D. Eigen, M. Ranzato, and I. Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.
[15] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Cascade object detection with deformable part models. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2241–2248. IEEE, 2010.
[16] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15). JMLR Workshop and Conference Proceedings, 2015.
[17] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1737–1746, 2015.
[18] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015.
[19] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
[20] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, 1991.
[21] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994.
[22] G. B. Huang and E. Learned-Miller. Labeled Faces in the Wild: Updates and new reporting procedures. Technical Report UM-CS-2014-003, University of Massachusetts, Amherst, May 2014.
[23] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua. A convolutional neural network cascade for face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5325–5334, 2015.
[24] B. Liu, F. Sadeghi, M. Tappen, O. Shamir, and C. Liu. Probabilistic label trees for efficient large scale image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 843–850, 2013.
[25] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pages 2204–2212, 2014.
[26] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[27] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
[28] M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber. Deep networks with internal selective attention through feedback connections. In Advances in Neural Information Processing Systems, pages 3545–3553, 2014.
[29] Y. Sun, X. Wang, and X. Tang. Deep convolutional network cascade for facial point detection. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3476–3483. IEEE, 2013.
[30] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, 1998.
[31] P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004.
[32] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.
# Language Modeling with Gated Convolutional Networks
# Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier
Facebook AI Research. Correspondence to: Yann N. Dauphin <[email protected]>.
# Abstract
The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms Oord et al. (2016b) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.
Recently, neural networks (Bengio et al., 2003; Mikolov et al., 2010; Jozefowicz et al., 2016) have been shown to outperform classical n-gram language models (Kneser & Ney, 1995; Chen & Goodman, 1996). These classical models suffer from data sparsity, which makes it difficult to represent large contexts and thus long-range dependencies. Neural language models tackle this issue by embedding words in a continuous space over which a neural network is applied. The current state of the art for language modeling is based on long short-term memory networks (LSTM; Hochreiter et al., 1997), which can theoretically model arbitrarily long dependencies.
In this paper, we introduce new gated convolutional networks and apply them to language modeling. Convolutional networks can be stacked to represent large context sizes and extract hierarchical features over larger and larger contexts with more abstractive features (LeCun & Bengio, 1995). This allows them to model long-term dependencies by applying O(N/k) operations over a context of size N and kernel width k. In contrast, recurrent networks view the input as a chain structure and therefore require a linear number O(N) of operations.
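To make the complexity argument concrete, the context visible at the top of a stack of one-dimensional convolutions grows linearly with depth, so roughly N/k layers suffice to cover N tokens:

```python
import math

def context_size(num_layers, kernel_width):
    """Tokens of left context visible at the top of a stack of 1-D convolutions."""
    return num_layers * (kernel_width - 1) + 1

def layers_needed(context, kernel_width):
    """Layers required to cover `context` tokens, i.e. the O(N/k) claim above."""
    return math.ceil((context - 1) / (kernel_width - 1))

# Example: with kernel width 5, about N/4 layers span a context of N tokens.
assert context_size(num_layers=16, kernel_width=5) == 65
assert layers_needed(context=65, kernel_width=5) == 16
```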
# 1. Introduction
Statistical language models estimate the probability distribution of a sequence of words by modeling the probability of the next word given the preceding words, i.e.
$$P(w_0, \ldots, w_N) = P(w_0) \prod_{i=1}^{N} P(w_i \mid w_0, \ldots, w_{i-1}),$$
where $w_i$ are discrete word indices in a vocabulary. Language models are a critical part of systems for speech recognition (Yu & Deng, 2014) and machine translation (Koehn, 2010).
Analyzing the input hierarchically bears resemblance to classical grammar formalisms which build syntactic tree structures of increasing granularity, e.g., sentences consist of noun phrases and verb phrases, each comprising further internal structure (Manning & Schütze, 1999; Steedman, 2002). Hierarchical structure also eases learning, since the number of non-linearities for a given context size is reduced compared to a chain structure, thereby mitigating the vanishing gradient problem (Glorot & Bengio, 2010).
Modern hardware is well suited to models that are highly parallelizable. In recurrent networks, the next output depends on the previous hidden state, which does not enable parallelization over the elements of a sequence. Convolutional networks, however, are very amenable to this computing paradigm since the computation of all input words can be performed simultaneously (§2).
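Under the factorization above, scoring a sentence reduces to summing next-word log-probabilities; a small sketch, assuming a `model(prefix)` interface that returns next-word probabilities (the interface is ours, not the paper's):

```python
import math

def sentence_log_prob(model, words):
    """log P(w_0..w_N) = log P(w_0) + sum_i log P(w_i | w_0..w_{i-1})."""
    total = math.log(model([])[words[0]])      # first word, empty context
    for i in range(1, len(words)):
        probs = model(words[:i])               # condition on the prefix
        total += math.log(probs[words[i]])
    return total
```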
Gating has been shown to be essential for recurrent neural networks to reach state-of-the-art performance (Jozefowicz et al., 2016). Our gated linear units reduce the vanishing gradient problem for deep architectures by providing a linear path for the gradients while retaining non-linear capabilities (§5.2).
We show that gated convolutional networks outperform other recently published language models such as LSTMs trained in a similar setting on the Google Billion Word Benchmark (Chelba et al., 2013). We also evaluate the ability of our models to deal with long-range dependencies on the WikiText-103 benchmark, for which the model is conditioned on an entire paragraph rather than a single sentence, and we achieve a new state of the art on this dataset (Merity et al., 2016). Finally, we show that gated linear units achieve higher accuracy and converge faster than the LSTM-style gating of Oord et al. (2016; §4, §5).
# 2. Approach
In this paper we introduce a new neural language model that replaces the recurrent connections typically used in recurrent networks with gated temporal convolutions. Neural language models (Bengio et al., 2003) produce a representation H = [h_0, . . . , h_N] of the context for each word w_0, . . . , w_N to predict the next word P(w_i | h_i). Recurrent neural networks f compute H through a recurrent function h_i = f(h_{i−1}, w_{i−1}), which is an inherently sequential process that cannot be parallelized over i.
1612.08083 | 5 | Our proposed approach convolves the inputs with a function f to obtain H = f ∗ w and therefore has no temporal dependencies, so it is easier to parallelize over the individual words of a sentence. This process will compute each context as a function of a number of preceding words. Compared to recurrent networks, the context size is finite but we will demonstrate both that infinite contexts are not necessary and our models can represent large enough contexts to perform well in practice (§5).
[Figure 1 (architecture overview): an input sentence ("The cat sat on the mat", words w0 . . . w6) is mapped through a lookup table E = Dw, causal convolutions A = E ∗ W + b and B = E ∗ V + c, the gating H0 = A ⊗ σ(B), a stack of L - 1 further convolution+gating blocks, and a softmax output Y = softmax(W HL).] | 1612.08083#5 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 6 | Figure 1 illustrates the model architecture. Words are represented by a vector embedding stored in a lookup table D^{|V|×e} where |V| is the number of words in the vocabulary and e is the embedding size. The input to our model is a sequence of words w0, . . . , wN which are represented by word embeddings E = [Dw0, . . . , DwN ]. We compute the hidden layers h0, . . . , hL as
Figure 1. Architecture of the gated convolutional network for language modeling.
hl(X) = (X ∗ W + b) ⊗ σ(X ∗ V + c) (1)
where m, n are respectively the number of input and output feature maps and k is the patch size, X ∈ R^{N×m} is the input of layer hl (either word embeddings or the outputs of previous layers), W ∈ R^{k×m×n}, b ∈ R^n, V ∈ R^{k×m×n}, c ∈ R^n are learned parameters, σ is the sigmoid function and ⊗ is the element-wise product between matrices.
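As a concrete illustration of Eq. (1), a minimal PyTorch sketch (not the authors' Torch implementation; class and variable names are ours) of one gated convolutional layer, including the causal left-padding described next:

```python
# A sketch of Eq. (1): two causal 1-D convolutions produce (X*W+b) and (X*V+c),
# and the second branch gates the first through a sigmoid.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.k = kernel_size
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size)  # X*W + b
        self.gate = nn.Conv1d(in_channels, out_channels, kernel_size)  # X*V + c

    def forward(self, x):               # x: (batch, channels, time)
        x = F.pad(x, (self.k - 1, 0))   # left-pad so position i never sees future tokens
        return self.conv(x) * torch.sigmoid(self.gate(x))

h = GatedConv1d(128, 256, kernel_size=4)(torch.randn(8, 128, 30))   # (8, 256, 30)
```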
When convolving inputs, we take care that hi does not contain information from future words. We address this by shifting the convolutional inputs to prevent the kernels | 1612.08083#6 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 7 | When convolving inputs, we take care that hi does not contain information from future words. We address this by shifting the convolutional inputs to prevent the kernels
1 Parallelization is usually done over multiple sequences instead.
from seeing future context (Oord et al., 2016a). Specifically, we zero-pad the beginning of the sequence with k - 1 elements, assuming the first input element is the beginning of sequence marker which we do not predict and k is the width of the kernel.
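A quick standalone check of this shifting scheme, using a plain PyTorch Conv1d as a stand-in for one layer: with k - 1 zeros padded on the left, perturbing future positions leaves earlier outputs unchanged.

```python
# Causality check: outputs at positions < 7 do not depend on inputs at positions >= 7.
import torch
import torch.nn as nn
import torch.nn.functional as F

k = 5
conv = nn.Conv1d(16, 16, k)
a = torch.randn(1, 16, 10)
b = a.clone()
b[:, :, 7:] = torch.randn(1, 16, 3)               # change only the "future" positions
out = lambda x: conv(F.pad(x, (k - 1, 0)))        # causal left padding with k-1 zeros
print(torch.allclose(out(a)[:, :, :7], out(b)[:, :, :7]))   # -> True
```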
The output of each layer is a linear projection X ∗ W + b modulated by the gates σ(X ∗ V + c). Similar to LSTMs, these gates multiply each element of the matrix X ∗ W + b and control the information passed on in the hierarchy. We dub this gating mechanism Gated Linear Units (GLU). Stacking multiple layers on top of the input E gives a representation of the context for each word H = hL ◦ . . . ◦ h0(E). We wrap the convolution and the gated linear unit in a pre-activation residual block that adds the input of the block to
the output (He et al., 2015a). The blocks have a bottleneck structure for computational efficiency and each block has up to 5 layers. | 1612.08083#7 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 8 | the output (He et al., 2015a). The blocks have a bottleneck structure for computational efficiency and each block has up to 5 layers.
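A sketch of such a residual bottleneck block built from gated causal convolutions (layer sizes and the exact structure are illustrative assumptions, not the paper's configuration):

```python
# Residual bottleneck block: 1x1 down-projection, k>1 gated convolution,
# 1x1 up-projection, with the block input added to its output.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gated_conv(x, conv, gate, k):
    x = F.pad(x, (k - 1, 0))                      # causal padding
    return conv(x) * torch.sigmoid(gate(x))

class BottleneckGLUBlock(nn.Module):
    def __init__(self, channels, hidden, kernel_size):
        super().__init__()
        self.k = kernel_size
        self.down_c, self.down_g = nn.Conv1d(channels, hidden, 1), nn.Conv1d(channels, hidden, 1)
        self.mid_c,  self.mid_g  = nn.Conv1d(hidden, hidden, kernel_size), nn.Conv1d(hidden, hidden, kernel_size)
        self.up_c,   self.up_g   = nn.Conv1d(hidden, channels, 1), nn.Conv1d(hidden, channels, 1)

    def forward(self, x):
        h = gated_conv(x, self.down_c, self.down_g, 1)       # reduce dimensionality
        h = gated_conv(h, self.mid_c,  self.mid_g,  self.k)  # wide convolution in the narrow space
        h = gated_conv(h, self.up_c,   self.up_g,   1)       # restore dimensionality
        return x + h                                         # residual connection

y = BottleneckGLUBlock(256, 128, kernel_size=5)(torch.randn(4, 256, 30))
```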
In contrast, the gradient of the gated linear unit
∇[X ⊗ σ(X)] = ∇X ⊗ σ(X) + X ⊗ σ′(X)∇X (3)
The simplest choice to obtain model predictions is to use a softmax layer, but this choice is often computationally inefficient for large vocabularies and approximations such as noise contrastive estimation (Gutmann & Hyvärinen) or hierarchical softmax (Morin & Bengio, 2005) are preferred. We choose an improvement of the latter known as adaptive softmax which assigns higher capacity to very frequent words and lower capacity to rare words (Grave et al., 2016a). This results in lower memory requirements as well as faster computation at both training and test time.
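A sketch of this output layer using PyTorch's built-in adaptive softmax; the vocabulary size and the 10k/40k/200k cutoffs follow figures quoted elsewhere in this paper, but the exact clustering used by the authors is an assumption here.

```python
# Adaptive softmax output layer: frequent words hit the full-capacity head,
# rare words go through smaller tail clusters.
import torch
import torch.nn as nn

hidden, vocab = 1024, 800_000
criterion = nn.AdaptiveLogSoftmaxWithLoss(hidden, vocab, cutoffs=[10_000, 40_000, 200_000])

h = torch.randn(32, hidden)                   # one context vector per target word
targets = torch.randint(0, vocab, (32,))
out = criterion(h, targets)                   # returns (output, loss)
print(out.loss)
```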
has a path ∇X ⊗ σ(X) without downscaling for the activated gating units in σ(X). This can be thought of as a multiplicative skip connection which helps gradients flow through the layers. We compare the different gating schemes experimentally in Section §5.2 and we find gated linear units allow for faster convergence to better perplexities. | 1612.08083#8 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 9 | # 4. Experimental Setup
# 4.1. Datasets
# 3. Gating Mechanisms
Gating mechanisms control the path through which information flows in the network and have proven to be useful for recurrent neural networks (Hochreiter & Schmidhuber, 1997). LSTMs enable long-term memory via a separate cell controlled by input and forget gates. This allows information to flow unimpeded through potentially many timesteps. Without these gates, information could easily vanish through the transformations of each timestep. In contrast, convolutional networks do not suffer from the same kind of vanishing gradient and we find experimentally that they do not require forget gates. | 1612.08083#9 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 10 | Therefore, we consider models possessing solely output gates, which allow the network to control what information should be propagated through the hierarchy of layers. We show this mechanism to be useful for language modeling as it allows the model to select which words or features are relevant for predicting the next word. Parallel to our work, Oord et al. (2016b) have shown the effectiveness of an LSTM-style mechanism of the form tanh(X ∗ W + b) ⊗ σ(X ∗ V + c) for the convolutional modeling of images. Later, Kalchbrenner et al. (2016) extended this mechanism with additional gates for use in translation and character-level language modeling. | 1612.08083#10 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 11 | We report results on two public large-scale language modeling datasets. First, the Google Billion Word dataset (Chelba et al., 2013) is considered one of the largest language modeling datasets with almost one billion tokens and a vocabulary of over 800K words. In this dataset, words appearing less than 3 times are replaced with a special unknown symbol. The data is based on an English corpus of 30,301,028 sentences whose order has been shuffled. Second, WikiText-103 is a smaller dataset of over 100M tokens with a vocabulary of about 200K words (Merity et al., 2016). Different from GBW, the sentences are consecutive which allows models to condition on larger contexts rather than single sentences. For both datasets, we add a beginning of sequence marker <S> at the start of each line and an end of sequence marker </S> at the end of each line. On the Google Billion Word corpus each sequence is a single sentence, while on WikiText-103 a sequence is an entire paragraph. The model sees <S> and </S> as input but only predicts the end of sequence marker </S>. We evaluate models by computing the perplexity e^{(1/N) Σ_i -log p(w_i | . . . , w_{i-1})} on the standard held out test portion of each dataset.
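A small sketch of the evaluation metric, assuming log_probs holds log p(w_i | . . . , w_{i-1}) for every predicted token of the held-out set:

```python
# Perplexity is the exponential of the average negative log-probability
# assigned to each held-out token (including the </S> marker).
import math

def perplexity(log_probs):
    return math.exp(-sum(log_probs) / len(log_probs))

print(perplexity([math.log(0.1)] * 20))   # a model assigning 0.1 to every token -> PPL 10
```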
# 4.2. Training | 1612.08083#11 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 12 | # 4.2. Training
Gated linear units are a simplified gating mechanism based on the work of Dauphin & Grangier (2015) for non-deterministic gates that reduce the vanishing gradient problem by having linear units coupled to the gates. This retains the non-linear capabilities of the layer while allowing the gradient to propagate through the linear unit without scaling. The gradient of the LSTM-style gating, which we dub the gated tanh unit (GTU), is
We implement our models in Torch (Collobert et al., 2011) and train on Tesla M40 GPUs. The majority of our models are trained on a single GPU, as we focused on identifying compact architectures with good generalization and efficient computation at test time. We trained larger models with an 8-GPU setup by copying the model onto each GPU and dividing the batch such that each worker computes 1/8th of the gradients. The gradients are then summed using Nvidia NCCL. The multi-GPU setup allowed us to train models with larger hidden units.
∇[tanh(X) ⊗ σ(X)] = tanh′(X)∇X ⊗ σ(X) + σ′(X)∇X ⊗ tanh(X). (2) | 1612.08083#12 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 13 | Notice that it gradually vanishes as we stack layers because of the downscaling factors tanh′(X) and σ′(X). In con-
We train using Nesterov's momentum (Sutskever et al., 2013). While the cost in terms of memory is storing another vector of the size of the parameters, it increases the speed of convergence significantly with minimal additional
Language Modeling with Gated Convolutional Networks | 1612.08083#13 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 14 | Language Modeling with Gated Convolutional Networks
[Table 1 (architectures): layer-by-layer configurations of GCNN-13, GCNN-14B, GCNN-9, GCNN-8B, GCNN-8 and GCNN-14 for Google Billion Word and WikiText-103: lookup table size, the residual convolution blocks Conv1-Conv7 given as [k, n] × repeats, and the adaptive softmax cutoffs (10k,40k,200k / 4k,40k,200k / 2k,10k,50k / 10k,20k,200k); see the caption below.] | 1612.08083#14 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 15 | Table 1. Architectures for the models. The residual building blocks are shown in brackets with the format [k, n]. "B" denotes bottleneck architectures.
computation compared to standard stochastic gradient descent. The speed of convergence was further increased with gradient clipping (Pascanu et al., 2013) and weight normalization (Salimans & Kingma, 2016).
Pascanu et al. (2013) argue for gradient clipping because it prevents the gradient explosion problem that characterizes RNNs. However, gradient clipping is not tied to RNNs, as it can be derived from the general concept of trust region methods. Gradient clipping is found using a spherical trust region
In general, finding a good architecture was simple and the rule of thumb is that the larger the model, the better the performance. In terms of optimization, we initialize the layers of the model with the Kaiming initialization (He et al., 2015b), with the learning rate sampled uniformly in the interval [1., 2.], the momentum set to 0.99, and clipping set to 0.1. Good hyper-parameters for the optimizer are quite straightforward to find and the optimal values do not change much between datasets.
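A sketch of this recipe with PyTorch equivalents (the two-layer model and the dummy objective are stand-ins, not the paper's networks):

```python
# Kaiming init + weight normalization, Nesterov momentum 0.99,
# learning rate sampled in [1, 2], gradient-norm clipping at 0.1.
import random
import torch
import torch.nn as nn

def make_layer(n_in, n_out):
    layer = nn.Linear(n_in, n_out)              # stand-in for a convolutional layer
    nn.init.kaiming_normal_(layer.weight)       # Kaiming initialization
    nn.init.zeros_(layer.bias)
    return nn.utils.weight_norm(layer)          # weight normalization

model = nn.Sequential(make_layer(128, 256), make_layer(256, 128))

lr = random.uniform(1.0, 2.0)                   # learning rate sampled in [1, 2]
opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.99, nesterov=True)

loss = model(torch.randn(8, 128)).pow(2).mean() # dummy objective for illustration
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
opt.step()
```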
# 5. Results | 1612.08083#15 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 16 | # 5. Results
Δθ* = argmin_{Δθ} f(θ) + ∇f⊤Δθ  s.t.  ‖Δθ‖ ≤ ε,   which gives   Δθ* = -ε ∇f / max(‖∇f‖, ε). (4)
Empirically, our experiments converge significantly faster with the use of gradient clipping even though we do not use a recurrent architecture.
In combination, these methods led to stable and fast convergence with comparatively large learning rates such as 1.
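A direct rendering of the spherical trust-region solution above, assuming grads is a flattened gradient vector; it applies essentially the same rescaling that torch.nn.utils.clip_grad_norm_ performs over a parameter list.

```python
# Trust-region view of gradient clipping: the update direction is the gradient
# rescaled so that its norm never exceeds epsilon.
import torch

def clipped_step(grads, eps):
    norm = grads.norm()
    return -eps * grads / torch.clamp(norm, min=eps)

print(clipped_step(torch.tensor([3.0, 4.0]), eps=0.1))   # norm 5 -> rescaled to norm 0.1
```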
# 4.3. Hyper-parameters
LSTMs and recurrent networks are able to capture long term dependencies and are fast becoming cornerstones in natural language processing. In this section, we compare strong LSTM and RNN models from the literature to our gated convolutional approach on two datasets.
We find the GCNN outperforms the comparable LSTM results on Google Billion Word. To accurately compare these approaches, we control for the same number of GPUs and the adaptive softmax output model (Grave et al., 2016a), as these variables have a significant influence on performance. In this setting, the GCNN reaches 38.1 test perplexity while the comparable LSTM has 39.8 perplexity (Table 2). | 1612.08083#16 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 17 | We found good hyper-parameter configurations by cross-validating with random search on a validation set. For model architecture, we select the number of residual blocks between {1, . . . , 10}, the size of the embeddings with {128, . . . , 256}, the number of units between {128, . . . , 2048}, and the kernel width between {3, . . . , 5}.
Further, the GCNN obtains strong performance with much greater computational efficiency. Figure 2 shows that our approach closes the previously significant gap between models that use the full softmax and models with the usually less accurate hierarchical softmax. Thanks to the adap- | 1612.08083#17 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 18 | Model                                                    Test PPL   Hardware
Sigmoid-RNN-2048 (Ji et al., 2015)                                         68.3   1 CPU
Interpolated KN 5-Gram (Chelba et al., 2013)                               67.6   100 CPUs
Sparse Non-Negative Matrix LM (Shazeer et al., 2014)                       52.9   -
RNN-1024 + MaxEnt 9 Gram Features (Chelba et al., 2013)                    51.3   24 GPUs
LSTM-2048-512 (Jozefowicz et al., 2016)                                    43.7   32 GPUs
2-layer LSTM-8192-1024 (Jozefowicz et al., 2016)                           30.6   32 GPUs
BIG GLSTM-G4 (Kuchaiev & Ginsburg, 2017)                                   23.3†  8 GPUs
LSTM-2048 (Grave et al., 2016a)                                            43.9   1 GPU
2-layer LSTM-2048 (Grave et al., 2016a)                                    39.8   1 GPU
GCNN-13                                                                    38.1   1 GPU
GCNN-14 Bottleneck                                                         31.9   8 GPUs
Table 2. Results on the Google Billion Word test set. The GCNN outperforms the LSTMs with the same output approximation.
[Figure 2: test perplexity as a function of MFlops for LSTM+Softmax and GCNN+AdaSoftmax; see the caption below.] | 1612.08083#18 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 19 | [Figure 2: test perplexity as a function of MFlops for LSTM+Softmax and GCNN+AdaSoftmax; see the caption below.]
Figure 2. In comparison to the state-of-the-art (Jozefowicz et al., 2016) which uses the full softmax, the adaptive softmax approximation greatly reduces the number of operations required to reach a given perplexity.
Model                             Test PPL   Hardware
LSTM-1024 (Grave et al., 2016b)        48.7   1 GPU
GCNN-8                                 44.9   1 GPU
GCNN-14                                37.2   4 GPUs
Table 3. Results for single models on the WikiText-103 dataset.
lion Word, the average sentence length is quite short - only 20 words. We evaluate on WikiText-103 to determine if the model can perform well on a dataset where much larger contexts are available. On WikiText-103, an input sequence is an entire Wikipedia article instead of an individual sentence - increasing the average length to 4000 words. However, the GCNN outperforms LSTMs on this problem as well (Table 3). The GCNN-8 model has 8 layers with 800 units each and the LSTM has 1024 units. These results show that GCNNs can model enough context to achieve strong results. | 1612.08083#19 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 20 | tive softmax, the GCNN only requires a fraction of the operations to reach the same perplexity values. The GCNN outperforms other single model state-of-the-art approaches except the much larger LSTM of Jozefowicz et al. (2016), a model which requires more GPUs and the much more computationally expensive full softmax. In comparison, the largest model we have trained reaches 31.9 test perplexity compared to the 30.6 of that approach, but only requires training for 2 weeks on 8 GPUs compared to 3 weeks of training on 32 GPUs for the LSTM. Note that these results can be improved by either using mixtures of experts (Shazeer et al., 2017) or ensembles of these models. | 1612.08083#20 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 21 | We evaluated on the Gigaword dataset following Chen et al. (2016) to compare with fully connected models. We found that the fully connected and convolutional network reach respectively 55.6 and 29.4 perplexity. We also ran preliminary experiments on the much smaller Penn Treebank dataset. When we score the sentences independently, the GCNN and LSTM have comparable test perplexity with 108.7 and 109.3 respectively. However, it is possible to achieve better results by conditioning on previous sentences. Unlike the LSTM, we found that the GCNN overfits on this quite small dataset and so we note the model is better suited to larger scale problems.
# 5.1. Computational Efficiency
Another relevant concern is if the GCNN's fixed context size can thoroughly model long sequences. On Google Bil-
† appeared after submission
Computational cost is an important consideration for language models. Depending on the application, there are a number of metrics to consider. We measure the throughput
[Figure 3: learning curves (test perplexity vs. training epochs on WikiText-103, left, and vs. training hours on Google Billion Word, right) for ReLU, GTU and GLU activations; see the caption below.] | 1612.08083#21 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 22 | Figure 3. Learning curves on WikiText-103 (left) and Google Billion Word (right) for models with different activation mechanisms. Models with gated linear units (GLU) converge faster and to a lower perplexity.
                      LSTM-2048   GCNN-9   GCNN-8 Bottleneck
Throughput (CPU)            169      121                 179
Throughput (GPU)         45,622   29,116              45,878
Responsiveness (GPU)      2,282   29,116              45,878
Table 4. Processing speed in tokens/s at test time for an LSTM with 2048 units and GCNNs achieving 43.9 perplexity on Google Billion Word. The GCNN with bottlenecks improves the responsiveness by 20 times while maintaining high throughput. | 1612.08083#22 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 23 | of a model as the number of tokens that can be processed per second. Throughput can be maximized by processing many sentences in parallel to amortize sequential operations. In contrast, responsiveness is the speed of processing the input sequentially, one token at a time. Throughput is important because it indicates the time required to process a corpus of text and responsiveness is an indicator of the time to finish processing a sentence. A model can have low responsiveness but high throughput by evaluating many sentences simultaneously through batching. In this case, such a model is slow in finishing processing individual sentences, but can process many sentences at a good rate.
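A sketch of how these two metrics can be measured (the convolutional scorer and the sizes below are stand-ins; timings are purely illustrative):

```python
# Throughput scores one large parallel batch; responsiveness scores a single
# sequence prefix by prefix, one token at a time.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv1d(64, 64, 4, padding=3), nn.Sigmoid())  # stand-in scorer

def throughput(batch):                        # e.g. 750 sequences x 20 tokens
    n_tokens = batch.size(0) * batch.size(-1)
    start = time.time()
    with torch.no_grad():
        model(batch)
    return n_tokens / (time.time() - start)

def responsiveness(sequence):                 # one sequence, fed token by token
    start = time.time()
    with torch.no_grad():
        for t in range(1, sequence.size(-1) + 1):
            model(sequence[..., :t])
    return sequence.size(-1) / (time.time() - start)

print(throughput(torch.randn(750, 64, 20)), responsiveness(torch.randn(1, 64, 200)))
```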
We evaluate the throughput and responsiveness for models that reach approximately 43.9 perplexity on the Google Billion Word benchmark. We consider the LSTM with 2048 units in Table 2, a GCNN-8 Bottleneck with 7 Resnet blocks that have a bottleneck structure as described by (He et al., 2015a) and a GCNN-8 without bottlenecks. A bottleneck block wedges a k > 1 convolution between two k = 1 layers. This design reduces computational cost by reducing and increasing dimensionality with the k = 1 layers so that the convolution operates in a lower dimensional space. Our results show that the use of bottleneck blocks is important to maintaining computational efficiency. | 1612.08083#23 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 24 | The throughput of the LSTM is measured by using a large batch of 750 sequences of length 20, resulting in 15,000 tokens per batch. The responsiveness is the average speed to process a sequence of 15,000 contiguous tokens. Table 4 shows that the throughput for the LSTM and the GCNN are similar. The LSTM performs very well on GPU because the large batch size of 750 enables high parallelization over different sentences. This is because the LSTM implementation has been thoroughly optimized and uses cuDNN, whereas the cuDNN implementation of convolutions has not been optimized for the 1-D convolutions we use in our model. We believe much better performance can be achieved by a more efficient 1-D cuDNN convolution. Unlike the LSTM, the GCNN can be parallelized both over sequences as well as across the tokens of each sequence, allowing the GCNN to have 20x higher responsiveness.
# 5.2. Gating Mechanisms | 1612.08083#24 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 25 | # 5.2. Gating Mechanisms
In this section, we compare the gated linear unit with other mechanisms as well as to models without gating. We consider the LSTM-style gating mechanism (GTU) tanh(X ∗ W + b) ⊗ σ(X ∗ V + c) of (Oord et al., 2016b) and networks that use regular ReLU or Tanh activations. Gating units add parameters, so for fair comparison, we carefully cross-validate models with a comparable number of parameters. Figure 3 (left) shows that GLU networks converge to a lower perplexity than the other approaches on WikiText-103. Similar to gated linear units, the ReLU has a linear path that lets the gradients easily pass through the active units. This translates to much faster convergence for both the ReLU and the GLU. On the other hand, neither Tanh nor GTU have this linear path, and thus suffer from the vanishing gradient problem. In the GTU, both the inputs as well as the gating units can cut the gradient when the units saturate.
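A toy comparison of these activations, written over a single pre-activation for simplicity (in the model the two branches come from separate convolutions); stacking each function and printing the gradient magnitude that reaches the input illustrates which ones preserve a linear path:

```python
# Compare how much gradient survives ten stacked activations of each kind.
import torch

def glu(x):  return x * torch.sigmoid(x)                  # gated linear unit
def gtu(x):  return torch.tanh(x) * torch.sigmoid(x)      # LSTM-style gated tanh unit
def relu(x): return torch.relu(x)
def tanh(x): return torch.tanh(x)

for act in (glu, gtu, relu, tanh):
    x = torch.randn(10_000, requires_grad=True)
    h = x
    for _ in range(10):                                   # ten stacked activations
        h = act(h)
    h.sum().backward()
    print(act.__name__, float(x.grad.abs().mean()))       # average input gradient magnitude
```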
Comparing the GTU and Tanh models allows us to measure
Language Modeling with Gated Convolutional Networks | 1612.08083#25 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 26 | Comparing the GTU and Tanh models allows us to measure
[Figure 4: test perplexity as a function of context size for Google Billion Word (left) and WikiText-103 (right); see the caption below.]
Figure 4. Test perplexity as a function of context for Google Billion Word (left) and Wiki-103 (right). We observe that models with bigger context achieve better results but the results start diminishing quickly after a context of 20.
the effect of gating since the Tanh model can be thought of as a GTU network with the sigmoid gating units removed. The results (Figure 3, left) show that the gating units make a vast difference and provide useful modeling capabilities, as there is a large difference in the performance between GTU and Tanh units. Similarly, while the ReLU unit is not an exact ablation of the gating units in the GLU, it can be seen as a simplification ReLU(X) = X ⊗ (X > 0) where the gates become active depending on the sign of the input. Also in this case, GLU units lead to lower perplexity. | 1612.08083#26 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 27 | In Figure 3 (right) we repeat the same experiment on the larger Google Billion Words dataset. We consider a fixed time budget of 100 hours because of the considerable training time required for this task. Similar to WikiText-103, the gated linear units achieve the best results on this problem. There is a gap of about 5 perplexity points between the GLU and ReLU which is similar to the difference between the LSTM and RNN models measured by (Jozefowicz et al., 2016) on the same dataset.
hl(X) = (X ∗ W + b) ⊗ (X ∗ V + c).
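For reference, the three layer families compared in this section reduce to the following element-wise combinations of the two convolution outputs (a sketch with random stand-ins for X ∗ W + b and X ∗ V + c):

```python
# Linear, bilinear and GLU layers as element-wise combinations of a and g.
import torch

def linear_layer(a, g):   return a                        # h(X) = X*W + b
def bilinear_layer(a, g): return a * g                    # h(X) = (X*W+b) (x) (X*V+c)
def glu_layer(a, g):      return a * torch.sigmoid(g)     # h(X) = (X*W+b) (x) sigma(X*V+c)

a, g = torch.randn(4, 8), torch.randn(4, 8)
for layer in (linear_layer, bilinear_layer, glu_layer):
    print(layer.__name__, layer(a, g).shape)
```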
[Figure 5: learning curves (test perplexity vs. training hours) on Google Billion Word for linear, bilinear and GLU models; see the caption below.]
Figure 5. Learning curves on Google Billion Word for models with varying degrees of non-linearity.
# 5.3. Non-linear Modeling | 1612.08083#27 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 28 | Figure 5. Learning curves on Google Billion Word for models with varying degrees of non-linearity.
# 5.3. Non-linear Modeling
The experiments so far have shown that the gated linear unit benefits from the linear path the unit provides compared to other non-linearities. Next, we compare networks with GLUs to purely linear networks and networks with bilinear layers in order to measure the impact of the non-linear path provided by the gates of the GLU. One motivation for this experiment is the success of linear models on many natural language processing tasks (Manning & Schütze, 1999). We consider deep linear convolutional networks where the layers lack the gating units of the GLU and take the form hl(X) = X ∗ W + b. Stacking several layers on top of each other is simply a factorization of the model which remains linear up to the softmax, at which point it becomes log-linear. Another variation of GLUs are bilinear layers (Mnih & Hinton, 2007) which take the form | 1612.08083#28 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 29 | Figure 5 shows that GLUs perform best, followed by bilinear layers and then linear layers. Bilinear layers improve over linear ones by more than 40 perplexity points, and the GLU improves another 20 perplexity points over the bilinear model. The linear model performs very poorly at perplexity 115 even compared to 67.6 of a Kneser-Ney 5-gram model, even though the former has access to more context. Surprisingly, the introduction of the bilinear units is enough to reach 61 perplexity on Google Billion Word, which surpasses both Kneser-Ney 5-gram models and the non-linear neural model of (Ji et al., 2015).
# 5.4. Context Size
Figure 4 shows the impact of context size for the gated CNN. We tried different combinations of network depth and kernel widths for each context size and chose the best performing one for each size. Generally, larger contexts
Language Modeling with Gated Convolutional Networks | 1612.08083#29 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 30 | improve accuracy but returns drastically diminish with windows larger than 40 words, even for WikiText-103 where we may condition on an entire Wikipedia article. This means that the unlimited context offered by recurrent models is not strictly necessary for language modeling. Furthermore, this finding is also congruent with the fact that good performance with recurrent networks can be obtained by truncating gradients after only 40 timesteps using truncated back propagation through time. Figure 4 also shows that WikiText-103 benefits much more from larger context size than Google Billion Word as the performance degrades more sharply with smaller contexts. WikiText-103 provides much more context than Google Billion Word where the average sentence size is 20. However, while the average size of the documents is close to 4000 tokens, we find that strong performance can be achieved with a context size as low as 30 tokens.
# 6. Conclusion | 1612.08083#30 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 31 | # 6. Conclusion
We introduce a convolutional neural network for language modeling with a novel gating mechanism. Compared to recurrent neural networks, our approach builds a hierarchical representation of the input words that makes it easier to capture long-range dependencies, similar in spirit to the tree-structured analysis of linguistic grammar formalisms. The same property eases learning since features are passed through a fixed number of layers and non-linearities, unlike for recurrent networks where the number of processing steps differs depending on the position of the word in the input. The results show that our gated convolutional network achieves a new state of the art on WikiText-103. On the Google Billion Word benchmark, we show competitive results can be achieved with significantly fewer resources.
# Acknowledgments
# 5.5. Training | 1612.08083#31 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 32 | # Acknowledgments
# 5.5. Training
In this section, we perform an ablation study of the impact of weight normalization and gradient clipping. We separately cross-validate the hyper-parameters of each configuration to make the comparison fair. Due to the high cost of each of these experiments, we only consider a single iteration over the training data. Figure 6 shows that both methods significantly speed up convergence. Weight normalization in particular improves the speed by over two times. This speedup is partly due to the ability to use much larger learning rates (1 instead of 0.01) than would otherwise be possible. Both clipping and weight normalization add computational overhead, but it is minor compared to the large gains in convergence speed.
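As a rough illustration of this setup, here is a minimal PyTorch sketch that combines weight normalization with gradient clipping; it is not the authors' code, and the model, learning rate, momentum, and clipping threshold are placeholder assumptions.

```python
# Illustrative sketch: weight normalization on the convolutional layers plus
# gradient clipping before each parameter update.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.utils.weight_norm(nn.Conv1d(128, 256, kernel_size=4)),
    nn.ReLU(),
    nn.utils.weight_norm(nn.Conv1d(256, 128, kernel_size=4)),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1.0, momentum=0.99)

def train_step(x, targets, loss_fn):
    optimizer.zero_grad()
    loss = loss_fn(model(x), targets)
    loss.backward()
    # Clipping keeps updates stable even with the much larger learning rate.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
    optimizer.step()
    return loss.item()
```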
We would like to thank Ben Graham, Jonas Gehring, Edouard Grave, Armand Joulin and Ronan Collobert for helpful discussions.
# References
Bengio, Yoshua, Ducharme, Réjean, Vincent, Pascal, and Jauvin, Christian. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003. | 1612.08083#32 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 33 | Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, and Robinson, Tony. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
Chen, Stanley F and Goodman, Joshua. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, pp. 310–318. Association for Computational Linguistics, 1996.
[Figure 6 plot residue: perplexity on Google Billion Word versus number of updates for models trained without clipping, without weight normalization, and with both; see the Figure 6 caption.]
Chen, Wenlin, Grangier, David, and Auli, Michael. Strategies for training large vocabulary neural language models. CoRR, abs/1512.04906, 2016.
Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clement. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop, 2011. URL http://torch.ch. | 1612.08083#33 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 34 | Dauphin, Yann N and Grangier, David. Predicting distributions with linearizing belief networks. arXiv preprint arXiv:1511.05622, 2015.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. The handbook of brain theory and neural networks, 2010.
Figure 6. Effect of weight normalization and gradient clipping on Google Billion Word.
Grave, E., Joulin, A., Cissé, M., Grangier, D., and Jégou, H. Efficient softmax approximation for GPUs. ArXiv e-prints, September 2016a.
Grave, E., Joulin, A., and Usunier, N. Improving Neural Language Models with a Continuous Cache. ArXiv e-prints, December 2016b.
Gutmann, Michael and Hyvärinen, Aapo. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models.
Oord, Aaron van den, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a. | 1612.08083#34 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 35 | He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015a.
Oord, Aaron van den, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328, 2016b.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015b.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pp. 1310–1318, 2013. | 1612.08083#35 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 36 | Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Ji, Shihao, Vishwanathan, SVN, Satish, Nadathur, Anderson, Michael J, and Dubey, Pradeep. Blackout: Speeding up recurrent neural network language models with very large vocabularies. arXiv preprint arXiv:1511.06909, 2015.
Shazeer, Noam, Pelemans, Joris, and Chelba, Ciprian. Skip-gram language modeling using sparse non-negative matrix probability estimation. arXiv preprint arXiv:1412.1454, 2014.
Jozefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016. | 1612.08083#36 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 37 | Shazeer, Noam, Mirhoseini, Azalia, Maziarz, Krzysztof, Davis, Andy, Le, Quoc V., Hinton, Geoffrey E., and Dean, Jeff. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. CoRR, abs/1701.06538, 2017. URL http://arxiv.org/abs/1701.06538.
Kalchbrenner, Nal, Espeholt, Lasse, Simonyan, Karen, van den Oord, Aaron, Graves, Alex, and Kavukcuoglu, Koray. Neural Machine Translation in Linear Time. arXiv, 2016.
Steedman, Mark. The syntactic process. 2002.
Kneser, Reinhard and Ney, Hermann. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995. ICASSP-95., 1995 International Conference on, volume 1, pp. 181–184. IEEE, 1995.
Koehn, Philipp. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition, 2010. ISBN 0521874157, 9780521874151. | 1612.08083#37 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 38 | Sutskever, Ilya, Martens, James, Dahl, George E, and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. 2013.
Wang, Mingxuan, Lu, Zhengdong, Li, Hang, Jiang, Wenbin, and Liu, Qun. gencnn: A convolutional architecture for word sequence prediction. CoRR, abs/1503.05034, 2015. URL http://arxiv.org/abs/1503.05034.
Kuchaiev, Oleksii and Ginsburg, Boris. Factorization tricks for LSTM networks. CoRR, abs/1703.10722, 2017. URL http: //arxiv.org/abs/1703.10722.
Yu, Dong and Deng, Li. Automatic Speech Recognition: A Deep Learning Approach. Springer Publishing Company, Incorporated, 2014. ISBN 1447157788, 9781447157786.
LeCun, Yann and Bengio, Yoshua. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.
Manning, Christopher D and Schütze, Hinrich. Foundations of statistical natural language processing, 1999. | 1612.08083#38 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.08083 | 39 | Manning, Christopher D and Schütze, Hinrich. Foundations of statistical natural language processing, 1999.
Merity, S., Xiong, C., Bradbury, J., and Socher, R. Pointer Sentinel Mixture Models. ArXiv e-prints, September 2016.
Mikolov, Tomáš, Karafiát, Martin, Burget, Lukáš, Černocký, Jan, and Khudanpur, Sanjeev. Recurrent Neural Network based Language Model. In Proc. of INTERSPEECH, pp. 1045–1048, 2010.
Mnih, Andriy and Hinton, Geoffrey. Three new graphical models for statistical language modelling. In Proceedings of the 24th international conference on Machine learning, pp. 641–648. ACM, 2007.
Morin, Frederic and Bengio, Yoshua. Hierarchical probabilistic neural network language model. In Aistats, volume 5, pp. 246–252. Citeseer, 2005. | 1612.08083#39 | Language Modeling with Gated Convolutional Networks | The pre-dominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms Oord et al (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though it features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks. | http://arxiv.org/pdf/1612.08083 | Yann N. Dauphin, Angela Fan, Michael Auli, David Grangier | cs.CL | null | null | cs.CL | 20161223 | 20170908 | [
{
"id": "1511.06909"
},
{
"id": "1602.02410"
},
{
"id": "1606.05328"
},
{
"id": "1602.07868"
},
{
"id": "1512.03385"
},
{
"id": "1511.05622"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 0 | Published as a conference paper at ICLR 2017
SAMPLERNN: AN UNCONDITIONAL END-TO-END NEURAL AUDIO GENERATION MODEL
Soroush Mehri (University of Montreal), Kundan Kumar (IIT Kanpur), Ishaan Gulrajani (University of Montreal), Shubham Jain (IIT Kanpur), Jose Sotelo (University of Montreal), Aaron Courville (University of Montreal, CIFAR Fellow), Yoshua Bengio (University of Montreal, CIFAR Senior Fellow)
# ABSTRACT
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure is able to capture underlying sources of variations in the temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
# 1 INTRODUCTION | 1612.07837#0 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 1 | # 1 INTRODUCTION
Audio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16kHz, on average we will have 6,000 samples per word generated. 1
Traditionally, the high-dimensionality of raw audio signal is dealt with by first compressing it into spectral or hand-engineered features and defining the generative model over these features. However, when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in complicated signal processing pipelines that are difficult to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems. | 1612.07837#1 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 2 | In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are well suited, as they have been designed as solutions for these tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practice it is a known problem of these models to not scale well at such a high temporal resolution as is found when generating acoustic signals one sample at a time, e.g., 16000 times per second. This is one of the reasons that Oord et al. (2016) profits from other neural modules such as one presented by Yu & Koltun (2015) to show extremely good performance.
In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented while keeping all the computations tractable.2 Since our model has different modules operating at different clock-rates (which is in contrast to WaveNet), we have the flexibility in allocating the amount of computational resources in modeling different levels of abstraction. In particular, we can potentially allocate very limited resource to the module responsible for sample level alignments
1 Statistics based on the average speaking rate of a set of TED talk speakers: http://sixminutes.dlugan.com/speaking-rate/ | 1612.07837#2 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 3 | 1 Statistics based on the average speaking rate of a set of TED talk speakers: http://sixminutes.dlugan.com/speaking-rate/
2 Code: https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples: https://soundcloud.com/samplernn/sets
operating at the clock-rate equivalent to sample-rate of the audio, while allocating more resources in modeling dependencies which vary very slowly in audio, for example identity of phoneme being spoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies at multiple levels of abstraction.
Hence, our contribution is threefold:
1. We present a novel method that utilizes RNNs at different scales to model longer term dependencies in audio waveforms while training on short sequences which results in memory efficiency during training.
2. We extensively explore and compare variants of models achieving the above effect.
3. We study and empirically evaluate the impact of different components of our model on three audio datasets. Human evaluation also has been conducted to test these generative models.
# 2 SAMPLERNN MODEL | 1612.07837#3 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 4 | # 2 SAMPLERNN MODEL
In this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = {x1, x2, . . . , xT } (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:
p(X) = \prod_{i=0}^{T-1} p(x_{i+1} \mid x_1, \ldots, x_i) \qquad (1)
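As a minimal illustration of this factorization (not taken from the paper), the sketch below scores a quantized sequence by summing per-sample conditional log-probabilities; the shape conventions are assumptions.

```python
# Illustrative sketch: log p(X) = sum_i log p(x_{i+1} | x_1, ..., x_i) for a
# sequence of quantized samples, given per-step log-probabilities over q values.
import math
import torch

def sequence_log_likelihood(step_log_probs, x):
    # step_log_probs: (T-1, q), row i holds log p(x_{i+1} | x_1, ..., x_i)
    # x: (T,) integer sample codes
    targets = x[1:]                                        # samples being predicted
    picked = step_log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return picked.sum()

# Example with a uniform predictor over q = 256 quantization levels:
T, q = 100, 256
lp = torch.full((T - 1, q), -math.log(q))
x = torch.randint(0, q, (T,))
print(sequence_log_likelihood(lp, x))                      # approx. (T - 1) * -log(256)
```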
RNNs are commonly used to model sequential data which can be formulated as:
h_t = \mathcal{H}(h_{t-1}, x_{i=t}) \qquad (2)
p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(h_t)) \qquad (3)
with \mathcal{H} being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014), Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep variations (Section 3). However, raw audio signals are challenging to model because they contain structure at very different scales: correlations exist between neighboring samples as well as between ones thousands of samples apart. | 1612.07837#4 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 5 | SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher module operates on an increasingly longer timescale and a lower temporal resolution. Each module conditions the module below it, with the lowest module outputting sample-level predictions. The entire hierarchy is trained jointly end-to-end by backpropagation.
2.1 FRAME-LEVEL MODULES
Rather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of FS^{(k)} ("Frame Size") samples at the k-th level up in the hierarchy at a time (frames denoted by f^{(k)}). Each frame-level module is a deep RNN which summarizes the history of its inputs into a conditioning vector for the next module downward. | 1612.07837#5 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 6 | The variable number of frames we condition upon up to timestep t − 1 is expressed by a fixed length hidden state or memory h_t^{(k)}, where t is related to the clock rate at that tier. The RNN makes a memory update at timestep t as a function of the previous memory h_{t-1}^{(k)} and an input inp_t^{(k)}. This input for top tier k = K is simply the input frame. For intermediate tiers (1 < k < K) this input is a linear combination of the conditioning vector from the higher tier and the current input frame. See Eqs. 4–5.
Because different modules operate at different temporal resolutions, we need to upsample each vector c at the output of a module into a series of r^{(k)} vectors (where r^{(k)} is the ratio between the temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq. 6). We do this with a set of r^{(k)} separate linear projections. | 1612.07837#6 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 7 | Figure 1: Snapshot of the unrolled model at timestep i with K = 3 tiers. As a simplification only one RNN and up-sampling ratio r = 4 is used for all tiers.
Here we are formalizing the frame-level module in tier k. Note that the following equations are exclusive to tier k and timestep t for that specific tier. To increase readability, unless necessary the superscript (k) is not shown for t, inp^{(k)}, W^{(k)}
inp_t^{(k)} = \begin{cases} W_x^{(k)} f_t^{(k)} + c_t^{(k+1)}, & 1 < k < K \\ f_t^{(k=K)}, & k = K \end{cases} \qquad (4)
h_t = \mathcal{H}(h_{t-1}, inp_t) \qquad (5) | 1612.07837#7 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 8 | h_t = \mathcal{H}(h_{t-1}, inp_t) \qquad (5)
c^{(k)}_{(t-1) \cdot r + j} = W_j h_t; \quad 1 \le j \le r \qquad (6)
Our approach of upsampling with r^{(k)} linear projections is exactly equivalent to upsampling by adding zeros and then applying a linear convolution. This is sometimes called "perforated" upsampling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.
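A minimal sketch of this upsampling step follows (an illustration, not the released code): each conditioning vector is mapped by r separate linear projections, implemented here as one fused linear layer.

```python
# Illustrative sketch: upsample each frame-level vector into r conditioning
# vectors with r separate linear projections (perforated upsampling).
import torch
import torch.nn as nn

class PerforatedUpsample(nn.Module):
    def __init__(self, dim, r):
        super().__init__()
        self.r = r
        self.proj = nn.Linear(dim, r * dim)   # W_1, ..., W_r fused into one matrix

    def forward(self, h):                     # h: (batch, n_frames, dim)
        b, n, d = h.shape
        c = self.proj(h)                      # (batch, n_frames, r * dim)
        return c.view(b, n * self.r, d)       # r conditioning vectors per frame

up = PerforatedUpsample(dim=1024, r=4)
c = up(torch.randn(2, 8, 1024))               # -> shape (2, 32, 1024)
```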
2.2 SAMPLE-LEVEL MODULE | 1612.07837#8 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 9 | 2.2 SAMPLE-LEVEL MODULE
The lowest module (tier k = 1; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution over a sample x_{i+1}, conditioned on the FS^{(1)} preceding samples as well as a vector c^{(k=2)} from the next higher module which encodes information about the sequence prior to that frame. As FS^{(1)} is usually a small value and correlations in nearby samples are easy to model by a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than an RNN, which slightly speeds up the training. Assuming e_i represents x_i after passing through the embedding layer (Section 2.2.1), the conditional distribution in Eq. 1 can be achieved by the following; for further clarity, two consecutive sample-level frames are shown. In addition, W_x in Eq. 8 is simply used to linearly combine a frame and the conditioning vector from above. | 1612.07837#9 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 10 | f^{(1)}_{i-1} = \mathrm{flatten}([e_{i-FS^{(1)}}, \ldots, e_{i-1}])
f^{(1)}_{i} = \mathrm{flatten}([e_{i-FS^{(1)}+1}, \ldots, e_{i}]) \qquad (7)
inp^{(1)}_i = W^{(1)}_x f^{(1)}_i + c^{(2)}_i \qquad (8)
p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(inp^{(1)}_i)) \qquad (9)
We use a Softmax because we found that better results were obtained by discretizing the audio signals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution rather than using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing
each window of FS^{(1)} samples and predicting the next sample. At generation time, the MLP is run repeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baseline model RNN and this model, suggesting that the proposed hierarchically structured architecture of SampleRNN makes a big difference.
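To make Eqs. 7–9 concrete, a minimal sketch of such a sample-level module is given below; it is illustrative only, and the hidden sizes, activations, and names are assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch of the sample-level module: embed the FS(1) previous
# quantized samples, combine them with the tier-2 conditioning vector (Eq. 8),
# and output a q-way categorical distribution over the next sample (Eq. 9).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampleLevelMLP(nn.Module):
    def __init__(self, q=256, emb_dim=256, frame_size=4, dim=1024):
        super().__init__()
        self.embed = nn.Embedding(q, emb_dim)               # e_i for each code x_i
        self.w_x = nn.Linear(frame_size * emb_dim, dim)     # W_x^(1) on the flattened frame
        self.mlp = nn.Sequential(nn.ReLU(), nn.Linear(dim, dim),
                                 nn.ReLU(), nn.Linear(dim, q))

    def forward(self, prev_samples, c2):
        # prev_samples: (batch, frame_size) int64 codes; c2: (batch, dim) from tier 2
        f = self.embed(prev_samples).flatten(1)             # flatten([e_{i-FS+1}, ..., e_i])
        inp = self.w_x(f) + c2                               # Eq. 8
        return F.log_softmax(self.mlp(inp), dim=-1)          # Eq. 9: log-probs over q bins
```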
2.2.1 OUTPUT QUANTIZATION | 1612.07837#10 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 11 | 2.2.1 OUTPUT QUANTIZATION
The sample-level module models its output as a q-way discrete distribution over possible quantized values of xi (that is, the output layer of the MLP is a q-way Softmax).
To demonstrate the importance of a discrete output distribution, we apply the same architecture on real-valued data by replacing the q-way Softmax with a Gaussian Mixture Model (GMM) output distribution. Table 2 shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable from random noise.
In this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8. Unintuitively, we realized that even linearly decreasing the bit depth (resolution of each audio sample) from 16 to 8 can ease the optimization procedure while generated samples still have reasonable quality and are artifact-free.
In addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see Table 4). The embedding step maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.
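A small sketch of the quantization and embedding steps described above (an illustration under an assumed [-1, 1] value range, not the authors' implementation):

```python
# Illustrative sketch: 8-bit linear quantization of a waveform into q = 256 bins,
# plus the embedding lookup applied before the sample-level MLP.
import torch
import torch.nn as nn

Q = 256

def linear_quantize(x, q=Q):
    # x: float waveform in [-1, 1] -> integer codes in {0, ..., q - 1}
    return ((x.clamp(-1.0, 1.0) + 1.0) / 2.0 * (q - 1)).round().long()

def linear_dequantize(codes, q=Q):
    return codes.float() / (q - 1) * 2.0 - 1.0

embed = nn.Embedding(Q, 256)             # maps each discrete value to a learned vector
wave = torch.rand(4, 16000) * 2 - 1      # a batch of 1-second, 16 kHz waveforms
codes = linear_quantize(wave)            # (4, 16000) integer codes
e = embed(codes)                         # (4, 16000, 256) embedded samples
```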
2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS | 1612.07837#11 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 12 | 2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS
To demonstrate the importance of a sample-level autoregressive module, we try replacing it with "Multi-Softmax" (see Table 4), where the prediction of each sample x_i depends only on the conditioning vector c from Eq. 9. In this configuration, the model outputs an entire frame of FS^{(1)} samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores significantly worse in terms of log-likelihood and fails to generate convincing samples. This suggests that modeling the joint distribution of the acoustic samples inside each frame is very important in order to obtain good acoustic generation. We found this to be true even when the frame size is reduced, with best results always with a frame size of 1, i.e., generating only one acoustic sample at a time.
# 2.3 TRUNCATED BPTT | 1612.07837#12 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 13 | # 2.3 TRUNCATED BPTT
Training recurrent neural networks on long sequences can be very computationally expensive. Oord et al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent connections. However, when they can be trained efficiently, recurrent networks have been shown to be very powerful and expressive sequence models. We enable efficient training of our recurrent model using truncated backpropagation through time, splitting each sequence into short subsequences and propagating gradients only to the beginning of each subsequence. We experiment with different subsequence lengths and demonstrate that we are able to train our networks, which model very long-term dependencies, despite backpropagating through relatively short subsequences.
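A minimal sketch of this truncated-BPTT loop follows (illustrative; the GRU size, subsequence length, and optimizer settings are assumptions):

```python
# Illustrative sketch: truncated BPTT. The hidden state is carried across
# subsequences but detached, so gradients stop at each subsequence boundary.
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=256, hidden_size=1024, batch_first=True)
readout = nn.Linear(1024, 256)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

def train_long_sequence(x, targets, subseq_len=512):
    # x: (batch, T, 256) inputs; targets: (batch, T) int64 sample codes
    h = None
    for start in range(0, x.size(1), subseq_len):
        xs = x[:, start:start + subseq_len]
        ys = targets[:, start:start + subseq_len]
        opt.zero_grad()
        out, h = rnn(xs, h)
        loss = nn.functional.cross_entropy(readout(out).transpose(1, 2), ys)
        loss.backward()
        opt.step()
        h = h.detach()    # keep the state, truncate the gradient
```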
Table 3 shows that by increasing the subsequence length, performance substantially increases alongside train-time memory usage and convergence time. Yet it is noteworthy that our best models have been trained on subsequences of length 512, which corresponds to 32 milliseconds, a small fraction of the length of a single phoneme of human speech, while generated samples exhibit longer word-like structures. | 1612.07837#13 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 14 | Despite the aforementioned fact, this generative model can mimic the existing long-term structure of the data, which results in more natural and coherent samples that are preferred by human listeners. (More on this in Sections 3.2–3.3.) This is due to the fast updates from TBPTT and specialized frame-level modules (Section 2.1) with top tiers designed to model a lower resolution of signal while leaving the process of filling in the details to lower tiers.
# 3 EXPERIMENTS AND RESULTS
In this section we introduce three datasets which have been chosen to evaluate the proposed architecture for modeling raw acoustic sequences. The description of each dataset and its preprocessing is as follows: | 1612.07837#14 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 15 | In this section we introduce three datasets which have been chosen to evaluate the proposed architecture for modeling raw acoustic sequences. The description of each dataset and its preprocessing is as follows:
Blizzard, which is a dataset presented by Prahallad et al. (2013) for the speech synthesis task, contains 315 hours of a single female voice actor in English; however, for our experiments we are using only 20.5 hours. The training/validation/test split is 86%-7%-7%. Onomatopoeia3, a relatively small dataset with 6,738 sequences adding up to 3.5 hours, is human vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. Diversity of sound type and the fact that these sounds were recorded from 51 actors and many categories makes it a challenging task. To add to that, this data is extremely unbalanced. The training/validation/test split is 92%-4%-4%. Music dataset is the collection of all 32 Beethoven's piano sonatas publicly available on https://archive.org/ amounting to 10 hours of non-vocal audio. The training/validation/test split is 88%-6%-6%. | 1612.07837#15 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 16 | See Fig. 2 for a visual demonstration of examples from the datasets and generated samples. For all the datasets we use a 16 kHz sample rate and 16-bit depth. For the Blizzard and Music datasets, preprocessing simply amounts to chunking the long audio files into 8-second-long sequences on which we will perform truncated backpropagation through time. Each sequence in the Onomatopoeia dataset is a few seconds long, ranging from 1 to 11 seconds. To train the models on this dataset, zero-padding has been applied to make all the sequences in a mini-batch have the same length, and the corresponding cost values (for the predictions over the added 0s) are ignored when computing the gradients.
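The preprocessing just described is mechanically simple; the following is a minimal NumPy sketch (our own illustration, not the paper's released code — the helper names and the float-valued waveform assumption are ours) of the 8-second chunking and of zero-padding with a mask so that padded positions drop out of the cost.

```python
import numpy as np

SR = 16000          # 16 kHz sample rate, as stated above
CHUNK = 8 * SR      # 8-second training sequences for Blizzard/Music

def chunk_long_audio(wav):
    """Split a long 1-D waveform into non-overlapping 8-second chunks."""
    n = (len(wav) // CHUNK) * CHUNK
    return wav[:n].reshape(-1, CHUNK)

def pad_batch(seqs):
    """Zero-pad variable-length sequences (as for Onomatopoeia) and return a
    mask that is 0 over the padded positions so their cost can be ignored."""
    max_len = max(len(s) for s in seqs)
    batch = np.zeros((len(seqs), max_len), dtype=np.float32)
    mask = np.zeros((len(seqs), max_len), dtype=np.float32)
    for i, s in enumerate(seqs):
        batch[i, :len(s)] = s
        mask[i, :len(s)] = 1.0
    return batch, mask

def masked_nll(nll_per_step, mask):
    """Average per-timestep negative log-likelihoods over non-padded positions only."""
    return (nll_per_step * mask).sum() / mask.sum()
```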
We particularly explored two gated variants of RNNs: GRUs and LSTMs. For the case of LSTMs, the forget gate bias is initialized with a large positive value of 3, as recommended by Zaremba (2015) and Gers (2001), which has been shown to be beneficial for learning long-term dependencies.
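One possible realization of the forget-gate bias of 3 in PyTorch is sketched below; the layer sizes are illustrative, PyTorch orders the LSTM gates as (input, forget, cell, output), and note that applying the constant to both bias vectors doubles the effective offset, so one of the two could be zeroed instead.

```python
import torch
import torch.nn as nn

hidden_size = 1024
lstm = nn.LSTM(input_size=256, hidden_size=hidden_size, num_layers=1)

with torch.no_grad():
    for name, param in lstm.named_parameters():
        if "bias" in name:
            # Each bias vector has length 4 * hidden_size; the forget-gate
            # portion is the second quarter. Set it to the large value 3.
            param[hidden_size:2 * hidden_size].fill_(3.0)
```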
As for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4 components), normalization is applied per audio sample with the global mean and standard deviation obtained from the train split. For most of our experiments where the model demands discrete input, binning was applied per audio sample. | 1612.07837#16 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
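The per-sample binning mentioned in the row above can be sketched as a simple linear quantizer; 256 levels is an assumption on our part (it matches the embedding size of 256 reported in the next chunk), and the waveform is assumed to be scaled to [-1, 1].

```python
import numpy as np

Q_LEVELS = 256  # assumed number of bins; matches the embedding size of 256

def quantize(wav, q_levels=Q_LEVELS):
    """Linearly quantize a float waveform in [-1, 1] into integer bins."""
    wav = np.clip(wav, -1.0, 1.0)
    return ((wav + 1.0) / 2.0 * (q_levels - 1)).round().astype(np.int64)

def dequantize(bins, q_levels=Q_LEVELS):
    """Map integer bins back to float samples in [-1, 1]."""
    return bins.astype(np.float32) / (q_levels - 1) * 2.0 - 1.0
```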
1612.07837 | 17 | All the models have been trained with teacher forcing and stochastic gradient descent (mini-batch size 128) to minimize the Negative Log-Likelihood (NLL) in bits per dimension (per audio sample). Gradients were hard-clipped to remain in the [-1, 1] range. Update rules from the Adam optimizer (Kingma & Ba, 2014) (β1 = 0.9, β2 = 0.999, and ε = 1e-8) with an initial learning rate of 0.001 were used to adjust the parameters. For training each model, random search over hyper-parameter values (Bergstra & Bengio, 2012) was conducted. The initial RNN state of all the RNN-based models was always learnable. Weight Normalization (Salimans & Kingma, 2016) has been used for all the linear layers in the model (except for the embedding layer) to accelerate the training procedure. The size of the embedding layer was 256, initialized from a standard normal distribution. Orthogonal weight matrices were used for hidden-to-hidden connections, and other weight matrices were initialized similarly to He et al. (2015). In the final model, we found GRUs to work best (slightly better than LSTMs). 1024 was the number of hidden units for all GRUs (1 layer per tier for 3-tier and | 1612.07837#17 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
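Two of the training details listed in the row above translate directly into code: the NLL is reported in bits (nats divided by ln 2), and gradients are hard-clipped element-wise to [-1, 1]. A framework-agnostic sketch (function names are ours):

```python
import numpy as np

LN2 = np.log(2.0)

def nll_bits_per_sample(log_probs_of_targets):
    """Convert mean negative log-likelihood from nats to bits per audio sample."""
    return -np.mean(log_probs_of_targets) / LN2

def hard_clip_gradients(grads, lo=-1.0, hi=1.0):
    """Element-wise hard clipping of gradients to [-1, 1], as described above."""
    return [np.clip(g, lo, hi) for g in grads]
```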
1612.07837 | 19 | 3.1 WAVENET RE-IMPLEMENTATION
We implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we would have liked to replicate their model exactly, but owing to missing details of architecture and hyper-parameters, as well as limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds,
3Courtesy of Ubisoft
[Figure 2 waveform panels: Blizzard, Onomatopoeia, Music — ground truth and model samples; the plotted waveforms are not recoverable from the extracted text. See the Figure 2 caption in the next chunk.] | 1612.07837#19 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 20 | Figure 2: Examples from the datasets compared to samples from our models. In the ï¬rst 3 rows, 2 seconds of audio are shown. In the bottom 3 rows, 100 milliseconds of audio are shown. Rows 1 and 4 are ground truth from which one can see how the datasets look different and have complex structure in low resolution which the frame-level component of the SampleRNN is designed to capture. Samples also to some extent mimic the same global structure. At the same time, zoomed-in samples of our model shows that it can perfectly resemble the high resolution structure present in the data as well.
Table 1: Test NLL in bits for three presented datasets.
Model                 Blizzard  Onomatopoeia  Music
RNN (Eq. 2)           1.434     2.034         1.410
WaveNet (re-impl.)    1.480     2.285         1.464
SampleRNN (2-tier)    1.392     2.026         1.076
SampleRNN (3-tier)    1.387     1.990         1.159
Table 2: Average NLL on Blizzard test set for real-valued models.
Model                     Average Test NLL
RNN-GMM                   -2.415
SampleRNN-GMM (2-tier)    -2.782
Published as a conference paper at ICLR 2017 | 1612.07837#20 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 21 | # Average Test NLL
RNN-GMM -2.415 SampleRNN-GMM (2-tier) -2.782
Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzard validation set.
Subsequence Length   32     64     128    256    512
NLL Validation       1.575  1.468  1.412  1.391  1.364
Table 4: Test (validation) set NLL (bits per audio sample) for Blizzard. Variants of SampleRNN are provided to compare the contribution of each component in performance.
Model                 NLL Test (Validation)
SampleRNN (2-tier)    1.392 (1.369)
Without Embedding     1.566 (1.539)
Multi-Softmax         1.685 (1.656) | 1612.07837#21 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 22 | while having a reasonable number of updates per unit time. Although our model is very similar to WaveNet, the design choices, e.g. the number of convolution filters in each dilated convolution layer, the length of the target sequence to train on simultaneously (one can train with a single target with all samples in the receptive field as input, or with a target sequence of length T and input of size receptive field + T - 1), batch size, etc., might make our implementation different from what the authors have done in the original WaveNet model. Hence, we note here that although we did our best at exactly reproducing their results, there would very likely be a different choice of hyper-parameters between our implementation and that of the authors. | 1612.07837#22 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 23 | For our WaveNet implementation, we have used 4 dilated convolution blocks, each having 10 dilated convolution layers with dilation 1, 2, 4, 8, up to 512. Hence, our network has a receptive field of 4092 acoustic samples, i.e. the parameters of the multinomial distribution of the sample at time step i are p(xi) = fθ(xi−1, xi−2, . . . , xi−4092), where θ denotes the model parameters. We train on a target sequence length of 1600 and use a batch size of 8. Each dilated convolution filter has size 2, and the number of output channels is 64 for each dilated convolutional layer (128 filters in total due to the gated non-linearity). We trained this model using the Adam optimizer with a fixed global learning rate of 0.001 for the Blizzard dataset and 0.0001 for the Onomatopoeia and Music datasets. We trained these models for about one week on a GeForce GTX TITAN X. We dropped the learning rate in the Blizzard experiment to 0.0001 after around 3 days of training.
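The receptive field of such a stack is easy to verify; the helper below is our own sketch, and depending on whether the current sample is counted it lands within one sample of the 4092 quoted above.

```python
def receptive_field(n_blocks=4, layers_per_block=10, kernel_size=2):
    """Receptive field (in samples) of stacked dilated-convolution blocks with
    dilations 1, 2, 4, ..., 2**(layers_per_block - 1) in each block."""
    dilation_sum = sum(2 ** i for i in range(layers_per_block))  # 1023 for 10 layers
    return 1 + n_blocks * dilation_sum * (kernel_size - 1)

samples = receptive_field()            # 4093 samples with this counting convention
milliseconds = samples / 16000 * 1000  # ~256 ms at 16 kHz, consistent with ~250 ms above
print(samples, round(milliseconds, 1))
```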
3.2 HUMAN EVALUATION | 1612.07837#23 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 24 | 3.2 HUMAN EVALUATION
Apart from reporting NLL, we conducted AB preference tests on random samples from the four models trained on the Blizzard dataset. For unconditional generation of speech, which at best sounds like mumbling, this type of test is the one that is best suited. The competing models were the RNN, SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The rest of the models were excluded because the quality of their samples was definitely lower, and also to keep the number of pairwise comparison tests manageable. We will release the samples that have been used in this test too.
All the samples were set to have the same volume. Every user is then shown a set of twenty pairs of samples, one random pair at a time. Each pair had samples from two different models. The human evaluator is asked to listen to the samples and has the option of choosing between the two models or choosing not to prefer either of them. Hence, we have a quantification of preference between every pair of models. We used the online tool made publicly available by Jillings et al. (2015).
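Turning the collected answers into the percentages reported in Fig. 3 and Fig. 4 amounts to simple counting; a sketch follows (the 'A'/'B'/'none' labels are our own convention, not the evaluation tool's output format).

```python
from collections import Counter

def preference_percentages(votes):
    """votes: list of 'A', 'B', or 'none' for one model pair; returns percentages."""
    counts = Counter(votes)
    total = len(votes)
    return {k: 100.0 * counts.get(k, 0) / total for k in ("A", "B", "none")}
```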
Results in Fig. 3 clearly show that SampleRNN (3-tier) wins by a huge margin in terms of preference by human raters, followed by SampleRNN (2-tier) and then the two other models, which matches the performance comparison in Table 1. | 1612.07837#24 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 25 | The same evaluation was conducted for the Music dataset, except for an additional filtering step on the samples. Specific to this dataset only, we observed that a batch of generated samples from the competing models (this time restricted to RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) were either music-like or random noise. For all these models we only considered random samples that were not random noise. Fig. 4 is dedicated to the result of the human evaluation on the Music dataset.
[Figure 3: six bar-chart panels of pairwise preference percentages among SampleRNN (3-tier), SampleRNN (2-tier), RNN, and WaveNet; numeric labels are not reliably recoverable from the extracted text. See the caption below.]
Figure 3: Pairwise comparison of 4 best models based on the votes from listeners conducted on samples generated from models trained on Blizzard dataset. | 1612.07837#25 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 26 | Figure 3: Pairwise comparison of 4 best models based on the votes from listeners conducted on samples generated from models trained on Blizzard dataset.
[Figure 4: three bar-chart panels of pairwise preference percentages among SampleRNN (3-tier), SampleRNN (2-tier), and RNN; numeric labels are not reliably recoverable from the extracted text. See the caption below.]
Figure 4: Pairwise comparison of 3 best models based on the votes from listeners conducted on samples generated from models trained on Music dataset.
3.3 QUANTIFYING INFORMATION RETENTION
For the last experiment we are interested in measuring the memory span of the model. We trained our model, SampleRNN (3-tier), with best hyper-parameters on a dataset of 2 speakers reading audio books, one male and one female, respectively, with mean fundamental frequency of 125.3 and 201.8Hz. Each speaker has roughly 10 hours of audio in the dataset that has been preprocessed similar to Blizzard. We observed that it learned to stay consistent generating samples from the same speaker without having any knowledge about the speaker ID or any other conditioning information. This effect is more apparent here in comparison to the unbalanced Onomatopoeia that sometimes mixes two different categories of sounds. | 1612.07837#26 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 27 | Another experiment was conducted to test the effect of memory and study the effective memory horizon. We inject 1 second of silence in the middle of the sampling procedure in order to see whether the model will remember to generate from the same speaker or not. Initially, when sampling, we let the model generate 2 seconds of audio as it normally does. From 2 to 3 seconds, instead of feeding back the generated sample at each timestep, a silent token (zero amplitude) is fed. From 3 to 5 seconds we again sample normally, feeding back the generated token.
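The sampling schedule described here can be sketched as follows; `sample_step` stands in for one step of the model's sampling loop and is not an API from the paper's code.

```python
import numpy as np

SR = 16000
SILENT_TOKEN = 0  # zero-amplitude value fed back during the silent second

def sample_with_silence_gap(sample_step, total_sec=5, gap=(2, 3)):
    """Generate audio one sample at a time, but between gap[0] and gap[1] seconds
    feed back a silent token instead of the model's own output."""
    out = []
    prev = SILENT_TOKEN
    for t in range(total_sec * SR):
        x = sample_step(prev)          # one step of the (hypothetical) sampling loop
        out.append(x)
        in_gap = gap[0] * SR <= t < gap[1] * SR
        prev = SILENT_TOKEN if in_gap else x
    return np.array(out)
```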
We did classification based on the mean fundamental frequency of the speakers for the first and last 2 seconds. In 83% of the samples, SampleRNN generated from the same person in the two separate segments.
This is in contrast to a model with a fixed past window like WaveNet, where injecting 16000 silent tokens (3.3 times the receptive field size) is equivalent to generating from scratch, which has a 50% chance of matching the speaker (assuming each 2-second segment is coherent and not a mixed sound of two speakers).
# 4 RELATED WORK | 1612.07837#27 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 28 | # 4 RELATED WORK
Our work is related to earlier work on auto-regressive multi-layer neural networks, starting with Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently PixelRNN (van den Oord et al., 2016). Similar to how they tractably model the joint distribution over units of the data (e.g. words in sentences, pixels in images, etc.) through an auto-regressive decomposition, we transform the joint distribution of acoustic samples using Eq. 1.
The idea of having part of the model running at different clock rates is related to multi-scale RNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015; Serban et al., 2016).
Chung et al. (2015) also attempt to model raw audio waveforms which is in contrast to traditional approaches which use spectral features as in Tokuda et al. (2013), Bertrand et al. (2008), and Lee et al. (2009). | 1612.07837#28 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 29 | Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the above comparisons, and this makes it interesting to compare the effect of adding higher-level RNN stages working at a low resolution. Similar to that work, our models generate one acoustic sample at a time, conditioned on all previously generated samples. We also share the preprocessing step of quantizing the acoustics into bins. Unlike that model, we have different modules in our models running at different clock-rates. In contrast to WaveNets, we mitigate the problem of long-term dependency with a hierarchical structure and by using stateful RNNs, i.e. we always propagate hidden states to the next training sequence, although the gradient of the loss does not take into account the samples in the previous training sequence.
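The stateful-RNN detail mentioned here — carrying the hidden state across truncated subsequences while cutting the gradient path — is typically implemented by detaching the state between updates. A hedged PyTorch-style sketch, where `model` is a hypothetical callable returning (loss, new_hidden):

```python
import torch

def tbptt_updates(model, optimizer, subsequences, h0):
    """Carry the RNN hidden state across truncated subsequences, but detach it so
    gradients never flow back into the previous subsequence."""
    hidden = h0
    for x, y in subsequences:
        loss, hidden = model(x, y, hidden)   # assumed interface, not from the paper
        optimizer.zero_grad()
        loss.backward()
        for p in model.parameters():
            if p.grad is not None:
                p.grad.clamp_(-1.0, 1.0)     # hard clipping, as in Section 3
        optimizer.step()
        # keep the state value, drop its history
        hidden = tuple(h.detach() for h in hidden) if isinstance(hidden, tuple) \
                 else hidden.detach()
    return hidden
```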
# 5 DISCUSSION AND CONCLUSION
We propose a novel model that can address unconditional audio generation in the raw acoustic domain, which typically has been done until recently with hand-crafted features. We are able to show that a hierarchy of time scales and frequent updates will help to overcome the problem of modeling extremely high-resolution temporal data. That allows us, for this particular application, to learn the data manifold directly from audio samples. We show that this model can generalize well and generate samples on three datasets that are different in nature. We also show that the samples generated by this model are preferred by human raters. | 1612.07837#29 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 30 | Success in this application, with a general-purpose solution as proposed here, opens up room for more improvement when speciï¬c domain knowledge is applied. This method, however, proposed with audio generation application in mind, can easily be adapted to other tasks that require learning the representation of sequential data with high temporal resolution and long-range complex struc- ture.
# ACKNOWLEDGMENTS
The authors would like to thank João Felipe Santos and Kyle Kastner for insightful comments and discussion. We would like to thank the Theano Development Team (2016)4 and MILA staff. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Jose Sotelo also thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT) as well as the Secretaría de Educación Pública (SEP) for their support. This work was a collaboration with Ubisoft.
# 4http://deeplearning.net/software/theano/
# REFERENCES
Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pp. 400â406, 1999. | 1612.07837#30 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 31 | Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pp. 400â406, 1999.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281â305, 2012.
Alexander Bertrand, Kris Demuynck, Veronique Stouten, et al. Unsupervised learning of auditory ï¬lter banks using non-negative matrix factorisation. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4713â4716. IEEE, 2008.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Ben- gio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pp. 2980â2988, 2015.
Alexey Dosovitskiy, Jost Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to gener- ate chairs, tables and cars with convolutional networks. 2016. | 1612.07837#31 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 32 | Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependen- cies. In NIPS, volume 400, pp. 409. Citeseer, 1995.
Felix Gers. Long short-term memory in recurrent neural networks. PhD thesis, Universität Hannover, 2001.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735–1780, 1997.
Nicholas Jillings, David Moffat, Brecht De Man, and Joshua D. Reiss. Web Audio Evaluation Tool: A browser-based listening test environment. In 12th Sound and Music Computing Conference, July 2015.
Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy blog, 2015. | 1612.07837#32 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 33 | Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy blog, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.
Honglak Lee, Peter Pham, Yan Largman, and Andrew Y Ng. Unsupervised feature learning for audio classiï¬cation using convolutional deep belief networks. In Advances in neural information processing systems, pp. 1096â1104, 2009.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016. | 1612.07837#33 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 34 | Kishore Prahallad, Anandaswarup Vadapalli, Naresh Elluru, G Mantena, B Pulugundla, P Bhaskararao, HA Murthy, S King, V Karaiskos, and AW Black. The blizzard challenge 2013â indian language task. In Blizzard Challenge Workshop 2013, 2013.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to ac- celerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
Hava T Siegelmann. Computation beyond the turing limit. In Neural Networks and Analog Compu- tation, pp. 153â164. Springer, 1999. | 1612.07837#34 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 35 | Hava T Siegelmann. Computation beyond the turing limit. In Neural Networks and Analog Compu- tation, pp. 153â164. Springer, 1999.
Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query sug- gestion. In Proceedings of the 24th ACM International on Conference on Information and Knowl- edge Management, pp. 553â562. ACM, 2015.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/ 1605.02688.
Keiichi Tokuda, Yoshihiko Nankaku, Tomoki Toda, Heiga Zen, Junichi Yamagishi, and Keiichiro Oura. Speech synthesis based on hidden markov models. Proceedings of the IEEE, 101(5): 1234â1252, 2013.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. | 1612.07837#35 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 36 | Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.
# APPENDIX A
A MODEL VARIANT: SAMPLERNN-WAVENET HYBRID
The SampleRNN-WaveNet model has two modules operating at two different clock-rates. The slower clock-rate module (frame-level module) sees one frame (each of which has size FS) at a time, while the faster clock-rate component (sample-level component) sees one acoustic sample at a time, i.e. the ratio of clock-rates for these two modules is the size of a single frame. The number of sequential steps for the frame-level component is FS times lower. We repeat the output of each step of the frame-level component FS times so that the numbers of time-steps for the outputs of both components match. The outputs of both modules are concatenated at every time-step and then passed through non-linearities, applied to every time-step independently, before generating the final output. | 1612.07837#36 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.07837 | 37 | In our experiments, we kept the size of a single frame (FS) at 128. We tried two variants of this model: 1. fully convolutional WaveNet and 2. RNN-WaveNet. In the fully convolutional WaveNet, both modules described above are implemented using dilated convolutions as described in the original WaveNet model. In RNN-WaveNet, we use a high-capacity RNN in the frame-level module to model the dependency between frames. The sample-level WaveNet in RNN-WaveNet has a receptive field of 509 samples from the past.
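Matching the time resolution of the two streams by repeating each frame-level output FS times, as described in the previous chunk, is essentially a one-liner; a NumPy sketch of that upsampling step (array shapes are our assumption):

```python
import numpy as np

FS = 128  # frame size used in these experiments

def upsample_frame_outputs(frame_outputs):
    """Repeat each frame-level output FS times along the time axis so it can be
    concatenated with the sample-level stream at every timestep.
    frame_outputs: array of shape (n_frames, dim)."""
    return np.repeat(frame_outputs, FS, axis=0)   # shape (n_frames * FS, dim)
```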
Although these models were designed with the intention of combining the two architectures to harness their best features, preliminary experiments show that this variant does not meet our expectations at the moment, which points us to possible future work.
11 | 1612.07837#37 | SampleRNN: An Unconditional End-to-End Neural Audio Generation Model | In this paper we propose a novel model for unconditional audio generation
based on generating one audio sample at a time. We show that our model, which
profits from combining memory-less modules, namely autoregressive multilayer
perceptrons, and stateful recurrent neural networks in a hierarchical structure
is able to capture underlying sources of variations in the temporal sequences
over very long time spans, on three datasets of different nature. Human
evaluation on the generated samples indicate that our model is preferred over
competing models. We also show how each component of the model contributes to
the exhibited performance. | http://arxiv.org/pdf/1612.07837 | Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio | cs.SD, cs.AI | Published as a conference paper at ICLR 2017 | null | cs.SD | 20161222 | 20170211 | [
{
"id": "1602.07868"
},
{
"id": "1609.03499"
},
{
"id": "1511.07122"
},
{
"id": "1601.06759"
}
] |
1612.06370 | 1 | 1Facebook AI Research (FAIR) 2University of California, Berkeley
# Abstract
This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as 'pseudo ground truth' to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed 'pretext' tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce. | 1612.06370#1 | Learning Features by Watching Objects Move | This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce. | http://arxiv.org/pdf/1612.06370 | Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan | cs.CV, cs.AI, cs.LG, cs.NE, stat.ML | CVPR 2017 | null | cs.CV | 20161219 | 20170412 | [] |
1612.06370 | 2 | Figure 1. Low-level appearance cues lead to incorrect grouping (top right). Motion helps us to correctly group pixels that move together (bottom left) and identify this group as a single object (bottom right). We use unsupervised motion-based grouping to train a ConvNet to segment objects in static images and show that the network learns strong features that transfer well to other tasks.
# 1. Introduction | 1612.06370#2 | Learning Features by Watching Objects Move | This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce. | http://arxiv.org/pdf/1612.06370 | Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan | cs.CV, cs.AI, cs.LG, cs.NE, stat.ML | CVPR 2017 | null | cs.CV | 20161219 | 20170412 | [] |
1612.06370 | 3 | ConvNet-based image representations are extremely ver- satile, showing good performance in a variety of recogni- tion tasks [9, 15, 19, 50]. Typically these representations are trained using supervised learning on large-scale image classiï¬cation datasets, such as ImageNet [41]. In contrast, animal visual systems do not require careful manual anno- tation to learn, and instead take advantage of the nearly in- ï¬nite amount of unlabeled data in their surrounding envi- ronments. Developing models that can learn under these challenging conditions is a fundamental scientiï¬c problem, which has led to a ï¬urry of recent work proposing methods that learn visual representations without manual annotation. A recurring theme in these works is the idea of a âpre- text taskâ: a task that is not of direct interest, but can be used to obtain a good visual representation as a byprod- uct of training. Example pretext tasks include reconstructing the input [4, 20, 44], predicting the pixels of the next frame in a video stream [17], metric learning on object track endpoints [46], temporally ordering shufï¬ed frames from a video [29], and spatially | 1612.06370#3 | Learning Features by Watching Objects Move | This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce. | http://arxiv.org/pdf/1612.06370 | Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan | cs.CV, cs.AI, cs.LG, cs.NE, stat.ML | CVPR 2017 | null | cs.CV | 20161219 | 20170412 | [] |
1612.06370 | 4 | frame in a video stream [17], metric learning on object track endpoints [46], temporally ordering shufï¬ed frames from a video [29], and spatially ordering patches from a static im- age [8, 30]. The challenge in this line of research lies in cleverly designing a pretext task that causes the ConvNet (or other representation learner) to learn high-level features. In this paper, we take a different approach that is moti- vated by human vision studies. Both infants [42] and newly sighted congenitally blind people [32] tend to oversegment static objects, but can group things properly when they move (Figure 1). To do so, they may rely on the Gestalt principle of common fate [34, 47]: pixels that move together tend to belong together. The ability to parse static scenes im- proves [32] over time, suggesting that while motion-based grouping appears early, static grouping is acquired later, possibly bootstrapped by motion cues. Moreover, experi- ments in [32] show that shortly after gaining sight, human subjects are better able to name objects that tend to be seen | 1612.06370#4 | Learning Features by Watching Objects Move | This paper presents a novel yet intuitive approach to unsupervised feature
learning. Inspired by the human visual system, we explore whether low-level
motion-based grouping cues can be used to learn an effective visual
representation. Specifically, we use unsupervised motion-based segmentation on
videos to obtain segments, which we use as 'pseudo ground truth' to train a
convolutional network to segment objects from a single frame. Given the
extensive evidence that motion plays a key role in the development of the human
visual system, we hope that this straightforward approach to unsupervised
learning will be more effective than cleverly designed 'pretext' tasks studied
in the literature. Indeed, our extensive experiments show that this is the
case. When used for transfer learning on object detection, our representation
significantly outperforms previous unsupervised approaches across multiple
settings, especially when training data for the target task is scarce. | http://arxiv.org/pdf/1612.06370 | Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, Bharath Hariharan | cs.CV, cs.AI, cs.LG, cs.NE, stat.ML | CVPR 2017 | null | cs.CV | 20161219 | 20170412 | [] |