doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1609.04836 | 39 | B.4 NETWORKS C2 AND C4
The C2 network is a modified version of the popular VGG configuration (Simonyan & Zisserman, 2014). The C2 network uses the configuration: 2×[64, 3, 3, 1], 2×[128, 3, 3, 1], 3×[256, 3, 3, 1], 3×[512, 3, 3, 1], 3×[512, 3, 3, 1] with a MaxPool(2) after each stack. This stack is followed by a 512-dimensional dense layer and, finally, a 10-dimensional output layer. The activation and properties of each layer are as in B.3. As is the case with C3 and C1, the configuration C4 is identical to C2 except that it uses 100 softmax outputs instead of 10.
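As a concrete illustration, a minimal PyTorch sketch of this C2-style configuration follows; the batch-normalized ReLU activations, the 32×32×3 input size, and all function names are assumptions for illustration (the paper's own code is not reproduced here), with num_classes=100 giving the C4 variant.

```python
# A minimal sketch of the C2-style configuration described above; the activation
# (batch-normalized ReLU) and the 32x32x3 input size are assumptions, not specified here.
import torch.nn as nn

def conv_stack(in_ch, out_ch, n_layers):
    layers = []
    for _ in range(n_layers):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                   nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(2))  # MaxPool(2) after each stack
    return layers

def build_c2(num_classes=10):       # num_classes=100 gives the C4 variant
    layers, ch = [], 3
    for out_ch, n in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
        layers += conv_stack(ch, out_ch, n)
        ch = out_ch
    return nn.Sequential(*layers, nn.Flatten(),
                         nn.Linear(512, 512), nn.ReLU(inplace=True),  # 512-dimensional dense layer
                         nn.Linear(512, num_classes))                 # softmax is applied in the loss
```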
# C PERFORMANCE MODEL
As mentioned in Section 1, a training algorithm that operates in the large-batch regime without suffering from a generalization gap would have the ability to scale to a much larger number of nodes than is currently possible. Such an algorithm might also improve training time through faster convergence. We present an idealized performance model that demonstrates our goal. | 1609.04836#39 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap. | http://arxiv.org/pdf/1609.04836 | Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang | cs.LG, math.OC | Accepted as a conference paper at ICLR 2017 | null | cs.LG | 20160915 | 20170209 | [
{
"id": "1502.03167"
},
{
"id": "1606.04838"
},
{
"id": "1604.04326"
},
{
"id": "1602.06709"
},
{
"id": "1605.08361"
},
{
"id": "1611.01838"
},
{
"id": "1601.04114"
},
{
"id": "1511.05432"
},
{
"id": "1509.01240"
}
] |
1609.04836 | 40 | For the LB method to be competitive with the SB method, the LB method must (i) converge to minimizers that generalize well, and (ii) do so in a reasonable number of iterations, which we analyze here. Let $I_S$ and $I_L$ be the number of iterations required by the SB and LB methods, respectively, to reach a point of comparable test accuracy. Let $B_S$ and $B_L$ be the corresponding batch sizes and $P$ be the number of processors being used for training. Assume that $P < B_S$, and let $f_S(P)$ be the parallel efficiency of the SB method. For simplicity, we assume that $f_L(P)$, the parallel efficiency of the LB method, is 1.0. In other words, we assume that the LB method is perfectly scalable due to the use of a large batch size.
For LB to be faster than SB, we must have
$$I_L \cdot \frac{B_L}{P} \;<\; I_S \cdot \frac{B_S}{P\, f_S(P)}.$$
In other words, the ratio of iterations of LB to the iterations of SB should be
$$\frac{I_L}{I_S} \;<\; \frac{1}{f_S(P)} \cdot \frac{B_S}{B_L}.$$
For example, if $f_S(P) = 0.2$ and $B_S/B_L = 0.1$, the LB method must converge in at most half as many iterations as the SB method to see performance benefits. We refer the reader to (Das et al., 2016) for a more detailed model and a commentary on the effect of batch size on the performance.
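As a quick illustration of this break-even condition (my own sketch, not code from the paper), the following computes the largest admissible iteration ratio:

```python
# Break-even iteration ratio for the idealized performance model above:
# LB is faster than SB whenever I_L / I_S < (1 / f_S(P)) * (B_S / B_L).
def max_iteration_ratio(batch_ratio_sb_to_lb, parallel_efficiency_sb):
    return batch_ratio_sb_to_lb / parallel_efficiency_sb

# The example from the text: f_S(P) = 0.2 and B_S/B_L = 0.1 give 0.5, i.e. the LB
# method must converge in at most half as many iterations as the SB method.
print(max_iteration_ratio(0.1, 0.2))  # 0.5
```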
# D CURVILINEAR PARAMETRIC PLOTS | 1609.04836#40 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 41 | # D CURVILINEAR PARAMETRIC PLOTS
The parametric plots for the curvilinear path from $x^*_s$ to $x^*_l$, i.e., $f\left(\sin\left(\tfrac{\alpha \pi}{2}\right) x^*_l + \cos\left(\tfrac{\alpha \pi}{2}\right) x^*_s\right)$, can be found in Figure 7.
# E ATTEMPTS TO IMPROVE LB METHODS
In this section, we discuss a few strategies that aim to remedy the problem of poor generalization for large-batch methods. As in Section 2, we use 10% as the percentage batch-size for large-batch experiments and 256 for small-batch methods. For all experiments, we use ADAM as the optimizer irrespective of batch-size.
E.1 DATA AUGMENTATION | 1609.04836#41 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 42 | E.1 DATA AUGMENTATION
Given that large-batch methods appear to be attracted to sharp minimizers, one can ask whether it is possible to modify the geometry of the loss function so that it is more benign to large-batch methods. The loss function depends both on the geometry of the objective function and on the size and properties of the training set. One approach we consider is data augmentation; see e.g. (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). The application of this technique is domain specific but generally involves augmenting the data set through controlled modifications of the training data. For instance, in the case of image recognition, the training set can be augmented through translations, rotations, shearing and flipping of the training data. This technique leads to regularization of the network and has been employed for improving testing accuracy on several data sets. | 1609.04836#42 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 43 | In our experiments, we train the 4 image-based (convolutional) networks using aggressive data augmentation and present the results in Table 6. For the augmentation, we use horizontal reflections, random rotations up to 10° and random translation of up to 0.2 times the size of the image. It is evident from the table that, while the LB method achieves accuracy comparable to the SB method (also with training data augmented), the sharpness of the minima still exists, suggesting sensitivity to images contained in neither the training nor the testing set. In this section, we exclude parametric plots and sharpness values for the SB method owing to space constraints and the similarity to those presented in Section 2.2.
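A hedged sketch of such an augmentation pipeline, written with torchvision transforms (the paper does not specify its implementation), could look as follows:

```python
# Illustrative augmentation pipeline matching the description above; the paper's
# own preprocessing code is not given, so all choices here are assumptions.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                          # horizontal reflections
    transforms.RandomRotation(degrees=10),                      # random rotations up to 10 degrees
    transforms.RandomAffine(degrees=0, translate=(0.2, 0.2)),   # translation up to 0.2 x image size
    transforms.ToTensor(),
])
```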
Figure 7: Parametric Plots (Curvilinear) for (a) F1, (b) F2, (c) C1, (d) C2, (e) C3 and (f) C4 (left vertical axis corresponds to cross-entropy loss, f, and right vertical axis corresponds to classification accuracy; solid lines indicate the training data set and dashed lines indicate the testing data set); α = 0 corresponds to the SB minimizer while α = 1 corresponds to the LB minimizer
Table 6: Effect of Data Augmentation | 1609.04836#43 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 44 | Table 6: Effect of Data Augmentation
| Network | Testing Accuracy, Baseline (SB) | Testing Accuracy, Augmented LB | Sharpness (LB method), $\epsilon = 10^{-3}$ | Sharpness (LB method), $\epsilon = 5 \cdot 10^{-4}$ |
|---|---|---|---|---|
| C1 | 83.63% ± 0.14% | 82.50% ± 0.67% | 231.77 ± 30.50 | 45.89 ± 3.82 |
| C2 | 89.82% ± 0.12% | 90.26% ± 1.15% | 468.65 ± 47.86 | 105.22 |
| C3 | 54.55% | 53.03% ± 0.33% | 103.68 ± 11.93 | 37.6 |
| C4 | 63.05% ± 0.5% | 65.88% ± 0.138% | 271.06 ± 29.69 | 45.31 |
Table 7: Effect of Conservative Training | 1609.04836#44 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 45 |
Table 7: Effect of Conservative Training
| Network | Testing Accuracy, Baseline (SB) | Testing Accuracy, Conservative LB | Sharpness (LB method), $\epsilon = 10^{-3}$ | Sharpness (LB method), $\epsilon = 5 \cdot 10^{-4}$ |
|---|---|---|---|---|
| F1 | ± 0.07% | 98.12% ± 0.01% | 63.81 | 46.02 ± 12.58 |
| F2 | 64.02% ± 0.2% | 61.94% ± 1.10% | 51.63 | 190.77 ± 25.33 |
| C1 | 80.04% ± 0.12% | 78.41% ± 0.22% | 34.91 | 171.19 ± 15.13 |
| C2 | 89.24% ± 0.05% | 88.495% ± 0.63% | 108.88 ± 47.36 | |
| C3 | ± 0.39% | 45.98% ± 0.54% | 337.92 | 110.69 |
| C4 | 63.08% ± 0.10% | 62.51 | | |
E.2 CONSERVATIVE TRAINING
In (Li et al., 2014), the authors argue that the convergence rate of SGD for the large-batch setting can be improved by obtaining iterates through the following proximal sub-problem.
$$x_{k+1} = \arg\min_{x} \left\{ \frac{1}{|B_k|} \sum_{i \in B_k} f_i(x) + \frac{\lambda}{2} \left\| x - x_k \right\|_2^2 \right\} \qquad (5)$$ | 1609.04836#45 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 46 |
The motivation for this strategy is, in the context of large-batch methods, to better utilize a batch before moving on to the next one. The minimization problem is solved inexactly using 3-5 iterations of gradient descent, co-ordinate descent or L-BFGS. (Li et al., 2014) report that this not only improves the convergence rate of SGD but also leads to improved empirical performance on convex machine learning problems. The underlying idea of utilizing a batch is not specific to convex problems and we can apply the same framework for deep learning, however, without theoretical guarantees. Indeed, similar algorithms were proposed in (Zhang et al., 2015) and (Mobahi, 2016) for Deep Learning. The former placed emphasis on parallelization of small-batch SGD and asynchrony while the latter on a diffusion-continuation mechanism for training. The results using the conservative training approach are presented in Table 7. In all experiments, we solve the problem (5) using 3 iterations of ADAM and set the regularization parameter λ to be $10^{-3}$. Again, there is a statistically significant improvement in the testing accuracy of the large-batch method but it does not solve the problem of sensitivity. | 1609.04836#46 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
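A rough sketch of this conservative inner loop, under the assumption of a PyTorch-style model and loss (none of these names come from the paper), is shown below:

```python
# Hedged sketch of conservative training: for each large batch B_k, the proximal
# sub-problem (5) is solved inexactly with a few ADAM steps before moving on.
import torch

def conservative_step(model, loss_fn, batch, lam=1e-3, inner_iters=3, lr=1e-3):
    x_k = [p.detach().clone() for p in model.parameters()]   # anchor point x_k
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    inputs, targets = batch
    for _ in range(inner_iters):                              # 3 ADAM iterations on (5)
        opt.zero_grad()
        prox = sum(((p - p0) ** 2).sum() for p, p0 in zip(model.parameters(), x_k))
        objective = loss_fn(model(inputs), targets) + 0.5 * lam * prox
        objective.backward()
        opt.step()
```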
1609.04836 | 47 | # E.3 ROBUST TRAINING
A natural way of avoiding sharp minima is through robust optimization techniques. These methods attempt to optimize a worst-case cost as opposed to the nominal (or true) cost. Mathematically, given an $\epsilon > 0$, these techniques solve the problem
$$\min_{x}\; \phi(x) = \max_{\|\Delta x\| \le \epsilon} f(x + \Delta x) \qquad (6)$$
Geometrically, classical (nominal) optimization attempts to locate the lowest point of a valley, while robust optimization attempts to lower an $\epsilon$-disc down the loss surface. We refer an interested reader to (Bertsimas et al., 2010), and the references therein, for a review of non-convex robust optimization. A direct application of this technique is, however, not feasible in our context since each iteration is prohibitively expensive because it involves solving a large-scale second-order conic program (SOCP).
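For intuition only, the following sketch approximates the inner maximization in (6) with a single gradient-ascent step on the perturbation rather than the exact SOCP the text mentions; the model, loss and epsilon are placeholder assumptions:

```python
# My own illustration of an approximate worst-case loss phi(x) = max_{||dx||<=eps} f(x+dx);
# not the paper's method, which it describes as computationally infeasible here.
import torch

def worst_case_loss(model, loss_fn, inputs, targets, eps=1e-2):
    delta = torch.zeros_like(inputs, requires_grad=True)
    loss = loss_fn(model(inputs + delta), targets)
    grad, = torch.autograd.grad(loss, delta)
    delta = eps * grad / (grad.norm() + 1e-12)        # step to the boundary of the eps-ball
    return loss_fn(model(inputs + delta), targets)    # approximate worst-case cost
```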
Figure 8: Illustration of Robust Optimization | 1609.04836#47 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 48 |
Figure 8: Illustration of Robust Optimization
In the context of Deep Learning, there are two inter-dependent forms of robustness: robustness to the data and robustness to the solution. The former exploits the fact that the function f is inherently a statistical model, while the latter treats f as a black-box function. In (Shaham et al., 2015), the authors prove the equivalence between robustness of the solution (with respect to the data) and adversarial training (Goodfellow et al., 2014a). | 1609.04836#48 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.04836 | 49 | Given the partial success of the data augmentation strategy, it is natural to question the efficacy of adversarial training. As described in (Goodfellow et al., 2014a), adversarial training also aims to artificially increase the training set but, unlike randomized data augmentation, uses the model's sensitivity to construct new examples. Despite its intuitive appeal, in our experiments, we found that this strategy did not improve generalization. Similarly, we observed no generalization benefit from the stability training proposed by (Zheng et al., 2016). In both cases, the testing accuracy, sharpness values and the parametric plots were similar to the unmodified (baseline) case discussed in Section 2. It remains to be seen whether adversarial training (or any other form of robust training) can increase the viability of large-batch training.
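A minimal sketch of the fast-gradient-sign construction of adversarial examples in the spirit of (Goodfellow et al., 2014a) is given below; the epsilon value and the surrounding training loop are illustrative assumptions:

```python
# Hedged illustration of adversarial example generation: training inputs are
# perturbed in the direction of the loss gradient's sign, using the model's sensitivity.
import torch

def fgsm_examples(model, loss_fn, inputs, targets, eps=0.01):
    inputs = inputs.clone().requires_grad_(True)
    loss = loss_fn(model(inputs), targets)
    grad, = torch.autograd.grad(loss, inputs)
    return (inputs + eps * grad.sign()).detach()      # adversarially perturbed copies of the batch
```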
| 1609.04836#49 | On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima |
1609.03499 | 0 | arXiv:1609.03499v2 [cs.SD] 19 Sep 2016
# WAVENET: A GENERATIVE MODEL FOR RAW AUDIO
Aäron van den Oord Sander Dieleman Heiga Zen† Karen Simonyan Oriol Vinyals Alex Graves Nal Kalchbrenner Andrew Senior Koray Kavukcuoglu
{avdnoord, sedielem, heigazen, simonyan, vinyals, gravesa, nalk, andrewsenior, korayk}@google.com Google DeepMind, London, UK † Google, London, UK
# ABSTRACT | 1609.03499#0 | WaveNet: A Generative Model for Raw Audio | This paper introduces WaveNet, a deep neural network for generating raw audio
waveforms. The model is fully probabilistic and autoregressive, with the
predictive distribution for each audio sample conditioned on all previous ones;
nonetheless we show that it can be efficiently trained on data with tens of
thousands of samples per second of audio. When applied to text-to-speech, it
yields state-of-the-art performance, with human listeners rating it as
significantly more natural sounding than the best parametric and concatenative
systems for both English and Mandarin. A single WaveNet can capture the
characteristics of many different speakers with equal fidelity, and can switch
between them by conditioning on the speaker identity. When trained to model
music, we find that it generates novel and often highly realistic musical
fragments. We also show that it can be employed as a discriminative model,
returning promising results for phoneme recognition. | http://arxiv.org/pdf/1609.03499 | Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu | cs.SD, cs.LG | null | null | cs.SD | 20160912 | 20160919 | [
{
"id": "1601.06759"
}
] |
1609.03499 | 1 | # ABSTRACT
This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
# INTRODUCTION
This work explores raw audio generation techniques, inspired by recent advances in neural autoregressive generative models that model complex distributions such as images (van den Oord et al., 2016a;b) and text (Józefowicz et al., 2016). Modeling joint probabilities over pixels or words using neural architectures as products of conditional distributions yields state-of-the-art generation. | 1609.03499#1 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 2 | Remarkably, these architectures are able to model distributions over thousands of random variables (e.g. 64×64 pixels as in PixelRNN (van den Oord et al., 2016a)). The question this paper addresses is whether similar approaches can succeed in generating wideband raw audio waveforms, which are signals with very high temporal resolution, at least 16,000 samples per second (see Fig. 1).
Figure 1: A second of generated speech.
This paper introduces WaveNet, an audio generative model based on the PixelCNN (van den Oord et al., 2016a;b) architecture. The main contributions of this work are as follows:
⢠We show that WaveNets can generate raw speech signals with subjective naturalness never before reported in the ï¬eld of text-to-speech (TTS), as assessed by human raters.
1
⢠In order to deal with long-range temporal dependencies needed for raw audio generation, we develop new architectures based on dilated causal convolutions, which exhibit very large receptive ï¬elds.
⢠We show that when conditioned on a speaker identity, a single model can be used to gener- ate different voices.
⢠The same architecture shows strong results when tested on a small speech recognition dataset, and is promising when used to generate other audio modalities such as music. | 1609.03499#2 | WaveNet: A Generative Model for Raw Audio | This paper introduces WaveNet, a deep neural network for generating raw audio
1609.03499 | 3 | • The same architecture shows strong results when tested on a small speech recognition dataset, and is promising when used to generate other audio modalities such as music.
We believe that WaveNets provide a generic and flexible framework for tackling many applications that rely on audio generation (e.g. TTS, music, speech enhancement, voice conversion, source separation).
# 2 WAVENET
In this paper we introduce a new generative model operating directly on the raw audio waveform. The joint probability of a waveform x = {x1, . . . , xT } is factorised as a product of conditional probabilities as follows:
$$p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}) \qquad (1)$$
Each audio sample $x_t$ is therefore conditioned on the samples at all previous timesteps. | 1609.03499#3 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 4 | Each audio sample $x_t$ is therefore conditioned on the samples at all previous timesteps.
Similarly to PixelCNNs (van den Oord et al., 2016a;b), the conditional probability distribution is modelled by a stack of convolutional layers. There are no pooling layers in the network, and the output of the model has the same time dimensionality as the input. The model outputs a categorical distribution over the next value $x_t$ with a softmax layer and it is optimized to maximize the log-likelihood of the data w.r.t. the parameters. Because log-likelihoods are tractable, we tune hyper-parameters on a validation set and can easily measure if the model is overfitting or underfitting.
2.1 DILATED CAUSAL CONVOLUTIONS
Figure 2: Visualization of a stack of causal convolutional layers. | 1609.03499#4 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 5 | Figure 2: Visualization of a stack of causal convolutional layers.
The main ingredient of WaveNet is the causal convolution. By using causal convolutions, we make sure the model cannot violate the ordering in which we model the data: the prediction $p(x_{t+1} \mid x_1, \ldots, x_t)$ emitted by the model at timestep $t$ cannot depend on any of the future timesteps $x_{t+1}, x_{t+2}, \ldots, x_T$ as shown in Fig. 2. For images, the equivalent of a causal convolution is a masked convolution (van den Oord et al., 2016a) which can be implemented by constructing a mask tensor and doing an elementwise multiplication of this mask with the convolution kernel before applying it. For 1-D data such as audio one can more easily implement this by shifting the output of a normal convolution by a few timesteps.
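A minimal sketch of a 1-D causal convolution implemented by left-padding the input (one common way to realize the shifting described above; not the paper's code) is:

```python
# The output at time t only sees inputs x_1..x_t because padding is applied on the left only.
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    def __init__(self, channels, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation            # pad only on the left
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                  # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))
        return self.conv(x)
```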
At training time, the conditional predictions for all timesteps can be made in parallel because all timesteps of ground truth x are known. When generating with the model, the predictions are sequential: after each sample is predicted, it is fed back into the network to predict the next sample.
| 1609.03499#5 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 6 |
Because models with causal convolutions do not have recurrent connections, they are typically faster to train than RNNs, especially when applied to very long sequences. One of the problems of causal convolutions is that they require many layers, or large filters, to increase the receptive field. For example, in Fig. 2 the receptive field is only 5 (= #layers + filter length - 1). In this paper we use dilated convolutions to increase the receptive field by orders of magnitude, without greatly increasing computational cost. | 1609.03499#6 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 7 | A dilated convolution (also called à trous, or convolution with holes) is a convolution where the filter is applied over an area larger than its length by skipping input values with a certain step. It is equivalent to a convolution with a larger filter derived from the original filter by dilating it with zeros, but is significantly more efficient. A dilated convolution effectively allows the network to operate on a coarser scale than with a normal convolution. This is similar to pooling or strided convolutions, but here the output has the same size as the input. As a special case, dilated convolution with dilation 1 yields the standard convolution. Fig. 3 depicts dilated causal convolutions for dilations 1, 2, 4, and 8. Dilated convolutions have previously been used in various contexts, e.g. signal processing (Holschneider et al., 1989; Dutilleux, 1989), and image segmentation (Chen et al., 2015; Yu & Koltun, 2016).
| 1609.03499#7 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 8 | Figure 3: Visualization of a stack of dilated causal convolutional layers.
Stacked dilated convolutions enable networks to have very large receptive fields with just a few layers, while preserving the input resolution throughout the network as well as computational efficiency. In this paper, the dilation is doubled for every layer up to a limit and then repeated: e.g.
1, 2, 4, . . . , 512, 1, 2, 4, . . . , 512, 1, 2, 4, . . . , 512. The intuition behind this configuration is two-fold. First, exponentially increasing the dilation factor results in exponential receptive field growth with depth (Yu & Koltun, 2016). For example each 1, 2, 4, . . . , 512 block has receptive field of size 1024, and can be seen as a more efficient and discriminative (non-linear) counterpart of a 1×1024 convolution. Second, stacking these blocks further increases the model capacity and the receptive field size.
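As a small illustration (my own sketch, not the paper's code), the dilation schedule and the receptive field it produces for a kernel size of 2 can be computed as follows:

```python
# Dilation schedule "1, 2, 4, ..., 512" repeated, and its receptive field.
def dilation_schedule(max_dilation=512, repeats=3):
    block = [2 ** i for i in range(max_dilation.bit_length())]   # 1, 2, 4, ..., 512
    return block * repeats

def receptive_field(dilations, kernel_size=2):
    return 1 + sum((kernel_size - 1) * d for d in dilations)

dils = dilation_schedule()          # three repeated 1..512 blocks (30 layers)
print(receptive_field(dils))        # 3070; a single 1, 2, ..., 512 block alone gives 1024
```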
2.2 SOFTMAX DISTRIBUTIONS | 1609.03499#8 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 9 | 2.2 SOFTMAX DISTRIBUTIONS
One approach to modeling the conditional distributions $p(x_t \mid x_1, \ldots, x_{t-1})$ over the individual audio samples would be to use a mixture model such as a mixture density network (Bishop, 1994) or mixture of conditional Gaussian scale mixtures (MCGSM) (Theis & Bethge, 2015). However, van den Oord et al. (2016a) showed that a softmax distribution tends to work better, even when the data is implicitly continuous (as is the case for image pixel intensities or audio sample values). One of the reasons is that a categorical distribution is more flexible and can more easily model arbitrary distributions because it makes no assumptions about their shape.
Because raw audio is typically stored as a sequence of 16-bit integer values (one per timestep), a softmax layer would need to output 65,536 probabilities per timestep to model all possible values. To make this more tractable, we first apply a µ-law companding transformation (ITU-T, 1988) to the data, and then quantize it to 256 possible values:
$$f(x_t) = \operatorname{sign}(x_t)\, \frac{\ln(1 + \mu |x_t|)}{\ln(1 + \mu)},$$
| 1609.03499#9 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 10 | $$f(x_t) = \operatorname{sign}(x_t)\, \frac{\ln(1 + \mu |x_t|)}{\ln(1 + \mu)},$$
where $-1 < x_t < 1$ and $\mu = 255$. This non-linear quantization produces a significantly better reconstruction than a simple linear quantization scheme. Especially for speech, we found that the reconstructed signal after quantization sounded very similar to the original.
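A sketch of µ-law companding and 256-level quantization as described above, written with numpy (the paper's preprocessing code is not given, so the rounding details are assumptions), is:

```python
# mu-law encode/decode with mu = 255 and 256 integer quantization bins.
import numpy as np

MU = 255

def mu_law_encode(x):                                   # x in (-1, 1)
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return ((y + 1) / 2 * MU + 0.5).astype(np.int64)    # quantize to 0..255

def mu_law_decode(q):
    y = 2 * (q.astype(np.float64) / MU) - 1
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU
```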
2.3 GATED ACTIVATION UNITS
We use the same gated activation unit as used in the gated PixelCNN (van den Oord et al., 2016b):
$$\mathbf{z} = \tanh\left(W_{f,k} * \mathbf{x}\right) \odot \sigma\left(W_{g,k} * \mathbf{x}\right), \qquad (2)$$
where $*$ denotes a convolution operator, $\odot$ denotes an element-wise multiplication operator, $\sigma(\cdot)$ is a sigmoid function, $k$ is the layer index, $f$ and $g$ denote filter and gate, respectively, and $W$ is a learnable convolution filter. In our initial experiments, we observed that this non-linearity worked significantly better than the rectified linear activation function (Nair & Hinton, 2010) for modeling audio signals.
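A minimal sketch of this gated activation unit is shown below; the kernel size, the padding-and-crop scheme used to keep causality, and the class name are assumptions for illustration:

```python
# Gated activation of Eq. (2): tanh(filter conv) elementwise-multiplied by sigmoid(gate conv).
import torch
import torch.nn as nn

class GatedActivation(nn.Module):
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation, padding=pad)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation, padding=pad)

    def forward(self, x):                               # x: (batch, channels, time)
        t = x.size(-1)                                  # crop the right side to keep causality
        return torch.tanh(self.filter_conv(x)[..., :t]) * torch.sigmoid(self.gate_conv(x)[..., :t])
```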
# 2.4 RESIDUAL AND SKIP CONNECTIONS
Figure 4: Overview of the residual block and the entire architecture. | 1609.03499#10 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 11 | Figure 4: Overview of the residual block and the entire architecture.
Both residual (He et al., 2015) and parameterised skip connections are used throughout the network, to speed up convergence and enable training of much deeper models. In Fig. 4 we show a residual block of our model, which is stacked many times in the network.
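A hedged sketch of one residual block with a parameterised skip connection, following the structure of Figure 4 (channel sizes, kernel size and names are illustrative assumptions, not the paper's implementation):

```python
# Residual block: gated activation, then 1x1 projections onto the residual and skip paths.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, residual_ch, skip_ch, kernel_size=2, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation
        self.filter_conv = nn.Conv1d(residual_ch, residual_ch, kernel_size, dilation=dilation, padding=pad)
        self.gate_conv = nn.Conv1d(residual_ch, residual_ch, kernel_size, dilation=dilation, padding=pad)
        self.res_proj = nn.Conv1d(residual_ch, residual_ch, 1)   # 1x1 conv back onto the residual path
        self.skip_proj = nn.Conv1d(residual_ch, skip_ch, 1)      # 1x1 conv onto the skip path

    def forward(self, x):
        t = x.size(-1)
        z = torch.tanh(self.filter_conv(x)[..., :t]) * torch.sigmoid(self.gate_conv(x)[..., :t])
        return x + self.res_proj(z), self.skip_proj(z)           # residual output and skip contribution
```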
2.5 CONDITIONAL WAVENETS
Given an additional input h, WaveNets can model the conditional distribution p (x | h) of the audio given this input. Eq. (1) now becomes
$$p(\mathbf{x} \mid \mathbf{h}) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}, \mathbf{h}). \qquad (3)$$
By conditioning the model on other input variables, we can guide WaveNet's generation to produce audio with the required characteristics. For example, in a multi-speaker setting we can choose the speaker by feeding the speaker identity to the model as an extra input. Similarly, for TTS we need to feed information about the text as an extra input.
We condition the model on other inputs in two different ways: global conditioning and local conditioning. Global conditioning is characterised by a single latent representation $\mathbf{h}$ that influences the output distribution across all timesteps, e.g. a speaker embedding in a TTS model. The activation function from Eq. (2) now becomes: | 1609.03499#11 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 12 | $$\mathbf{z} = \tanh\left(W_{f,k} * \mathbf{x} + V_{f,k}^{T}\mathbf{h}\right) \odot \sigma\left(W_{g,k} * \mathbf{x} + V_{g,k}^{T}\mathbf{h}\right).$$
where $V_{*,k}$ is a learnable linear projection, and the vector $V_{*,k}^{T}\mathbf{h}$ is broadcast over the time dimension. For local conditioning we have a second timeseries $h_t$, possibly with a lower sampling frequency than the audio signal, e.g. linguistic features in a TTS model. We first transform this time series using a transposed convolutional network (learned upsampling) that maps it to a new time series $\mathbf{y} = f(\mathbf{h})$ with the same resolution as the audio signal, which is then used in the activation unit as follows:
$$\mathbf{z} = \tanh\left(W_{f,k} * \mathbf{x} + V_{f,k} * \mathbf{y}\right) \odot \sigma\left(W_{g,k} * \mathbf{x} + V_{g,k} * \mathbf{y}\right),$$
where $V_{f,k} * \mathbf{y}$ is now a 1×1 convolution. As an alternative to the transposed convolutional network, it is also possible to use $V_{f,k} * \mathbf{h}$ and repeat these values across time. We saw that this worked slightly worse in our experiments.
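A hedged sketch of global conditioning on a per-speaker embedding, following the first equation above (the local-conditioning variant would replace the linear projections with 1×1 convolutions over the upsampled features); all sizes and names are assumptions:

```python
# Gated activation with a global conditioning vector h broadcast over time.
import torch
import torch.nn as nn

class GloballyConditionedGate(nn.Module):
    def __init__(self, channels, cond_dim, kernel_size=2, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation, padding=pad)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation, padding=pad)
        self.filter_cond = nn.Linear(cond_dim, channels, bias=False)   # plays the role of V_{f,k}
        self.gate_cond = nn.Linear(cond_dim, channels, bias=False)     # plays the role of V_{g,k}

    def forward(self, x, h):                  # x: (B, C, T), h: (B, cond_dim), e.g. a speaker embedding
        t = x.size(-1)
        f = self.filter_conv(x)[..., :t] + self.filter_cond(h).unsqueeze(-1)   # broadcast over time
        g = self.gate_conv(x)[..., :t] + self.gate_cond(h).unsqueeze(-1)
        return torch.tanh(f) * torch.sigmoid(g)
```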
2.6 CONTEXT STACKS | 1609.03499#12 | WaveNet: A Generative Model for Raw Audio |
1609.03499 | 13 | 2.6 CONTEXT STACKS
We have already mentioned several different ways to increase the receptive field size of a WaveNet: increasing the number of dilation stages, using more layers, larger filters, greater dilation factors, or a combination thereof. A complementary approach is to use a separate, smaller context stack that processes a long part of the audio signal and locally conditions a larger WaveNet that processes only a smaller part of the audio signal (cropped at the end). One can use multiple context stacks with varying lengths and numbers of hidden units. Stacks with larger receptive fields have fewer units per layer. Context stacks can also have pooling layers to run at a lower frequency. This keeps the computational requirements at a reasonable level and is consistent with the intuition that less capacity is required to model temporal correlations at longer timescales.
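For reference, the receptive field of a stack of dilated causal convolutions can be computed with the small helper below; the dilation pattern shown is only an example and not necessarily the configuration used in the experiments.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in samples) of stacked dilated causal convolutions."""
    return 1 + (kernel_size - 1) * sum(dilations)

# Example: kernel size 2 with dilations 1, 2, 4, ..., 512, repeated three times.
dilations = [2 ** i for i in range(10)] * 3
n = receptive_field(2, dilations)
print(n, "samples =", round(n / 16000, 3), "s at 16 kHz")   # 3070 samples = 0.192 s
```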
# 3 EXPERIMENTS
To measure WaveNet's audio modelling performance, we evaluate it on three different tasks: multi-speaker speech generation (not conditioned on text), TTS, and music audio modelling. We provide samples drawn from WaveNet for these experiments on the accompanying webpage: https://www.deepmind.com/blog/wavenet-generative-model-raw-audio/.

3.1 MULTI-SPEAKER SPEECH GENERATION
For the first experiment we looked at free-form speech generation (not conditioned on text). We used the English multi-speaker corpus from the CSTR voice cloning toolkit (VCTK) (Yamagishi, 2012) and conditioned WaveNet only on the speaker. The conditioning was applied by feeding the speaker ID to the model in the form of a one-hot vector. The dataset consisted of 44 hours of data from 109 different speakers.
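Purely as an illustration, the one-hot speaker identity can be built as below and then used as the global conditioning vector h from Section 2.5; the use of PyTorch and the tensor shapes are assumptions, not a description of the original implementation.

```python
import torch
import torch.nn.functional as F

num_speakers = 109                       # speakers in the VCTK subset used here
speaker_id = torch.tensor([17])          # arbitrary example speaker index

# One-hot vector playing the role of h in the global-conditioning equation.
h = F.one_hot(speaker_id, num_classes=num_speakers).float()   # shape (1, 109)
```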
Because the model is not conditioned on text, it generates non-existent but human language-like words in a smooth way with realistic sounding intonations. This is similar to generative models of language or images, where samples look realistic at first glance, but are clearly unnatural upon closer inspection. The lack of long range coherence is partly due to the limited size of the model's receptive field (about 300 milliseconds), which means it can only remember the last 2–3 phonemes it produced.
A single WaveNet was able to model speech from any of the speakers by conditioning it on a one-hot encoding of a speaker. This confirms that it is powerful enough to capture the characteristics of all 109 speakers from the dataset in a single model. We observed that adding speakers resulted in better validation set performance compared to training solely on a single speaker. This suggests that WaveNet's internal representation was shared among multiple speakers.
Finally, we observed that the model also picked up on other characteristics in the audio apart from the voice itself. For instance, it also mimicked the acoustics and recording quality, as well as the breathing and mouth movements of the speakers.
3.2 TEXT-TO-SPEECH
For the second experiment we looked at TTS. We used the same single-speaker speech databases from which Google's North American English and Mandarin Chinese TTS systems are built. The North American English dataset contains 24.6 hours of speech data, and the Mandarin Chinese dataset contains 34.8 hours; both were spoken by professional female speakers.
WaveNets for the TTS task were locally conditioned on linguistic features which were derived from input texts. We also trained WaveNets conditioned on the logarithmic fundamental frequency (log F0) values in addition to the linguistic features. External models predicting log F0 values and phone durations from linguistic features were also trained for each language. The receptive field size of the WaveNets was 240 milliseconds. As example-based and model-based speech synthesis baselines, hidden Markov model (HMM)-driven unit selection concatenative (Gonzalvo et al., 2016) and long short-term memory recurrent neural network (LSTM-RNN)-based statistical parametric (Zen et al., 2016) speech synthesizers were built. Since the same datasets and linguistic features were used to train both the baselines and WaveNets, these speech synthesizers could be fairly compared.
To evaluate the performance of WaveNets for the TTS task, subjective paired comparison tests and mean opinion score (MOS) tests were conducted. In the paired comparison tests, after listening to each pair of samples, the subjects were asked to choose which they preferred, though they could choose "neutral" if they did not have any preference. In the MOS tests, after listening to each stimulus, the subjects were asked to rate the naturalness of the stimulus on a five-point Likert scale (1: Bad, 2: Poor, 3: Fair, 4: Good, 5: Excellent). Please refer to Appendix B for details.
Fig. 5 shows a selection of the subjective paired comparison test results (see Appendix B for the complete table). It can be seen from the results that WaveNet outperformed the baseline statistical parametric and concatenative speech synthesizers in both languages. We found that WaveNet conditioned on linguistic features could synthesize speech samples with natural segmental quality, but sometimes it had unnatural prosody by stressing wrong words in a sentence. This could be due to the long-term dependency of F0 contours: the size of the receptive field of the WaveNet, 240 milliseconds, was not long enough to capture such long-term dependency. WaveNet conditioned on both linguistic features and F0 values did not have this problem: the external F0 prediction model runs at a lower frequency (200 Hz) so it can learn long-range dependencies that exist in F0 contours.

Table 1 shows the MOS test results. It can be seen from the table that WaveNets achieved 5-scale MOSs in naturalness above 4.0, which were significantly better than those from the baseline systems. They were the highest ever reported MOS values with these training datasets and test sentences. The gap in the MOSs from the best synthetic speech to the natural ones decreased from 0.69 to 0.34 (51%) in US English and from 0.42 to 0.13 (69%) in Mandarin Chinese.
Subjective 5-scale MOS in naturalness

Speech samples               | North American English | Mandarin Chinese
LSTM-RNN parametric          | 3.67 ± 0.098           | 3.79 ± 0.084
HMM-driven concatenative     | 3.86 ± 0.137           | 3.47 ± 0.108
WaveNet (L+F)                | 4.21 ± 0.081           | 4.08 ± 0.085
Natural (8-bit µ-law)        | 4.46 ± 0.067           | 4.25 ± 0.082
Natural (16-bit linear PCM)  | 4.55 ± 0.075           | 4.21 ± 0.071

Table 1: Subjective 5-scale mean opinion scores of speech samples from LSTM-RNN-based statistical parametric, HMM-driven unit selection concatenative, and proposed WaveNet-based speech synthesizers, 8-bit µ-law encoded natural speech, and 16-bit linear pulse-code modulation (PCM) natural speech. WaveNet improved the previous state of the art significantly, reducing the gap between natural speech and best previous model by more than 50%.
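The relative gap reductions quoted in the text can be checked directly against Table 1; the short calculation below reproduces the 51% and 69% figures (the rounding is mine).

```python
# MOS values from Table 1 (natural 16-bit PCM speech vs. synthesizers).
natural = {"English": 4.55, "Mandarin": 4.21}
best_baseline = {"English": 3.86, "Mandarin": 3.79}   # best non-WaveNet system
wavenet = {"English": 4.21, "Mandarin": 4.08}

for lang in natural:
    old_gap = natural[lang] - best_baseline[lang]
    new_gap = natural[lang] - wavenet[lang]
    reduction = 100 * (old_gap - new_gap) / old_gap
    print(f"{lang}: gap {old_gap:.2f} -> {new_gap:.2f} ({reduction:.0f}% smaller)")
# English: gap 0.69 -> 0.34 (51% smaller)
# Mandarin: gap 0.42 -> 0.13 (69% smaller)
```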
3.3 MUSIC
For our third set of experiments we trained WaveNets to model two music datasets:
Figure 5: Subjective preference scores (%) of speech samples between (top) two baselines, (middle) two WaveNets, and (bottom) the best baseline and WaveNet. Note that LSTM and Concat correspond to the LSTM-RNN-based statistical parametric and HMM-driven unit selection concatenative baseline synthesizers, and WaveNet (L) and WaveNet (L+F) correspond to the WaveNet conditioned on linguistic features only and that conditioned on both linguistic features and log F0 values.
• the MagnaTagATune dataset (Law & Von Ahn, 2009), which consists of about 200 hours of music audio. Each 29-second clip is annotated with tags from a set of 188, which describe the genre, instrumentation, tempo, volume and mood of the music.
• the YouTube piano dataset, which consists of about 60 hours of solo piano music obtained from YouTube videos. Because it is constrained to a single instrument, it is considerably easier to model.

Although it is difficult to quantitatively evaluate these models, a subjective evaluation is possible by listening to the samples they produce. We found that enlarging the receptive field was crucial to obtain samples that sounded musical. Even with a receptive field of several seconds, the models did not enforce long-range consistency, which resulted in second-to-second variations in genre, instrumentation, volume and sound quality. Nevertheless, the samples were often harmonic and aesthetically pleasing, even when produced by unconditional models.

Of particular interest are conditional music models, which can generate music given a set of tags specifying e.g. genre or instruments. Similarly to conditional speech models, we insert biases that depend on a binary vector representation of the tags associated with each training clip. This makes it possible to control various aspects of the output of the model when sampling, by feeding in a binary vector that encodes the desired properties of the samples. We have trained such models on the MagnaTagATune dataset; although the tag data bundled with the dataset was relatively noisy and had many omissions, after cleaning it up by merging similar tags and removing those with too few associated clips, we found this approach to work reasonably well.
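As a sketch of the tag-conditioning idea, a multi-hot tag vector can be projected into per-layer biases in the same way as the global speaker vector; the vocabulary size of 188 comes from the dataset description above, while the channel count, tag indices, and module names are assumptions.

```python
import torch
import torch.nn as nn

num_tags, channels = 188, 128            # 188 MagnaTagATune tags; channels assumed

# Multi-hot vector for a clip carrying two (arbitrary) tags.
tags = torch.zeros(1, num_tags)
tags[0, [42, 101]] = 1.0

# Biases inserted into one layer's gated activation (one pair per layer).
to_filter_bias = nn.Linear(num_tags, channels, bias=False)
to_gate_bias = nn.Linear(num_tags, channels, bias=False)
filter_bias = to_filter_bias(tags).unsqueeze(-1)   # broadcast over time
gate_bias = to_gate_bias(tags).unsqueeze(-1)
```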
3.4 SPEECH RECOGNITION

Although WaveNet was designed as a generative model, it can straightforwardly be adapted to discriminative audio tasks such as speech recognition.

Traditionally, speech recognition research has largely focused on using log mel-filterbank energies or mel-frequency cepstral coefficients (MFCCs), but has been moving to raw audio recently (Palaz et al., 2013; Tüske et al., 2014; Hoshen et al., 2015; Sainath et al., 2015). Recurrent neural networks such as LSTM-RNNs (Hochreiter & Schmidhuber, 1997) have been a key component in these new speech classification pipelines, because they allow for building models with long range contexts. With WaveNets we have shown that layers of dilated convolutions allow the receptive field to grow longer in a much cheaper way than using LSTM units.
As a last experiment we looked at speech recognition with WaveNets on the TIMIT (Garofolo et al., 1993) dataset. For this task we added a mean-pooling layer after the dilated convolutions that aggregated the activations to coarser frames spanning 10 milliseconds (160× downsampling). The pooling layer was followed by a few non-causal convolutions. We trained WaveNet with two loss terms, one to predict the next sample and one to classify the frame; the model generalized better than with a single loss and achieved 18.8 PER on the test set, which is to our knowledge the best score obtained from a model trained directly on raw audio on TIMIT.
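A minimal sketch of the recognition head described above: mean-pool the dilated-convolution activations into 10 ms frames (160× downsampling at 16 kHz), apply a few non-causal convolutions, and train a frame classifier jointly with the next-sample predictor. Channel counts, kernel sizes, the number of phone classes, and the loss weighting are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RecognitionHead(nn.Module):
    """Maps WaveNet activations (batch, channels, samples) to per-frame phone logits."""

    def __init__(self, channels=128, num_phones=61, downsample=160):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=downsample)      # mean-pool to 10 ms frames
        self.post = nn.Sequential(                            # non-causal convolutions
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, num_phones, kernel_size=1),
        )

    def forward(self, activations):
        return self.post(self.pool(activations))              # (batch, phones, frames)

def joint_loss(sample_logits, sample_targets, frame_logits, frame_targets, alpha=1.0):
    """Next-sample prediction loss plus frame classification loss."""
    ce = nn.CrossEntropyLoss()
    return ce(sample_logits, sample_targets) + alpha * ce(frame_logits, frame_targets)
```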
# 4 CONCLUSION
This paper has presented WaveNet, a deep generative model of audio data that operates directly at the waveform level. WaveNets are autoregressive and combine causal filters with dilated convolutions to allow their receptive fields to grow exponentially with depth, which is important to model the long-range temporal dependencies in audio signals. We have shown how WaveNets can be conditioned on other inputs in a global (e.g. speaker identity) or local way (e.g. linguistic features). When applied to TTS, WaveNets produced samples that outperform the current best TTS systems in subjective naturalness. Finally, WaveNets showed very promising results when applied to music audio modeling and speech recognition.
# ACKNOWLEDGEMENTS
The authors would like to thank Lasse Espeholt, Jeffrey De Fauw and Grzegorz Swirszcz for their inputs, Adam Cain, Max Cant and Adrian Bolton for their help with artwork, Helen King, Steven Gaffney and Steve Crossan for helping to manage the project, Faith Mackinder for help with preparing the blogpost, James Besley for legal support and Demis Hassabis for managing the project and his inputs.

# REFERENCES
Agiomyrgiannakis, Yannis. Vocaine the vocoder and applications in speech synthesis. In ICASSP, pp. 4230–4234, 2015.
Bishop, Christopher M. Mixture density networks. Technical Report NCRG/94/004, Neural Computing Research Group, Aston University, 1994.
Chen, Liang-Chieh, Papandreou, George, Kokkinos, Iasonas, Murphy, Kevin, and Yuille, Alan L. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015. URL http://arxiv.org/abs/1412.7062.
Chiba, Tsutomu and Kajiyama, Masato. The Vowel: Its Nature and Structure. Tokyo-Kaiseikan, 1942.
Dudley, Homer. Remaking speech. The Journal of the Acoustical Society of America, 11(2):169–177, 1939.
Dutilleux, Pierre. An implementation of the "algorithme à trous" to compute the wavelet transform. In Combes, Jean-Michel, Grossmann, Alexander, and Tchamitchian, Philippe (eds.), Wavelets: Time-Frequency Methods and Phase Space, pp. 298–304. Springer Berlin Heidelberg, 1989.
Fan, Yuchen, Qian, Yao, Xie, Feng-Long, and Soong, Frank K. TTS synthesis with bidirectional LSTM based recurrent neural networks. In Interspeech, pp. 1964–1968, 2014.
Fant, Gunnar. Acoustic Theory of Speech Production. Mouton De Gruyter, 1970.
Garofolo, John S., Lamel, Lori F., Fisher, William M., Fiscus, Jonathon G., and Pallett, David S. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report, 93, 1993.
Gonzalvo, Xavi, Tazari, Siamak, Chan, Chun-an, Becker, Markus, Gutkin, Alexander, and Silen, Hanna. Recent advances in Google real-time HMM-driven unit selection synthesizer. In Interspeech, 2016. URL http://research.google.com/pubs/pub45564.html.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735–1780, 1997.
Holschneider, Matthias, Kronland-Martinet, Richard, Morlet, Jean, and Tchamitchian, Philippe. A real-time algorithm for signal analysis with the help of the wavelet transform. In Combes, Jean-Michel, Grossmann, Alexander, and Tchamitchian, Philippe (eds.), Wavelets: Time-Frequency Methods and Phase Space, pp. 286–297. Springer Berlin Heidelberg, 1989.
Hoshen, Yedid, Weiss, Ron J., and Wilson, Kevin W. Speech acoustic modeling from raw multichannel waveforms. In ICASSP, pp. 4624–4628. IEEE, 2015.
Hunt, Andrew J. and Black, Alan W. Unit selection in a concatenative speech synthesis system using a large speech database. In ICASSP, pp. 373–376, 1996.
Imai, Satoshi and Furuichi, Chieko. Unbiased estimation of log spectrum. In EURASIP, pp. 203–206, 1988.
Itakura, Fumitada. Line spectrum representation of linear predictor coefficients of speech signals. The Journal of the Acoust. Society of America, 57(S1):S35–S35, 1975.
Itakura, Fumitada and Saito, Shuzo. A statistical method for estimation of speech spectral density and formant frequencies. Trans. IEICE, J53A:35–42, 1970.
ITU-T. Recommendation G. 711. Pulse Code Modulation (PCM) of voice frequencies, 1988.
Józefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. URL http://arxiv.org/abs/1602.02410.
Juang, Biing-Hwang and Rabiner, Lawrence. Mixture autoregressive hidden Markov models for speech signals. IEEE Trans. Acoust. Speech Signal Process., pp. 1404–1413, 1985.
Kameoka, Hirokazu, Ohishi, Yasunori, Mochihashi, Daichi, and Le Roux, Jonathan. Speech analysis with multi-kernel linear prediction. In Spring Conference of ASJ, pp. 499–502, 2010. (in Japanese).
Karaali, Orhan, Corrigan, Gerald, Gerson, Ira, and Massey, Noel. Text-to-speech conversion with neural networks: A recurrent TDNN approach. In Eurospeech, pp. 561–564, 1997.
Kawahara, Hideki, Masuda-Katsuse, Ikuyo, and de Cheveigné, Alain. Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based f0 extraction: possible role of a repetitive structure in sounds. Speech Commn., 27:187–207, 1999.
Kawahara, Hideki, Estill, Jo, and Fujimura, Osamu. Aperiodicity extraction and control using mixed mode excitation and group delay manipulation for a high quality speech analysis, modification and synthesis system STRAIGHT. In MAVEBA, pp. 13–15, 2001.
Law, Edith and Von Ahn, Luis. Input-agreement: a new mechanism for collecting data using human computation games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1197–1206. ACM, 2009.
Maia, Ranniery, Zen, Heiga, and Gales, Mark J. F. Statistical parametric speech synthesis with joint estimation of acoustic and excitation model parameters. In ISCA SSW7, pp. 88–93, 2010.
Morise, Masanori, Yokomori, Fumiya, and Ozawa, Kenji. WORLD: A vocoder-based high-quality speech synthesis system for real-time applications. IEICE Trans. Inf. Syst., E99-D(7):1877–1884, 2016.
Moulines, Eric and Charpentier, Francis. Pitch synchronous waveform processing techniques for text-to-speech synthesis using diphones. Speech Commn., 9:453–467, 1990.
Muthukumar, P. and Black, Alan W. A deep learning approach to data-driven parameterizations for statistical parametric speech synthesis. arXiv:1409.8558, 2014.
Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In ICML, pp. 807–814, 2010.
Nakamura, Kazuhiro, Hashimoto, Kei, Nankaku, Yoshihiko, and Tokuda, Keiichi. Integration of spectral feature extraction and modeling for HMM-based speech synthesis. IEICE Trans. Inf. Syst., E97-D(6):1438–1448, 2014.
Palaz, Dimitri, Collobert, Ronan, and Magimai-Doss, Mathew. Estimating phoneme class conditional probabilities from raw speech signal using convolutional neural networks. In Interspeech, pp. 1766–1770, 2013.
Peltonen, Sari, Gabbouj, Moncef, and Astola, Jaakko. Nonlinear filter design: methodologies and challenges. In IEEE ISPA, pp. 102–107, 2001.
Poritz, Alan B. Linear predictive hidden Markov models and the speech signal. In ICASSP, pp. 1291–1294, 1982.
Rabiner, Lawrence and Juang, Biing-Hwang. Fundamentals of Speech Recognition. Prentice Hall, 1993.
Sagisaka, Yoshinori, Kaiki, Nobuyoshi, Iwahashi, Naoto, and Mimura, Katsuhiko. ATR ν-talk speech synthesis system. In ICSLP, pp. 483–486, 1992.
Sainath, Tara N., Weiss, Ron J., Senior, Andrew, Wilson, Kevin W., and Vinyals, Oriol. Learning the speech front-end with raw waveform CLDNNs. In Interspeech, pp. 1–5, 2015.
Takaki, Shinji and Yamagishi, Junichi. A deep auto-encoder based low-dimensional feature extraction from FFT spectral envelopes for statistical parametric speech synthesis. In ICASSP, pp. 5535–5539, 2016.
Takamichi, Shinnosuke, Toda, Tomoki, Black, Alan W., Neubig, Graham, Sakti, Sakriani, and Nakamura, Satoshi. Postfilters to modify the modulation spectrum for statistical parametric speech synthesis. IEEE/ACM Trans. Audio Speech Lang. Process., 24(4):755–767, 2016.
Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. In NIPS, pp. 1927–1935, 2015.
Toda, Tomoki and Tokuda, Keiichi. A speech parameter generation algorithm considering global variance for HMM-based speech synthesis. IEICE Trans. Inf. Syst., E90-D(5):816–824, 2007.
Toda, Tomoki and Tokuda, Keiichi. Statistical approach to vocal tract transfer function estimation based on factor analyzed trajectory HMM. In ICASSP, pp. 3925–3928, 2008.
Tokuda, Keiichi. Speech synthesis as a statistical machine learning problem. http://www.sp.nitech.ac.jp/~tokuda/tokuda_asru2011_for_pdf.pdf, 2011. Invited talk given at ASRU.
Tokuda, Keiichi and Zen, Heiga. Directly modeling speech waveforms by neural networks for statistical parametric speech synthesis. In ICASSP, pp. 4215–4219, 2015.
Tokuda, Keiichi and Zen, Heiga. Directly modeling voiced and unvoiced components in speech waveforms by neural networks. In ICASSP, pp. 5640–5644, 2016.
Tuerk, Christine and Robinson, Tony. Speech synthesis using artificial neural networks trained on cepstral coefficients. In Proc. Eurospeech, pp. 1713–1716, 1993.
Tüske, Zoltán, Golik, Pavel, Schlüter, Ralf, and Ney, Hermann. Acoustic modeling with deep neural networks using raw time signal for LVCSR. In Interspeech, pp. 890–894, 2014.
Uria, Benigno, Murray, Iain, Renals, Steve, Valentini-Botinhao, Cassia, and Bridle, John. Modelling acoustic feature dependencies with artificial neural networks: Trajectory-RNADE. In ICASSP, pp. 4465–4469, 2015.
van den Oord, Aäron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a.
van den Oord, Aäron, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016b. URL http://arxiv.org/abs/1606.05328.
Wu, Yi-Jian and Tokuda, Keiichi. Minimum generation error training with direct log spectral distortion on LSPs for HMM-based speech synthesis. In Interspeech, pp. 577–580, 2008.
Yamagishi, Junichi. English multi-speaker corpus for CSTR voice cloning toolkit, 2012. URL http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html.
Yoshimura, Takayoshi. Simultaneous modeling of phonetic and prosodic parameters, and characteristic conversion for HMM-based text-to-speech systems. PhD thesis, Nagoya Institute of Technology, 2002.
Yu, Fisher and Koltun, Vladlen. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016. URL http://arxiv.org/abs/1511.07122.
Zen, Heiga. An example of context-dependent label format for HMM-based speech synthesis in English, 2006. URL http://hts.sp.nitech.ac.jp/?Download.

Figure 6: Outline of statistical parametric speech synthesis.

Zen, Heiga, Tokuda, Keiichi, and Kitamura, Tadashi. Reformulating the HMM as a trajectory model by imposing explicit relationships between static and dynamic features. Comput. Speech Lang., 21(1):153–173, 2007.
Zen, Heiga, Tokuda, Keiichi, and Black, Alan W. Statistical parametric speech synthesis. Speech Commn., 51(11):1039–1064, 2009.
Zen, Heiga, Senior, Andrew, and Schuster, Mike. Statistical parametric speech synthesis using deep neural networks. In Proc. ICASSP, pp. 7962–7966, 2013.
Zen, Heiga, Agiomyrgiannakis, Yannis, Egberts, Niels, Henderson, Fergus, and Szczepaniak, Przemysław. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices. In Interspeech, 2016. URL https://arxiv.org/abs/1606.06061.
# A TEXT-TO-SPEECH BACKGROUND
The goal of TTS synthesis is to render naturally sounding speech signals given a text to be synthesized. The human speech production process first translates a text (or concept) into movements of muscles associated with articulators and speech production-related organs. Then, using airflow from the lungs, vocal source excitation signals, which contain both periodic (by vocal cord vibration) and aperiodic (by turbulent noise) components, are generated. By filtering the vocal source excitation signals by time-varying vocal tract transfer functions controlled by the articulators, their frequency characteristics are modulated. Finally, the generated speech signals are emitted. The aim of TTS is to mimic this process by computers in some way.
TTS can be viewed as a sequence-to-sequence mapping problem: from a sequence of discrete symbols (text) to a real-valued time series (speech signals). A typical TTS pipeline has two parts: 1) text analysis and 2) speech synthesis. The text analysis part typically includes a number of natural language processing (NLP) steps, such as sentence segmentation, word segmentation, text normalization, part-of-speech (POS) tagging, and grapheme-to-phoneme (G2P) conversion. It takes a word sequence as input and outputs a phoneme sequence with a variety of linguistic contexts. The speech synthesis part takes the context-dependent phoneme sequence as its input and outputs a synthesized speech waveform. This part typically includes prosody prediction and speech waveform generation.
There are two main approaches to realize the speech synthesis part: a non-parametric, example-based approach known as concatenative speech synthesis (Moulines & Charpentier, 1990; Sagisaka et al., 1992; Hunt & Black, 1996), and a parametric, model-based approach known as statistical parametric speech synthesis (Yoshimura, 2002; Zen et al., 2009). The concatenative approach builds up the utterance from units of recorded speech, whereas the statistical parametric approach uses a generative model to synthesize the speech. The statistical parametric approach first extracts a sequence of vocoder parameters (Dudley, 1939) o = {o_1, . . . , o_N} from speech signals x = {x_1, . . . , x_T} and linguistic features l from the text W, where N and T correspond to the numbers of vocoder parameter vectors and speech signals. Typically a vocoder parameter vector o_n is extracted every 5 milliseconds. It often includes cepstra
(Imai & Furuichi, 1988) or line spectral pairs (Itakura, 1975), which represent the vocal tract transfer function, and fundamental frequency (F0) and aperiodicity (Kawahara et al., 2001), which represent characteristics of the vocal source excitation signals. Then a set of generative models, such as hidden Markov models (HMMs) (Yoshimura, 2002), feed-forward neural networks (Zen et al., 2013), and recurrent neural networks (Tuerk & Robinson, 1993; Karaali et al., 1997; Fan et al., 2014), is trained from the extracted vocoder parameters and linguistic features.
Then a speech waveform is reconstructed from ô using a vocoder. The statistical parametric approach offers various advantages over the concatenative one, such as a small footprint and the flexibility to change its voice characteristics. However, its subjective naturalness is often significantly worse than that of the concatenative approach; synthesized speech often sounds muffled and has artifacts. Zen et al. (2009) reported three major factors that can degrade the subjective naturalness: quality of vocoders, accuracy of generative models, and the effect of oversmoothing. The first factor causes the artifacts, and the second and third factors lead to the muffleness in the synthesized speech. There have been a number of attempts to address these issues individually, such as developing high-quality vocoders (Kawahara et al., 1999; Agiomyrgiannakis, 2015; Morise et al., 2016), improving the accuracy of generative models (Zen et al., 2007; 2013; Fan et al., 2014; Uria et al., 2015), and compensating for the oversmoothing effect (Toda & Tokuda, 2007; Takamichi et al., 2016).
Extracting vocoder parameters can be viewed as estimation of generative model parameters given speech signals (Itakura & Saito, 1970; Imai & Furuichi, 1988). For example, linear predictive analysis (Itakura & Saito, 1970), which has been used in speech coding, assumes that the generative model of speech signals is a linear autoregressive (AR) zero-mean Gaussian process:
x_t = \sum_{p=1}^{P} a_p \, x_{t-p} + \epsilon_t \qquad (6)

\epsilon_t \sim \mathcal{N}(0, G^2) \qquad (7)
where a_p is a p-th order linear predictive coefficient (LPC) and G^2 is the variance of the modeling error. These parameters are estimated based on the maximum likelihood (ML) criterion. In this sense, the training part of the statistical parametric approach can be viewed as a two-step, sub-optimal optimization: extract vocoder parameters by fitting a generative model of speech signals, then model trajectories of the extracted vocoder parameters by a separate generative model for time series (Tokuda, 2011). There have been attempts to integrate these two steps into a single one (Toda & Tokuda, 2008; Wu & Tokuda, 2008; Maia et al., 2010; Nakamura et al., 2014; Muthukumar & Black, 2014; Tokuda & Zen, 2015; 2016; Takaki & Yamagishi, 2016). For example, Tokuda & Zen (2016) integrated a non-stationary, nonzero-mean Gaussian process generative model of speech signals and an LSTM-RNN-based sequence generative model into a single one and jointly optimized them by back-propagation. Although they showed that this model could approximate natural speech signals, its segmental naturalness was significantly worse than that of the non-integrated model due to over-generalization and over-estimation of noise components in speech signals.
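To make the estimation step concrete, the sketch below fits the AR coefficients of Eqs. (6)–(7) by ordinary least squares, which coincides with the ML solution under the Gaussian noise assumption. It is only an illustration: the model order P = 16, the synthetic signal, and the plain least-squares solver are arbitrary choices made here, not the procedure of any system described in this text.

```python
import numpy as np

def fit_lpc(x, order):
    """Least-squares fit of AR coefficients a_1..a_P and noise variance G^2
    for the model x_t = sum_p a_p * x_{t-p} + eps_t (Eqs. 6-7)."""
    # Regression matrix of lagged samples: row t holds [x_{t-1}, ..., x_{t-P}].
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    A = np.stack(rows)
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ a
    return a, float(np.mean(resid ** 2))    # coefficients, G^2 estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(16000) / 16000.0
    # Synthetic "speech-like" signal: a decaying resonance plus a little noise.
    x = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t) + 0.01 * rng.standard_normal(t.size)
    a, g2 = fit_lpc(x, order=16)
    print("first LPCs:", np.round(a[:4], 3), " noise variance:", g2)
```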
The conventional generative models of raw audio signals rest on a number of assumptions inspired by speech production, such as:
• Use of a fixed-length analysis window: they are typically based on a stationary stochastic process (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010). To model time-varying speech signals with a stationary stochastic process, the parameters of these generative models are estimated within a fixed-length, overlapping and shifting analysis window (typically 20 to 30 milliseconds long, shifted by 5 to 10 milliseconds); a small framing sketch follows this list. However, some phones, such as stops, are time-limited to less than 20 milliseconds (Rabiner & Juang, 1993). Therefore, using such a fixed-size analysis window has limitations.
• Linear filter: these generative models are typically realized as a linear time-invariant filter (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010) within a windowed frame. However, the relationship between successive audio samples can be highly non-linear.
• Gaussian process assumption: the conventional generative models are based on a Gaussian process (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010; Tokuda & Zen, 2015; 2016). From the point of view of the source-filter model of speech production (Chiba & Kajiyama, 1942; Fant, 1970), this is equivalent to assuming that the vocal source excitation signal is a sample from a Gaussian distribution (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Tokuda & Zen, 2015; Kameoka et al., 2010; Tokuda & Zen, 2016). Together with the linearity assumption above, this amounts to assuming that speech signals are normally distributed. However, distributions of real speech signals can be significantly different from Gaussian.
Although these assumptions are convenient, samples from these generative models tend to be noisy and lose important details needed to make these audio signals sound natural.
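As a minimal illustration of the fixed-length analysis window mentioned in the first bullet above, the following sketch slices a waveform into overlapping frames; the 25 ms window and 10 ms shift are typical values assumed here purely for illustration.

```python
import numpy as np

def frame_signal(x, sample_rate=16000, win_ms=25.0, shift_ms=10.0):
    """Split a waveform into fixed-length, overlapping analysis windows."""
    win = int(sample_rate * win_ms / 1000.0)
    hop = int(sample_rate * shift_ms / 1000.0)
    n_frames = 1 + max(0, (len(x) - win) // hop)
    return np.stack([x[i * hop:i * hop + win] for i in range(n_frames)])

if __name__ == "__main__":
    x = np.random.randn(16000)      # one second of audio at 16 kHz
    print(frame_signal(x).shape)    # (98, 400): 25 ms windows every 10 ms
```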
WaveNet, which was described in Section 2, has none of the above-mentioned assumptions. It incorporates almost no prior knowledge about audio signals, except the choice of the receptive field and the µ-law encoding of the signal. It can also be viewed as a non-linear causal filter for quantized signals. Although such non-linear filters can represent complicated signals while preserving the details, designing them is usually difficult (Peltonen et al., 2001). WaveNets give a way to train them from data.
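For reference, µ-law companding is a fixed, invertible transformation. A minimal sketch of 8-bit µ-law encoding and decoding (assuming µ = 255 and samples in [−1, 1]) might look as follows; this illustrates the general technique, not the implementation used for the experiments.

```python
import numpy as np

def mulaw_encode(x, mu=255):
    """Map waveform samples in [-1, 1] to 256 discrete mu-law levels."""
    x = np.clip(x, -1.0, 1.0)
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)   # compand to [-1, 1]
    return ((y + 1.0) / 2.0 * mu + 0.5).astype(np.int32)       # quantize to 0..255

def mulaw_decode(q, mu=255):
    """Invert the quantized mu-law code back to a waveform in [-1, 1]."""
    y = 2.0 * (q.astype(np.float64) / mu) - 1.0
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

if __name__ == "__main__":
    x = np.linspace(-1, 1, 5)
    print(mulaw_decode(mulaw_encode(x)))    # approximately recovers x
```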
# B DETAILS OF TTS EXPERIMENT
The HMM-driven unit selection and WaveNet TTS systems were built from speech at 16 kHz sampling. Although the LSTM-RNNs were trained from speech at 22.05 kHz sampling, speech at 16 kHz sampling was synthesized at runtime using a resampling functionality in the Vocaine vocoder (Agiomyrgiannakis, 2015). Both the LSTM-RNN-based statistical parametric and HMM-driven unit selection speech synthesizers were built from the speech datasets in 16-bit linear PCM, whereas the WaveNet-based ones were trained from the same speech datasets in 8-bit µ-law encoding.
The linguistic features include phone, syllable, word, phrase, and utterance-level features (Zen, 2006) (e.g. phone identities, syllable stress, the number of syllables in a word, and the position of the current syllable in a phrase), with additional frame position and phone duration features (Zen et al., 2013). These features were derived and associated with speech every 5 milliseconds by phone-level forced alignment at the training stage. We used LSTM-RNN-based phone duration and autoregressive CNN-based log F0 prediction models. They were trained so as to minimize the mean squared errors (MSE). It is important to note that no post-processing was applied to the audio signals generated from the WaveNets.
The subjective listening tests were blind and crowdsourced. 100 sentences not included in the training data were used for evaluation. Each subject could evaluate up to 8 and 63 stimuli for North American English and Mandarin Chinese, respectively. Test stimuli were randomly chosen and presented to each subject. In the paired comparison test, each pair of speech samples was the same text synthesized by two different models. In the MOS test, each stimulus was presented to subjects in isolation. Each pair was evaluated by eight subjects in the paired comparison test, and each stimulus was evaluated by eight subjects in the MOS test. The subjects were paid and were native speakers performing the task. Ratings (about 40%) for which headphones were not used were excluded when computing the preference and mean opinion scores. Table 2 shows the full details of the paired comparison test shown in Fig. 5.
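For illustration, a p value for a single paired comparison can be obtained with an exact two-sided binomial sign test on the counts of ratings preferring one system over the other (ignoring no-preference ratings). The sketch below assumes this simple test and made-up counts; it is not necessarily the exact statistical procedure used for Table 2.

```python
from math import comb

def sign_test_p(wins_a, wins_b):
    """Two-sided exact binomial test of H0: P(prefer A) = 0.5,
    given counts of ratings preferring A and preferring B."""
    n = wins_a + wins_b
    k = max(wins_a, wins_b)
    # Probability of an outcome at least as extreme as k under Binomial(n, 0.5).
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

if __name__ == "__main__":
    print(sign_test_p(70, 20))   # strongly significant preference
    print(sign_test_p(26, 22))   # not significant
```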
Subjective preference (%) in naturalness. Each row below is one paired comparison among the four synthesizers named in the caption of Table 2; the two competing systems' scores are listed in the table's column order (LSTM, Concat, WaveNet (L), WaveNet (L+F)), followed by the no-preference share and the p value.

North American English:
  23.3 vs. 63.6, no preference 13.1, p < 0.01
  18.7 vs. 69.3, no preference 12.0, p < 0.01
  7.6 vs. 82.0, no preference 10.4, p < 0.01
  32.4 vs. 41.2, no preference 26.4, p = 0.003
  20.1 vs. 49.3, no preference 30.6, p < 0.01
  17.8 vs. 37.9, no preference 44.3, p < 0.01

Mandarin Chinese:
  50.6 vs. 15.6, no preference 33.8, p < 0.01
  25.0 vs. 23.3, no preference 51.8, p = 0.476
  12.5 vs. 29.3, no preference 58.2, p < 0.01
  17.6 vs. 43.1, no preference 39.3, p < 0.01
  7.6 vs. 55.9, no preference 36.5, p < 0.01
  10.0 vs. 25.5, no preference 64.5, p < 0.01
Table 2: Subjective preference scores of speech samples between LSTM-RNN-based statistical parametric (LSTM), HMM-driven unit selection concatenative (Concat), and the proposed WaveNet-based speech synthesizers. Each row of the table denotes the scores of a paired comparison test between two synthesizers. Scores of synthesizers that were significantly better than their competitors at the p < 0.01 level are shown in bold type in the original table. Note that WaveNet (L) and WaveNet (L+F) correspond to WaveNet conditioned on linguistic features only and WaveNet conditioned on both linguistic features and F0 values, respectively.
# Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
# Ronan Collobert Facebook AI Research, Menlo Park [email protected]
# Christian Puhrsch Facebook AI Research, Menlo Park [email protected]
# Gabriel Synnaeve Facebook AI Research, New York [email protected]
# Abstract
This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from raw waveform.
# Introduction
We present an end-to-end system for speech recognition, going from the speech signal (e.g. Mel-Frequency Cepstral Coefficients (MFCC), power spectrum, or raw waveform) to the transcription. The acoustic model is trained using letters (graphemes) directly, which removes the need for an intermediate (human or automatic) phonetic transcription. Indeed, the classical pipeline to build state-of-the-art systems for speech recognition consists in first training an HMM/GMM model to force-align the units on which the final acoustic model operates (most often context-dependent phone states). This approach takes its roots in HMM/GMM training [27]. The improvements brought by deep neural networks (DNNs) [14, 10] and convolutional neural networks (CNNs) [24, 25] for acoustic modeling only extend this training pipeline.
The current state of the art on LibriSpeech (the dataset that we used for our evaluations) uses this approach too [18, 20], with an additional step of speaker adaptation [22, 19]. Recently, [23] proposed GMM-free training, but the approach still requires generating a forced alignment. An approach that cut ties with the HMM/GMM pipeline (and with forced alignment) was to train a recurrent neural network (RNN) [7] for phoneme transcription. There are now competitive end-to-end approaches with acoustic models topped with RNN layers as in [8, 13, 21, 1], trained with a sequence criterion [6]. However, these models are computationally expensive and thus take a long time to train.
Compared to classical approaches that need phonetic annotation (often derived from a phonetic dictionary, rules, and generative training), we propose to train the model end-to-end, using graphemes directly. Compared to sequence-criterion-based approaches that train directly from speech signal to graphemes [13], we propose a simple(r) architecture (23 million parameters for our best model, vs. 100 million parameters in [1]) based on convolutional networks for the acoustic model, topped with a graph transformer network [4], trained with a simpler sequence criterion. Our word error rate on clean speech is slightly better than [8], and slightly worse than [1], in particular factoring in that they train on 12,000 hours while we only train on the 960h available in LibriSpeech's train set. Finally, some of our models are also trained on the raw waveform, as in [15, 16]. The rest of the paper is
structured as follows: the next section presents the convolutional networks used for acoustic modeling, along with the automatic segmentation criterion. The following section shows experimental results comparing different features, the criterion, and our current best word error rates on LibriSpeech.
# 2 Architecture | 1609.03193#3 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 4 | # 2 Architecture
Our speech recognition system is a standard convolutional neural network [12] fed with various different features, trained through an alternative to the Connectionist Temporal Classiï¬cation (CTC) [6], and coupled with a simple beam search decoder. In the following sub-sections, we detail each of these components.
# 2.1 Features
We consider three types of input features for our model: MFCCs, power-spectrum, and raw wave. MFCCs are carefully designed speech-speciï¬c features, often found in classical HMM/GMM speech systems [27] because of their dimensionality compression (13 coefï¬- cients are often enough to span speech frequencies). Power-spectrum features are found in most recent deep learning acoustic modeling features [1]. Raw wave has been somewhat explored in few recent work [15, 16]. ConvNets have the advantage to be ï¬exible enough to be used with either of these input feature types. Our acoustic models output letter scores (one score per letter, given a dictionary
# L
# 2.2 ConvNet Acoustic Model | 1609.03193#4 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
# 2.2 ConvNet Acoustic Model
The acoustic models we considered in this paper are all based on standard 1D convolutional neural networks (ConvNets). ConvNets interleave convolution operations with pointwise non-linearity operations. Often ConvNets also include pooling layers: this type of layer allows the network to "see" a larger context, without increasing the number of parameters, by locally aggregating the output of the previous convolution operation. Instead, our networks leverage striding convolutions. Given an input sequence (x_t)_{t=1...T_x} with T_x frames of d_x-dimensional vectors, a convolution with kernel width kw, stride dw and output frame size d_y computes the following:
y_t^i = b_i + \sum_{j=1}^{d_x} \sum_{k=1}^{kw} w_{i,j,k} \, x_{dw \times (t-1) + k}^{j}, \qquad \forall\, 1 \le i \le d_y \qquad (1)
where b ∈ R^{d_y} and w ∈ R^{d_y × d_x × kw} are the parameters of the convolution (to be learned). [Figure 1 depicts the raw-wave architecture as a stack of convolutions, from top to bottom: CONV kw=1 (2000 : 40), CONV kw=1 (2000 : 2000), CONV kw=32 (250 : 2000), CONV kw=7 (250 : 250), CONV kw=7 (250 : 250), CONV kw=48, dw=2 (250 : 250), CONV kw=250, dw=160 (1 : 250).]
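A direct (and deliberately naive) NumPy rendering of Eq. (1) may help make the indexing explicit; the input sizes below are arbitrary illustrative choices, not the paper's configuration.

```python
import numpy as np

def conv1d_strided(x, w, b, dw):
    """Direct implementation of Eq. (1): x has shape (Tx, dx),
    w has shape (dy, dx, kw), b has shape (dy,); dw is the stride."""
    Tx, dx = x.shape
    dy, _, kw = w.shape
    Ty = (Tx - kw) // dw + 1
    y = np.empty((Ty, dy))
    for t in range(Ty):
        window = x[t * dw:t * dw + kw]             # kw consecutive input frames
        # y[t, i] = b_i + sum_{j,k} w[i, j, k] * window[k, j]
        y[t] = b + np.einsum("kj,ijk->i", window, w)
    return y

if __name__ == "__main__":
    x = np.random.randn(100, 13)                   # e.g. 13-dimensional MFCC frames
    w = np.random.randn(250, 13, 8)
    b = np.zeros(250)
    print(conv1d_strided(x, w, b, dw=2).shape)     # (47, 250)
```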
Pointwise non-linear layers are added after convolutional layers. In our experience, we surprisingly found that using hyperbolic tangents, their piecewise linear counterpart HardTanh (as in [16]), or ReLU units leads to similar results.
There are some slight variations between the architectures, depending on the input features. MFCC-based networks need less striding, as standard MFCC filters are applied with large strides on the input raw sequence. With power spectrum-based and raw wave-based networks, we observed that the overall stride of the network was more important than where the convolutions with strides were placed. We thus found it preferable to set the strided convolutions near the first input layers of the network, as it leads to the fastest architectures: with power spectrum features or raw wave, the input sequences are very long and the first convolutions are thus the most expensive ones.
Figure 1: Our neural network architecture for raw wave. The first two layers are convolutions with strides. The last two layers are convolutions with kw = 1, which are equivalent to fully connected layers. Power spectrum and MFCC based networks do not have the first layer.
The last layer of our convolutional network outputs one score per letter in the letter dictionary (d_y = |L|). Our architecture for raw wave is shown in Figure 1 and is inspired by [16]. The architectures for both power spectrum and MFCC features do not include the first layer. The full network can be seen as a non-linear convolution, with a kernel width of size 31280 and a stride equal to 320; given that the sample rate of our data is 16 kHz, label scores are produced using a window of 1955 ms, with steps of 20 ms.
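The timing figures quoted above follow directly from the sample rate; a small check, treating the overall kernel width and stride as given:

```python
SAMPLE_RATE = 16000      # Hz
KW, STRIDE = 31280, 320  # overall kernel width and stride of the full network, in samples

print(KW / SAMPLE_RATE * 1000.0)       # 1955.0 -> each label score sees 1955 ms of signal
print(STRIDE / SAMPLE_RATE * 1000.0)   # 20.0   -> one label score every 20 ms

n_samples = 10 * SAMPLE_RATE           # a hypothetical 10-second utterance
print((n_samples - KW) // STRIDE + 1)  # 403 label scores produced for that utterance
```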
# 2.3 Inferring Segmentation with AutoSegCriterion
Most large labeled speech databases provide only a text transcription for each audio file. In a classification framework (and given that our acoustic model produces letter predictions), one would need the segmentation of each letter in the transcription to train the model properly. Unfortunately, manually labeling the segmentation of each letter would be tedious. Several solutions have been explored in the speech community to alleviate this issue: HMM/GMM models use an iterative EM procedure: (i) during the Estimation step, the best segmentation is inferred according to the current model, by maximizing the joint probability of the letter (or any sub-word unit) transcription and input sequence; (ii) during the Maximization step the model is optimized by minimizing a frame-level criterion based on the (now fixed) inferred segmentation. This approach is also often used to bootstrap the training of neural network-based acoustic models.
Other alternatives have been explored in the context of hybrid HMM/NN systems, such as the MMI criterion [2], which maximizes the mutual information between the acoustic sequence and word sequences, or the Minimum Bayes Risk (MBR) criterion [5].
More recently, standalone neural network architectures have been trained using criterions which jointly infer the segmentation of the transcription while increasing the overall score of the right transcription [6, 17]. The most popular one is certainly the Connectionist Temporal Classification (CTC) criterion, which is at the core of Baidu's Deep Speech architecture [1]. CTC assumes that the network outputs probability scores, normalized at the frame level. It considers all possible sequences of letters (or any sub-word units) which can lead to a given transcription. CTC also allows a special "blank" state to be optionally inserted between letters. The rationale behind the blank state is two-fold: (i) modeling "garbage" frames which might occur between letters, and (ii) identifying the separation between two identical consecutive letters in a transcription. Figure 2a shows an example of the sequences accepted by CTC for a given transcription. In practice, this graph is unfolded as shown in Figure 2b, over the available frames output by the acoustic model.
We denote by G_ctc(θ, T) an unfolded graph over T frames for a given transcription θ, and by π = π_1, . . . , π_T ∈ G_ctc(θ, T) a path in this graph representing a (valid) sequence of letters for this transcription. At each time step t, each node of the graph is assigned the corresponding letter log-probability (which we denote f_t(·)) output by the acoustic model. CTC aims at maximizing the "overall" score of paths in G_ctc(θ, T); for that purpose, it minimizes the Forward score:
CTC(\theta, T) = - \operatorname{logadd}_{\pi \in \mathcal{G}_{ctc}(\theta, T)} \sum_{t=1}^{T} f_{\pi_t}(x) \qquad (2)
where the "logadd" operation, also often called "log-sum-exp", is defined as logadd(a, b) = log(exp(a) + exp(b)). This overall score can be efficiently computed with the Forward algorithm. To put things in perspective, if one were to replace the logadd(·) by a max(·) in (2) (which can then be efficiently computed by the Viterbi algorithm, the counterpart of the Forward algorithm), one would maximize the score of the best path, according to the model belief. The logadd(·) can be seen as a smooth version of the max(·): paths with similar scores will be attributed the same weight in the overall score (and hence receive the same gradient), and paths with much larger scores will have much more overall weight than paths with low scores. In practice, using the logadd(·) works much better than the max(·). It is also worth noting that maximizing (2) does not diverge, as the acoustic model is assumed to output normalized scores (log-probabilities) f_i(·).
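A minimal, numerically stable sketch of the logadd operation (and its max counterpart) discussed above; the example scores are arbitrary.

```python
import math

def logadd(*scores):
    """Numerically stable log(sum(exp(s))) over a set of path scores."""
    m = max(scores)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(s - m) for s in scores))

if __name__ == "__main__":
    print(logadd(-1000.0, -1000.0))   # -999.306..., no underflow
    print(max(-1000.0, -1000.0))      # the Viterbi-style counterpart keeps -1000.0
```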
In this paper, we explore an alternative to CTC, with three differences: (i) there are no blank labels, (ii) un-normalized scores on the nodes (and possibly un-normalized transition scores on the edges), and (iii) global normalization instead of per-frame normalization:
The advantage of (i) is that it produces a much simpler graph (see Figure 3a and Figure 3b). We found that in practice there was no advantage of having a blank class to model the possible "garbage" frames between letters.
Figure 2: The CTC criterion graph. (a) Graph which represents all the acceptable sequences of letters (with the blank state denoted "∅") for the transcription "cat". (b) The same graph unfolded over 5 frames. There are no transition scores. At each time step, nodes are assigned a conditional probability output by the neural network acoustic model.
Modeling letter repetitions (which is also an important quality of the blank label in CTC) can easily be replaced by repetition character labels (we used two extra labels, for two and three repetitions). For example, "caterpillar" could be written as "caterpil2ar", where "2" is a label representing the repetition of the previous letter. Not having blank labels also simplifies the decoder. With (ii), one can easily plug in an external language model, which would insert transition scores on the edges of the graph. This could be particularly useful in future work, if one wanted to model representations more high-level than letters. In that respect, avoiding normalized transitions is important to alleviate the problem of "label bias" [3, 11]. In this work, we limited ourselves to transition scalars, which are learned together with the acoustic model. The normalization evoked in (iii) is necessary when using un-normalized scores on nodes or edges; it ensures incorrect transcriptions will have a low confidence.
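A small sketch of the repetition-label rewriting described above; it assumes, as in the text, that only two- and three-repetition labels exist.

```python
def encode_repetitions(word):
    """Rewrite repeated letters with repetition labels, e.g. 'caterpillar' -> 'caterpil2ar'."""
    out, i = [], 0
    while i < len(word):
        run = 1
        while i + run < len(word) and word[i + run] == word[i]:
            run += 1
        if run == 1:
            out.append(word[i])
        elif run == 2:
            out.append(word[i] + "2")
        elif run == 3:
            out.append(word[i] + "3")
        else:
            raise ValueError("runs longer than 3 are not covered by the extra labels")
        i += run
    return "".join(out)

if __name__ == "__main__":
    print(encode_repetitions("caterpillar"))   # caterpil2ar
    print(encode_repetitions("hello"))         # hel2o
```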
In the following, we name our criterion the "Auto Segmentation Criterion" (ASG). Considering the same notations as for CTC in (2), an unfolded graph G_asg(θ, T) over T frames for a given transcription θ (as in Figure 3b), as well as a fully connected graph G_full(θ, T) over T frames (representing all possible sequences of letters, as in Figure 3c), ASG aims at minimizing:
ASG(6,T) =â_ logadd Slate )+9n1m(2)) + logadd Sul 2) + Gneaer(@)) ®EGasg(9.T) 424 mwEGfut(9,T) a]
(3) where gi,j( ) is a transition score model to jump from label i to label j. The left-hand part of 3 · promotes sequences of letters leading to the right transcription, and the right-hand part demotes all sequences of letters. As for CTC, these two parts can be efï¬ciently computed with the Forward algorithm. Derivatives with respect to fi( ) can be obtained (maths are a bit tedious) by ) and gi,j( · · applying the chain rule through the Forward recursion.
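As a concrete illustration of how these terms can be computed with the Forward algorithm, below is a minimal NumPy/SciPy sketch of the recursion for the fully connected graph G_full (the right-hand, "demoting" term). The array layout and function name are assumptions for illustration, not the paper's C implementation; the constrained term over G_asg only differs in which transitions the recursion is allowed to follow.

```python
import numpy as np
from scipy.special import logsumexp

def full_graph_logadd(emissions, transitions):
    """logadd over all letter sequences of length T (the normalization term of the
    ASG criterion), computed with the Forward recursion in O(T * K^2).

    emissions:   (T, K) un-normalized acoustic scores f_k(x) for each frame
    transitions: (K, K) un-normalized transition scores g[i, j] from label i to label j
    """
    T, K = emissions.shape
    alpha = emissions[0].copy()  # scores of all length-1 paths
    for t in range(1, T):
        # alpha_new[j] = logadd_i(alpha[i] + g[i, j]) + f_j(x_t)
        alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
    return logsumexp(alpha)
```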
# 2.4 Beam-Search Decoder | 1609.03193#14 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 15 | # 2.4 Beam-Search Decoder
We wrote our own one-pass decoder, which performs a simple beam-search with beam thresholding, histogram pruning and language model smearing [26]. We kept the decoder as simple as possible (under 1000 lines of C code). We did not implement any sort of model adaptation before decoding, nor any word graph rescoring. Our decoder relies on KenLM [9] for the language modeling part. It also accepts un-normalized acoustic scores (transitions and emissions from the acoustic model) as input. The decoder attempts to maximize the following:
\mathcal{L}(\theta) = \operatorname*{logadd}_{\pi \in \mathcal{G}_{asg}(\theta, T)} \sum_{t=1}^{T} \big( f_{\pi_t}(x) + g_{\pi_{t-1}, \pi_t}(x) \big) + \alpha \log P_{lm}(\theta) + \beta |\theta| \qquad (4)
Figure 3: The ASG criterion graph. (a) Graph which represents all the acceptable sequences of letters for the transcription âcatâ. (b) Shows the same graph unfolded over 5 frames. (c) Shows the corresponding fully connected graph, which describe all possible sequences of letter; this graph is used for normalization purposes. Un-normalized transitions scores are possible on the edges. At each time step, nodes are assigned a conditional un-normalized score, output by the neural network acoustic model. | 1609.03193#15 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 16 | where Plm(θ) is the probability of the language model given a transcription θ, α and β are two hyper-parameters which control the weight of the language model and the word insertion penalty respectively.
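A minimal sketch of the quantity the decoder maximizes and of the two pruning steps mentioned in Section 2.4 follows; the function names and the (hypothesis, score) representation are illustrative assumptions, not the actual ~1000-line C decoder.

```python
def hypothesis_score(acoustic_score, lm_logprob, num_words, alpha, beta):
    # Eq. (4): un-normalized acoustic score from the ASG graph, plus the
    # alpha-weighted language-model log-probability, plus a beta word-insertion term.
    return acoustic_score + alpha * lm_logprob + beta * num_words

def prune(hypotheses, beam_size, threshold):
    # Beam thresholding: drop hypotheses too far from the current best score;
    # histogram pruning: keep at most `beam_size` of the survivors.
    best = max(score for _, score in hypotheses)
    survivors = [(h, s) for h, s in hypotheses if s >= best - threshold]
    survivors.sort(key=lambda hs: hs[1], reverse=True)
    return survivors[:beam_size]
```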
# 3 Experiments
We implemented everything using Torch7¹. The ASG criterion as well as the decoder were implemented in C (and then interfaced into Torch).
We consider as benchmark LibriSpeech, a large speech database freely available for download [18]. LibriSpeech comes with its own train, validation and test sets. Except when specified, we used all the available data (about 1000h of audio files) for training and validating our models. We use the original 16 kHz sampling rate. The vocabulary contains 30 graphemes: the standard English alphabet plus the apostrophe, silence, and two special "repetition" graphemes which encode the duplication (once or twice) of the previous letter (see Section 2.3).
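The repetition graphemes can be illustrated with a small helper that rewrites a word so that duplicated letters are replaced by "2"/"3" labels (e.g. "caterpillar" → "caterpil2ar"); this is a hypothetical sketch of the encoding, not the paper's code.

```python
def encode_repetitions(word):
    """Replace runs of a repeated letter by the letter plus a '2' (one extra copy)
    or '3' (two extra copies) repetition label, as described in Section 2.3."""
    out, i = [], 0
    while i < len(word):
        run = 1
        while i + run < len(word) and word[i + run] == word[i] and run < 3:
            run += 1
        out.append(word[i])
        if run == 2:
            out.append("2")
        elif run == 3:
            out.append("3")
        i += run
    return "".join(out)

assert encode_repetitions("caterpillar") == "caterpil2ar"
```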
The architecture hyper-parameters, as well the decoder ones were tuned using the validation set. In the following, we either report letter-error-rates (LERs) or word-error-rates (WERs). WERs have been obtained by using our own decoder (see Section 2.4), with the standard 4-gram language model provided with LibriSpeech2. | 1609.03193#16 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
MFCC features are computed with 13 coefficients, a 25 ms sliding window and 10 ms stride. We included first and second order derivatives. Power spectrum features are computed with a 25 ms window, 10 ms stride, and have 257 components. All features are normalized (mean 0, std 1) per input sequence.
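For reference, this MFCC front end can be approximated with librosa as below; librosa is an assumption for illustration (the paper used its own Torch/C pipeline), and the per-sequence normalization is applied per coefficient over time.

```python
import numpy as np
import librosa

def mfcc_features(wav_path, sr=16000):
    """13 MFCCs over a 25 ms window with a 10 ms stride, plus first- and
    second-order derivatives, normalized (mean 0, std 1) per input sequence."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr), hop_length=int(0.010 * sr))
    feats = np.concatenate([mfcc,
                            librosa.feature.delta(mfcc),
                            librosa.feature.delta(mfcc, order=2)], axis=0)  # (39, T)
    mean = feats.mean(axis=1, keepdims=True)
    std = feats.std(axis=1, keepdims=True) + 1e-8
    return (feats - mean) / std
```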
# 3.1 Results
Table 1 reports a comparison between CTC and ASG, in terms of LER and speed. Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. We picked the CTC criterion implementation provided by Baidu³. Both criteria lead to the same LER. For comparing the speed, we report performance for sequence sizes as reported initially by Baidu, but also for longer sequence sizes, which correspond to our average use
1http://www.torch.ch. 2http://www.openslr.org/11. 3https://github.com/baidu-research/warp-ctc.
5 | 1609.03193#17 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 18 | 5
Table 1: CTC vs ASG. CTC is Baidu's implementation. ASG is implemented on CPU (core in C, threading in Lua). (a) reports performance in LER. Timings (in ms) for small sequences (input frames: 150, letter vocabulary size: 28, transcription size: 40) and long sequences (input frames: 700, letter vocabulary size: 28, transcription size: 200) are reported in (b) and (c) respectively. Timings include both forward and backward passes. CPU implementations use 8 threads.
(a) LER (%):

             ASG    CTC
dev-clean    10.7   10.4
test-clean   10.5   10.1

(b) Timings (ms) for small sequences:

batch size   ASG (CPU)   CTC (GPU)   CTC (CPU)
1            2.5         5.9         1.9
4            2.8         6.0         2.0
8            2.8         6.1         2.0

(c) Timings (ms) for long sequences:

batch size   ASG (CPU)   CTC (GPU)   CTC (CPU)
1            16.0        97.9        40.9
4            17.7        99.6        41.6
8            19.2        100.3       41.7
# a | 1609.03193#18 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 19 | (a) (b)
Figure 4: Valid LER (a) and WER (b) v.s. training set size (10h, 100h, 200h, 1000h). This compares MFCC-based and power spectrum-based (POW) architectures. AUG experiments include data augmentation. In (b) we provide Baidu Deep Speech 1 and 2 numbers on LibriSpeech, as a comparison [8, 1].
case. ASG appears faster on long sequences, even though it is running on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters).
We also investigated the impact of the training size on the dataset, as well as the effect of a simple data augmentation procedure, where shifts were introduced in the input frames, as well as stretching. For that purpose, we tuned the size of our architectures (given a particular size of the dataset), to avoid over-ï¬tting. Figure 4a shows the augmentation helps for small training set size. However, with enough training data, the effect of data augmentation vanishes, and both type of features appear to perform similarly. Figure 4b reports the WER with respect to the available training data size. We observe that we compare very well against Deep Speech 1 & 2 which were trained with much more data [8, 1]. | 1609.03193#19 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
Finally, we report in Table 2 the best results of our system so far, trained on 1000h of speech, for each feature type. The overall stride of architectures is 320 (see Figure 1), which produces a label every 20 ms. We found that one could squeeze out about 1% in performance by refining the precision of the output. This is efficiently achieved by shifting the input sequence, and feeding it to the network
Table 2: LER/WER of the best sets of hyper-parameters for each feature type.
dev-clean test-clean PS LER WER LER WER LER WER 6.9 6.9 MFCC Raw 9.3 9.1 10.3 10.6 7.2 9.4 10.1
several times. Results in Table 2 were obtained by a single extra shift of 10 ms. Both power spectrum and raw features are performing slightly worse than MFCCs. One could expect, however, that with enough data (see Figure 4) the gap would vanish.
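The output-refinement trick can be sketched as follows: run the acoustic model on the original input and on a copy shifted by 10 ms, then interleave the two label sequences so that a label is produced every 10 ms instead of every 20 ms. The `model` callable and the 10 ms input-frame assumption are hypothetical; this is not the paper's implementation.

```python
import numpy as np

def refine_outputs(model, frames):
    """`model(frames)` is assumed to return one row of scores per 20 ms of input;
    `frames` is assumed to be a sequence of 10 ms input frames."""
    out_a = model(frames)        # labels at t = 0, 20, 40, ... ms
    out_b = model(frames[1:])    # same network on input shifted by one 10 ms frame
    T = min(len(out_a), len(out_b))
    refined = np.empty((2 * T,) + np.shape(out_a)[1:], dtype=np.asarray(out_a).dtype)
    refined[0::2] = out_a[:T]    # interleave: 0 ms, 10 ms, 20 ms, 30 ms, ...
    refined[1::2] = out_b[:T]
    return refined
```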
# 4 Conclusion | 1609.03193#20 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 21 | # 4 Conclusion
We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). We showed that our AutoSegCriterion can be faster than CTC [6], and as accurate (table 1). Our approach breaks free from HMM/GMM pre-training and force-alignment, as well as not being as computationally intensive as RNN-based approaches [1] (on average, one LibriSpeech sentence is processed in less than 60ms by our ConvNet, and the decoder runs at 8.6x on a single thread).
# References | 1609.03193#21 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 22 | # References
[1] AMODEI, D., ANUBHAI, R., BATTENBERG, E., CASE, C., CASPER, J., CATANZARO, B., CHEN, J., CHRZANOWSKI, M., COATES, A., DIAMOS, G., ET AL. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595 (2015).
[2] BAHL, L. R., BROWN, P. F., DE SOUZA, P. V., AND MERCER, R. L. Maximum mutual information estimation of hidden markov model parameters for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 1986 IEEE International Conference on (1986), IEEE, pp. 49â52.
[3] BOTTOU, L. Une approche théorique de l'apprentissage connexionniste et applications à la reconnaissance de la parole. PhD thesis, 1991.
[4] BOTTOU, L., BENGIO, Y., AND LE CUN, Y. Global training of document processing sys- tems using graph transformer networks. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on (1997), IEEE, pp. 489â494. | 1609.03193#22 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 23 | [5] GIBSON, M., AND HAIN, T. Hypothesis spaces for minimum bayes risk training in large vocabulary speech recognition. In Proceedings of INTERSPEECH (2006), IEEE, pp. 2406â- 2409.
[6] GRAVES, A., FERNÁNDEZ, S., GOMEZ, F., AND SCHMIDHUBER, J. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning (2006), ACM, pp. 369–376.
[7] GRAVES, A., MOHAMED, A.-R., AND HINTON, G. Speech recognition with deep recur- In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE rent neural networks. International Conference on (2013), IEEE, pp. 6645â6649. | 1609.03193#23 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 24 | [8] HANNUN, A., CASE, C., CASPER, J., CATANZARO, B., DIAMOS, G., ELSEN, E., PRENGER, R., SATHEESH, S., SENGUPTA, S., COATES, A., ET AL. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567 (2014).
[9] HEAFIELD, K., POUZYREVSKY, I., CLARK, J. H., AND KOEHN, P. Scalable modified Kneser-Ney language model estimation. In ACL (2) (2013), pp. 690–696.
[10] HINTON, G., DENG, L., YU, D., DAHL, G. E., MOHAMED, A.-R., JAITLY, N., SENIOR, A., VANHOUCKE, V., NGUYEN, P., SAINATH, T. N., ET AL. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE 29, 6 (2012), 82â97. | 1609.03193#24 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 25 | [11] LAFFERTY, J., MCCALLUM, A., AND PEREIRA, F. Conditional random ï¬elds: Probabilistic models for segmenting and labeling sequence data. In Eighteenth International Conference on Machine Learning, ICML (2001).
[12] LECUN, Y., AND BENGIO, Y. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361, 10 (1995), 1995.
[13] MIAO, Y., GOWAYYED, M., AND METZE, F. Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding. arXiv preprint arXiv:1507.08240 (2015).
[14] MOHAMED, A.-R., DAHL, G. E., AND HINTON, G. Acoustic modeling using deep belief networks. Audio, Speech, and Language Processing, IEEE Transactions on 20, 1 (2012), 14â22.
[15] PALAZ, D., COLLOBERT, R., AND DOSS, M. M. Estimating phoneme class conditional probabilities from raw speech signal using convolutional neural networks. arXiv preprint arXiv:1304.1018 (2013). | 1609.03193#25 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 26 | [16] PALAZ, D., COLLOBERT, R., ET AL. Analysis of cnn-based speech recognition system using raw speech as input. In Proceedings of Interspeech (2015), no. EPFL-CONF-210029.
[17] PALAZ, D., MAGIMAI-DOSS, M., AND COLLOBERT, R. Joint phoneme segmentation infer- ence and classiï¬cation using crfs. In Signal and Information Processing (GlobalSIP), 2014 IEEE Global Conference on (2014), IEEE, pp. 587â591.
[18] PANAYOTOV, V., CHEN, G., POVEY, D., AND KHUDANPUR, S. Librispeech: an asr corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (2015), IEEE, pp. 5206â5210.
[19] PEDDINTI, V., CHEN, G., MANOHAR, V., KO, T., POVEY, D., AND KHUDANPUR, S. Jhu aspire system: Robust lvcsr with tdnns, i-vector adaptation, and rnn-lms. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (2015). | 1609.03193#26 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
1609.03193 | 27 | [20] PEDDINTI, V., POVEY, D., AND KHUDANPUR, S. A time delay neural network architecture for efï¬cient modeling of long temporal contexts. In Proceedings of INTERSPEECH (2015).
[21] SAON, G., KUO, H.-K. J., RENNIE, S., AND PICHENY, M. The ibm 2015 english conversa- tional telephone speech recognition system. arXiv preprint arXiv:1505.05899 (2015).
[22] SAON, G., SOLTAU, H., NAHAMOO, D., AND PICHENY, M. Speaker adaptation of neural network acoustic models using i-vectors. In ASRU (2013), pp. 55â59.
[23] SENIOR, A., HEIGOLD, G., BACCHIANI, M., AND LIAO, H. Gmm-free dnn training. In Proceedings of ICASSP (2014), pp. 5639â5643.
[24] SERCU, T., PUHRSCH, C., KINGSBURY, B., AND LECUN, Y. Very deep multilingual convolutional neural networks for lvcsr. arXiv preprint arXiv:1509.08967 (2015). | 1609.03193#27 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | This paper presents a simple end-to-end model for speech recognition,
combining a convolutional network based acoustic model and a graph decoding. It
is trained to output letters, with transcribed speech, without the need for
force alignment of phonemes. We introduce an automatic segmentation criterion
for training from sequence annotation without alignment that is on par with CTC
while being simpler. We show competitive results in word error rate on the
Librispeech corpus with MFCC features, and promising results from raw waveform. | http://arxiv.org/pdf/1609.03193 | Ronan Collobert, Christian Puhrsch, Gabriel Synnaeve | cs.LG, cs.AI, cs.CL, I.2.6; I.2.7 | 8 pages, 4 figures (7 plots/schemas), 2 tables (4 tabulars) | null | cs.LG | 20160911 | 20160913 | [
{
"id": "1509.08967"
},
{
"id": "1512.02595"
},
{
"id": "1507.08240"
},
{
"id": "1505.05899"
}
] |
# Jason Tyler Rolfe D-Wave Systems Burnaby, BC V5G-4M9, Canada [email protected]
# ABSTRACT
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
# INTRODUCTION | 1609.02200#1 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
1609.02200 | 2 | # INTRODUCTION
Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classiï¬cation (Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting. At the same time, it is extremely difï¬cult to directly transform the image of a person to one of a car while remaining on the manifold of natural images. | 1609.02200#2 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
1609.02200 | 3 | It would be natural to represent the space within each disconnected component with continuous vari- ables, and the selection amongst these components with discrete variables. In contrast, most state- of-the-art probabilistic models use exclusively discrete variables â as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lau- ritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) â or exclusively continuous variables â as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014).1 Moreover, it would be desirable to apply the efï¬cient variational autoencoder frame- work to models with discrete values, but this has proven difï¬cult, since backpropagation through discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015). | 1609.02200#3 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations. Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them discrete variational autoencoders (discrete VAEs).
1Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables.
1
1.1 VARIATIONAL AUTOENCODERS ARE INCOMPATIBLE WITH DISCRETE DISTRIBUTIONS | 1609.02200#4 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
1609.02200 | 5 | 1
1.1 VARIATIONAL AUTOENCODERS ARE INCOMPATIBLE WITH DISCRETE DISTRIBUTIONS
Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log- likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby, 1993).
In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound L(x, θ, φ) on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO, Hinton & Zemel, 1994):

\mathcal{L}(x, \theta, \phi) = \log p(x \,|\, \theta) - \mathrm{KL}\left[ q(z \,|\, x, \phi) \,\|\, p(z \,|\, x, \theta) \right] \qquad (1)
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
where q(z | x, φ) is a computationally tractable approximation to the posterior p(z | x, θ). We denote the observed random variables by x, the latent random variables by z, the parameters of the generative model by θ, and the parameters of the approximating posterior by φ. The variational autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups the evidence lower bound of Equation 1 as:
\mathcal{L}(x, \theta, \phi) = -\underbrace{\mathrm{KL}\left[ q(z \,|\, x, \phi) \,\|\, p(z \,|\, \theta) \right]}_{\text{KL term}} + \underbrace{\mathbb{E}_{q(z|x,\phi)}\left[ \log p(x \,|\, z, \theta) \right]}_{\text{autoencoding term}} \qquad (2)
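For the familiar continuous-latent case (which the rest of the paper extends to discrete latent variables), the two terms of Equation 2 can be estimated as in the following sketch, using a factorial Gaussian approximating posterior, an analytic KL to a standard-normal prior, and one reparameterized sample; `decode_logp` is a hypothetical stand-in for the decoder's log-likelihood.

```python
import numpy as np

def elbo_estimate(x, m, log_v, decode_logp, rng=np.random.default_rng(0)):
    """One-sample estimate of Eq. 2 for q(z|x) = N(m, v) and prior p(z) = N(0, I)."""
    rho = rng.standard_normal(m.shape)
    z = m + np.exp(0.5 * log_v) * rho                       # reparameterized sample
    kl = 0.5 * np.sum(np.exp(log_v) + m**2 - 1.0 - log_v)   # KL[q(z|x) || p(z)]
    return decode_logp(x, z) - kl                           # autoencoding term - KL term
```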
# L | 1609.02200#6 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
1609.02200 | 7 | # L
x) and p(z), the KL term of Equation 2 In many cases of practical interest, such as Gaussian q(z | can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior q(z x) can be drawn using a differentiable, deterministic function f (x, Ï, Ï) of the combination of the inputs, the parameters, and a set of input- D. For instance, samples can be drawn from a and parameter-independent random variables Ï (m(x, Ï), v(x, Ï)), using Gaussian distribution with mean and variance determined by the input, f (x, Ï, Ï) = m(x, Ï) + | 1609.02200#7 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
√v(x, φ) · ρ, where ρ ∼ N(0, 1):

\frac{\partial}{\partial \theta}\, \mathbb{E}_{q(z|x,\phi)}\left[ \log p(x \,|\, z, \theta) \right] \approx \frac{1}{N} \sum_{\rho \sim \mathcal{D}} \frac{\partial}{\partial \theta} \log p\big(x \,\big|\, f(x, \rho, \phi), \theta\big) \qquad (3)
The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we ï¬nd that an analog of Equation 3 holds. Speciï¬cally,
where D_i is the uniform distribution between 0 and 1, and f(x) = F^{-1}(x),
where F is the conditional-marginal cumulative distribution function (CDF) deï¬ned by:
F_i(x) = \int_{-\infty}^{x_i} p\left(x_i' \,\middle|\, x_1, \ldots, x_{i-1}\right) dx_i' \qquad (5)

However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable.
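The generalized reparameterization through the conditional-marginal CDF can be illustrated for a factorial Gaussian posterior, where the inverse CDF exists and is differentiable; the helper below is an illustrative sketch, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def sample_inverse_cdf(m, v, rng=np.random.default_rng(0)):
    """Draw z = F^{-1}(rho) with rho ~ U(0, 1)^n, for a factorial Gaussian q(z|x)
    with mean m and variance v (so F_i is just the Gaussian CDF of component i)."""
    rho = rng.uniform(size=np.shape(m))
    return norm.ppf(rho, loc=m, scale=np.sqrt(v))
```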
A formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky, 1986):
p(z) = \frac{e^{\,z^\top W z + b^\top z}}{Z_p} \qquad (6) | 1609.02200#8 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |
1609.02200 | 9 | p(z) = 1-20) _ i . ele Wetblz) (6) P Zp
where z ∈ {0, 1}^n, Z_p is the partition function of p(z), and the lateral connection matrix W is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval [0, 1]. The domain of the inverse CDF is thus also a proper subset of [0, 1], and its derivative is not defined, as required in Equations 3 and 4.²
²This problem remains even if we use the quantile function, F_i^{-1}(ρ) = inf { z ∈ ℝ : Σ_{z' ≤ z} p(z') ≥ ρ }, the derivative of which is either zero or infinite if p is a discrete distribution.
2
In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierar- chical probabilistic model consising of an RBM,3 followed by multiple directed layers of continuous latent variables. This model is efï¬ciently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables. | 1609.02200#9 | Discrete Variational Autoencoders | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. | http://arxiv.org/pdf/1609.02200 | Jason Tyler Rolfe | stat.ML, cs.LG | Published as a conference paper at ICLR 2017 | null | stat.ML | 20160907 | 20170422 | [
{
"id": "1602.08734"
},
{
"id": "1602.02311"
},
{
"id": "1511.06499"
},
{
"id": "1607.05690"
},
{
"id": "1511.05644"
},
{
"id": "1509.00519"
},
{
"id": "1506.04557"
}
] |