# 5.1 Hogwild!

Niu et al. [15] introduce an update scheme called Hogwild! that allows performing SGD updates in parallel on CPUs. Processors are allowed to access shared memory without locking the parameters. This only works if the input data is sparse, as each update will only modify a fraction of all parameters. They show that in this case, the update scheme achieves almost an optimal rate of convergence, as it is unlikely that processors will overwrite useful information.

# 5.2 Downpour SGD

Downpour SGD is an asynchronous variant of SGD that was used by Dean et al. [6] in their DistBelief framework (the predecessor to TensorFlow) at Google. It runs multiple replicas of a model in parallel on subsets of the training data. These models send their updates to a parameter server, which is split across many machines. Each machine is responsible for storing and updating a fraction of the model's parameters. However, as replicas don't communicate with each other, e.g. by sharing weights or updates, their parameters are continuously at risk of diverging, hindering convergence.

# 5.3 Delay-tolerant Algorithms for SGD

McMahan and Streeter [12] extend AdaGrad to the parallel setting by developing delay-tolerant algorithms that not only adapt to past gradients, but also to the update delays. This has been shown to work well in practice.

# 5.4 TensorFlow

TensorFlow (https://www.tensorflow.org/) [1] is Google's recently open-sourced framework for the implementation and deployment of large-scale machine learning models. It is based on their experience with DistBelief and is already used internally to perform computations on a large range of mobile devices as well as on large-scale distributed systems. The distributed version, which was released in April 2016 (http://googleresearch.blogspot.ie/2016/04/announcing-tensorflow-08-now-with.html), relies on a computation graph that is split into a subgraph for every device, while communication takes place using Send/Receive node pairs.

# 5.5 Elastic Averaging SGD

Zhang et al. [23] propose Elastic Averaging SGD (EASGD), which links the parameters of the workers of asynchronous SGD with an elastic force, i.e. a center variable stored by the parameter server. This allows the local variables to fluctuate further from the center variable, which in theory allows for more exploration of the parameter space. They show empirically that this increased capacity for exploration leads to improved performance by finding new local optima.
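To make the elastic-force coupling concrete, here is a minimal, single-process sketch of one synchronous round of the EASGD update described above. The function name, hyperparameter values, and the synchronous formulation are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def easgd_round(workers, center, grads, eta=0.01, rho=0.1):
    """One synchronous round of Elastic Averaging SGD (schematic).

    workers: list of per-worker parameter vectors
    center:  the center variable kept by the parameter server
    grads:   one stochastic gradient per worker
    """
    alpha = eta * rho                          # strength of the elastic force
    new_center = center.copy()
    for i, (x, g) in enumerate(zip(workers, grads)):
        elastic = alpha * (x - center)         # pull toward the center variable
        workers[i] = x - eta * g - elastic     # worker update
        new_center = new_center + elastic      # center is pulled toward the workers
    return workers, new_center

# toy usage with random vectors standing in for mini-batch gradients
dim = 8
center = np.zeros(dim)
workers = [np.random.randn(dim) * 0.1 for _ in range(4)]
grads = [np.random.randn(dim) for _ in range(4)]
workers, center = easgd_round(workers, center, grads)
```

The same elastic term appears with opposite signs in the worker and center updates, which is what lets the local variables fluctuate around the center rather than being forced to stay identical to it.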
# 6 Additional strategies for optimizing SGD

Finally, we introduce additional strategies that can be used alongside any of the previously mentioned algorithms to further improve the performance of SGD. For a great overview of some other common tricks, refer to [11].
# 6.1 Shuffling and Curriculum Learning

Generally, we want to avoid providing the training examples in a meaningful order to our model as this may bias the optimization algorithm. Consequently, it is often a good idea to shuffle the training data after every epoch.

On the other hand, for some cases where we aim to solve progressively harder problems, supplying the training examples in a meaningful order may actually lead to improved performance and better convergence. The method for establishing this meaningful order is called Curriculum Learning [3]. Zaremba and Sutskever [21] were only able to train LSTMs to evaluate simple programs using Curriculum Learning and show that a combined or mixed strategy is better than the naive one, which sorts examples by increasing difficulty.

# 6.2 Batch normalization

To facilitate learning, we typically normalize the initial values of our parameters by initializing them with zero mean and unit variance. As training progresses and we update parameters to different extents, we lose this normalization, which slows down training and amplifies changes as the network becomes deeper.

Batch normalization [9] reestablishes these normalizations for every mini-batch and changes are back-propagated through the operation as well. By making normalization part of the model architecture, we are able to use higher learning rates and pay less attention to the initialization parameters. Batch normalization additionally acts as a regularizer, reducing (and sometimes even eliminating) the need for Dropout.
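As a rough illustration of the normalization step itself (leaving out the running statistics used at test time and the backward pass), a mini-batch can be normalized per feature and then rescaled by the learnable parameters; the shapes and epsilon below are arbitrary assumptions.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch x of shape (batch, features), then scale and shift."""
    mu = x.mean(axis=0)                      # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero mean, unit variance per feature
    return gamma * x_hat + beta              # learnable scale and shift

x = np.random.randn(64, 128) * 3.0 + 2.0     # a poorly scaled mini-batch
y = batch_norm_forward(x, gamma=np.ones(128), beta=np.zeros(128))
```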
# 6.3 Early stopping

According to Geoff Hinton: "Early stopping (is) beautiful free lunch" (NIPS 2015 Tutorial slides, slide 63, http://www.iro.umontreal.ca/~bengioy/talks/DL-Tutorial-NIPS2015.pdf). You should thus always monitor error on a validation set during training and stop (with some patience) if your validation error does not improve enough.

# 6.4 Gradient noise

Neelakantan et al. [13] add noise that follows a Gaussian distribution $N(0, \sigma_t^2)$ to each gradient update:

$$ g_{t,i} = g_{t,i} + N(0, \sigma_t^2) \qquad (34) $$

They anneal the variance according to the following schedule:

$$ \sigma_t^2 = \frac{\eta}{(1 + t)^{\gamma}} \qquad (35) $$

They show that adding this noise makes networks more robust to poor initialization and helps training particularly deep and complex networks. They suspect that the added noise gives the model more chances to escape and find new local minima, which are more frequent for deeper models.
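Equations (34) and (35) amount to a two-line modification of any gradient-based update. The sketch below is an illustration; the values of eta and gamma are plausible settings in the spirit of [13], not prescribed ones.

```python
import numpy as np

def noisy_gradient(grad, t, eta=0.3, gamma=0.55):
    """Add annealed Gaussian noise to a gradient, following Eqs. (34)-(35)."""
    sigma2 = eta / (1.0 + t) ** gamma                   # Eq. (35): decaying variance
    noise = np.random.normal(0.0, np.sqrt(sigma2), size=grad.shape)
    return grad + noise                                 # Eq. (34)

# usage inside a plain SGD step at iteration t
theta = np.random.randn(10)
grad = np.random.randn(10)                              # stand-in for a real gradient
theta -= 0.01 * noisy_gradient(grad, t=100)
```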
# 7 Conclusion

In this article, we have initially looked at the three variants of gradient descent, among which mini-batch gradient descent is the most popular. We have then investigated algorithms that are most commonly used for optimizing SGD: Momentum, Nesterov accelerated gradient, Adagrad, Adadelta, RMSprop, Adam, AdaMax, Nadam, as well as different algorithms to optimize asynchronous SGD.
Finally, we've considered other strategies to improve SGD such as shuffling and curriculum learning, batch normalization, and early stopping.

# References

[1] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Jon Shlens, Benoit Steiner, Ilya Sutskever, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Oriol Vinyals, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2015.

[2] Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. Advances in Optimizing Recurrent Networks. 2012.

[3] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48, 2009.

[4] C. Darken, J. Chang, and J. Moody. Learning rate schedules for faster stochastic gradient search. Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop, (September):1–11, 1992.
[5] Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv, pages 1–14, 2014.
[6] Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large Scale Distributed Deep Networks. NIPS 2012: Neural Information Processing Systems, pages 1–11, 2012.

[7] Timothy Dozat. Incorporating Nesterov Momentum into Adam. ICLR Workshop, (1):2013–2016, 2016.
[8] John Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.

[9] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167v3, 2015.
[10] Diederik P. Kingma and Jimmy Lei Ba. Adam: a Method for Stochastic Optimization. International Conference on Learning Representations, pages 1–13, 2015.

[11] Yann LeCun, Leon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient BackProp. Neural Networks: Tricks of the Trade, 1524:9–50, 1998.

[12] H. Brendan McMahan and Matthew Streeter. Delay-Tolerant Algorithms for Asynchronous Distributed Online Learning. Advances in Neural Information Processing Systems (Proceedings of NIPS), pages 1–9, 2014.
[13] Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding Gradient Noise Improves Learning for Very Deep Networks. pages 1–11, 2015.

[14] Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady ANSSSR (translated as Soviet Math. Docl.), 269:543–547.

[15] Feng Niu, Benjamin Recht, Christopher Ré, and Stephen J. Wright. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. pages 1–22, 2011.
[16] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, 2014.

[17] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks: The Official Journal of the International Neural Network Society, 12(1):145–151, 1999.

[18] Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.

[19] Ilya Sutskever. Training Recurrent Neural Networks. PhD thesis, page 101, 2013.
[20] Richard S. Sutton. Two problems with backpropagation and other steepest-descent learning procedures for networks, 1986.

[21] Wojciech Zaremba and Ilya Sutskever. Learning to Execute. pages 1–25, 2014.

[22] Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint arXiv:1212.5701, 2012.

[23] Sixin Zhang, Anna Choromanska, and Yann LeCun. Deep learning with Elastic Averaging SGD. Neural Information Processing Systems Conference (NIPS 2015), pages 1–24, 2015.
# WAVENET: A GENERATIVE MODEL FOR RAW AUDIO

Aäron van den Oord, Sander Dieleman, Heiga Zen†, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, Koray Kavukcuoglu
{avdnoord, sedielem, heigazen, simonyan, vinyals, gravesa, nalk, andrewsenior, korayk}@google.com
Google DeepMind, London, UK
† Google, London, UK

# ABSTRACT
This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.
# 1 INTRODUCTION

This work explores raw audio generation techniques, inspired by recent advances in neural autoregressive generative models that model complex distributions such as images (van den Oord et al., 2016a;b) and text (Józefowicz et al., 2016). Modeling joint probabilities over pixels or words using neural architectures as products of conditional distributions yields state-of-the-art generation.

Remarkably, these architectures are able to model distributions over thousands of random variables (e.g. 64×64 pixels as in PixelRNN (van den Oord et al., 2016a)). The question this paper addresses is whether similar approaches can succeed in generating wideband raw audio waveforms, which are signals with very high temporal resolution, at least 16,000 samples per second (see Fig. 1).
[Figure 1: A second of generated speech.]

This paper introduces WaveNet, an audio generative model based on the PixelCNN (van den Oord et al., 2016a;b) architecture. The main contributions of this work are as follows:

• We show that WaveNets can generate raw speech signals with subjective naturalness never before reported in the field of text-to-speech (TTS), as assessed by human raters.
• In order to deal with long-range temporal dependencies needed for raw audio generation, we develop new architectures based on dilated causal convolutions, which exhibit very large receptive fields.

• We show that when conditioned on a speaker identity, a single model can be used to generate different voices.

• The same architecture shows strong results when tested on a small speech recognition dataset, and is promising when used to generate other audio modalities such as music.

We believe that WaveNets provide a generic and flexible framework for tackling many applications that rely on audio generation (e.g. TTS, music, speech enhancement, voice conversion, source separation).

# 2 WAVENET

In this paper we introduce a new generative model operating directly on the raw audio waveform. The joint probability of a waveform x = {x_1, . . . , x_T} is factorised as a product of conditional probabilities as follows:

$$ p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}) \qquad (1) $$

Each audio sample x_t is therefore conditioned on the samples at all previous timesteps. Similarly to PixelCNNs (van den Oord et al., 2016a;b), the conditional probability distribution is modelled by a stack of convolutional layers. There are no pooling layers in the network, and the output of the model has the same time dimensionality as the input.

The model outputs a categorical distribution over the next value x_t with a softmax layer and it is optimized to maximize the log-likelihood of the data w.r.t. the parameters. Because log-likelihoods are tractable, we tune hyper-parameters on a validation set and can easily measure if the model is overfitting or underfitting.

# 2.1 DILATED CAUSAL CONVOLUTIONS

[Figure 2: Visualization of a stack of causal convolutional layers.]

The main ingredient of WaveNet is causal convolutions.
By using causal convolutions, we make sure the model cannot violate the ordering in which we model the data: the prediction p(x_{t+1} | x_1, ..., x_t) emitted by the model at timestep t cannot depend on any of the future timesteps x_{t+1}, x_{t+2}, . . . , x_T as shown in Fig. 2. For images, the equivalent of a causal convolution is a masked convolution (van den Oord et al., 2016a) which can be implemented by constructing a mask tensor and doing an elementwise multiplication of this mask with the convolution kernel before applying it. For 1-D data such as audio one can more easily implement this by shifting the output of a normal convolution by a few timesteps.

At training time, the conditional predictions for all timesteps can be made in parallel because all timesteps of ground truth x are known. When generating with the model, the predictions are sequential: after each sample is predicted, it is fed back into the network to predict the next sample.
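For 1-D signals the shifting trick mentioned above is equivalent to left-padding the input before an ordinary convolution. The following NumPy sketch illustrates the idea for a single channel; it is not the paper's implementation.

```python
import numpy as np

def causal_conv1d(x, w):
    """Causal 1-D convolution: output[t] depends only on x[t] and earlier samples."""
    k = len(w)
    padded = np.concatenate([np.zeros(k - 1), x])   # left-pad so no future samples are seen
    return np.array([padded[t:t + k] @ w for t in range(len(x))])

x = np.random.randn(16)
y = causal_conv1d(x, w=np.array([0.2, 0.3, 0.5]))   # filter of length 3
assert y.shape == x.shape                           # same time dimensionality as the input
```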
Because models with causal convolutions do not have recurrent connections, they are typically faster to train than RNNs, especially when applied to very long sequences. One of the problems of causal convolutions is that they require many layers, or large filters to increase the receptive field. For example, in Fig. 2 the receptive field is only 5 (= #layers + filter length - 1). In this paper we use dilated convolutions to increase the receptive field by orders of magnitude, without greatly increasing computational cost.

A dilated convolution (also called à trous, or convolution with holes) is a convolution where the filter is applied over an area larger than its length by skipping input values with a certain step. It is equivalent to a convolution with a larger filter derived from the original filter by dilating it with zeros, but is significantly more efficient.
A dilated convolution effectively allows the network to operate on a coarser scale than with a normal convolution. This is similar to pooling or strided convolutions, but here the output has the same size as the input. As a special case, dilated convolution with dilation 1 yields the standard convolution. Fig. 3 depicts dilated causal convolutions for dilations 1, 2, 4, and 8. Dilated convolutions have previously been used in various contexts, e.g. signal processing (Holschneider et al., 1989; Dutilleux, 1989), and image segmentation (Chen et al., 2015; Yu & Koltun, 2016).
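The same padding trick extends to dilated causal convolutions by spacing the filter taps `dilation` steps apart. The helper and the receptive-field arithmetic below are a sketch that assumes filter length 2 and the 1, 2, 4, ..., 512 dilation schedule described in the text.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution whose taps are `dilation` timesteps apart."""
    k = len(w)
    span = (k - 1) * dilation                       # how far back the filter reaches
    padded = np.concatenate([np.zeros(span), x])
    return np.array([sum(w[i] * padded[t + i * dilation] for i in range(k))
                     for t in range(len(x))])

# receptive field of one 1, 2, 4, ..., 512 block with filter length 2
dilations = [2 ** i for i in range(10)]             # 1, 2, 4, ..., 512
receptive_field = 1 + sum((2 - 1) * d for d in dilations)
print(receptive_field)                              # 1024, matching the per-block size stated below
```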
[Figure 3: Visualization of a stack of dilated causal convolutional layers.]

Stacked dilated convolutions enable networks to have very large receptive fields with just a few layers, while preserving the input resolution throughout the network as well as computational efficiency. In this paper, the dilation is doubled for every layer up to a limit and then repeated: e.g. 1, 2, 4, . . . , 512, 1, 2, 4, . . . , 512, 1, 2, 4, . . . , 512.

The intuition behind this configuration is two-fold. First, exponentially increasing the dilation factor results in exponential receptive field growth with depth (Yu & Koltun, 2016). For example each 1, 2, 4, . . . , 512 block has a receptive field of size 1024, and can be seen as a more efficient and discriminative (non-linear) counterpart of a 1×1024 convolution. Second, stacking these blocks further increases the model capacity and the receptive field size.

# 2.2 SOFTMAX DISTRIBUTIONS

One approach to modeling the conditional distributions p(x_t | x_1, . . . , x_{t-1}) over the individual audio samples would be to use a mixture model such as a mixture density network (Bishop, 1994) or mixture of conditional Gaussian scale mixtures (MCGSM) (Theis & Bethge, 2015). However, van den Oord et al. (2016a) showed that a softmax distribution tends to work better, even when the data is implicitly continuous (as is the case for image pixel intensities or audio sample values). One of the reasons is that a categorical distribution is more flexible and can more easily model arbitrary distributions because it makes no assumptions about their shape.
Because raw audio is typically stored as a sequence of 16-bit integer values (one per timestep), a softmax layer would need to output 65,536 probabilities per timestep to model all possible values. To make this more tractable, we first apply a µ-law companding transformation (ITU-T, 1988) to the data, and then quantize it to 256 possible values:

$$ f(x_t) = \operatorname{sign}(x_t) \, \frac{\ln(1 + \mu |x_t|)}{\ln(1 + \mu)}, $$

where -1 < x_t < 1 and µ = 255. This non-linear quantization produces a significantly better reconstruction than a simple linear quantization scheme. Especially for speech, we found that the reconstructed signal after quantization sounded very similar to the original.
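A small sketch of the companding and 256-level quantization described above, together with an approximate inverse; the choice of a uniform quantization grid over the companded range is an assumption about one reasonable realization.

```python
import numpy as np

def mu_law_encode(x, mu=255, bins=256):
    """Compand x in (-1, 1), then quantize to integer classes 0..bins-1."""
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    edges = np.linspace(-1.0, 1.0, bins + 1)[1:-1]          # interior bin boundaries
    return np.digitize(companded, edges)

def mu_law_decode(q, mu=255, bins=256):
    """Map class indices back to waveform amplitudes (inverse of the companding)."""
    companded = 2.0 * (q + 0.5) / bins - 1.0                # bin centers in (-1, 1)
    return np.sign(companded) * ((1.0 + mu) ** np.abs(companded) - 1.0) / mu

x = np.clip(np.random.randn(16000) * 0.1, -0.999, 0.999)   # stand-in for a second of audio
q = mu_law_encode(x)                                        # targets for the 256-way softmax
x_rec = mu_law_decode(q)
```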
# 2.3 GATED ACTIVATION UNITS

We use the same gated activation unit as used in the gated PixelCNN (van den Oord et al., 2016b):

$$ \mathbf{z} = \tanh(W_{f,k} * \mathbf{x}) \odot \sigma(W_{g,k} * \mathbf{x}), \qquad (2) $$

where * denotes a convolution operator, ⊙ denotes an element-wise multiplication operator, σ(·) is a sigmoid function, k is the layer index, f and g denote filter and gate, respectively, and W is a learnable convolution filter. In our initial experiments, we observed that this non-linearity worked significantly better than the rectified linear activation function (Nair & Hinton, 2010) for modeling audio signals.
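A single-channel sketch of the gated unit in Eq. (2); real layers are multi-channel dilated convolutions with learned filters W_{f,k} and W_{g,k}, so the scalar filters here are purely illustrative.

```python
import numpy as np

def causal_conv(x, w, dilation=1):
    """Dilated causal 1-D convolution, implemented with a zero-stuffed kernel."""
    wd = np.zeros((len(w) - 1) * dilation + 1)
    wd[::dilation] = w
    return np.convolve(x, wd)[:len(x)]      # keep only the causal part of the output

def gated_activation(x, w_f, w_g, dilation=1):
    """z = tanh(W_f * x) ⊙ sigmoid(W_g * x), as in Eq. (2)."""
    filt = np.tanh(causal_conv(x, w_f, dilation))
    gate = 1.0 / (1.0 + np.exp(-causal_conv(x, w_g, dilation)))
    return filt * gate

z = gated_activation(np.random.randn(64), np.random.randn(2), np.random.randn(2), dilation=4)
```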
# 2.4 RESIDUAL AND SKIP CONNECTIONS

[Figure 4: Overview of the residual block and the entire architecture.]

Both residual (He et al., 2015) and parameterised skip connections are used throughout the network, to speed up convergence and enable training of much deeper models. In Fig. 4 we show a residual block of our model, which is stacked many times in the network.
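A schematic of one residual block from Fig. 4, reusing the causal_conv and gated_activation helpers sketched in Section 2.3. In this single-channel setting the 1×1 convolutions on the residual and skip paths reduce to scalar multiplications, so the scalars below are stand-ins; channel counts, filter widths and the final post-processing layers are assumptions, not the paper's exact configuration.

```python
import numpy as np

def residual_block(x, w_f, w_g, dilation, w_res=0.5, w_skip=0.5):
    """Return the residual output for the next block and this block's skip contribution."""
    z = gated_activation(x, w_f, w_g, dilation)     # gated unit of Eq. (2)
    return x + w_res * z, w_skip * z                # shortcut + "1x1" residual, "1x1" skip

x = np.random.randn(128)
skips = []
for d in [1, 2, 4, 8, 16]:                          # a small stack of dilated blocks
    x, s = residual_block(x, np.random.randn(2), np.random.randn(2), d)
    skips.append(s)
out = np.maximum(sum(skips), 0.0)                   # ReLU over the summed skip connections
# a full model would follow this with 1x1 convolutions and a 256-way softmax per timestep
```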
# 2.5 CONDITIONAL WAVENETS

Given an additional input h, WaveNets can model the conditional distribution p(x | h) of the audio given this input. Eq. (1) now becomes

$$ p(\mathbf{x} \mid \mathbf{h}) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}, \mathbf{h}). \qquad (3) $$

By conditioning the model on other input variables, we can guide WaveNet's generation to produce audio with the required characteristics. For example, in a multi-speaker setting we can choose the speaker by feeding the speaker identity to the model as an extra input. Similarly, for TTS we need to feed information about the text as an extra input.

We condition the model on other inputs in two different ways: global conditioning and local conditioning. Global conditioning is characterised by a single latent representation h that influences the output distribution across all timesteps, e.g. a speaker embedding in a TTS model. The activation function from Eq. (2) now becomes:

$$ \mathbf{z} = \tanh(W_{f,k} * \mathbf{x} + V_{f,k}^{T}\mathbf{h}) \odot \sigma(W_{g,k} * \mathbf{x} + V_{g,k}^{T}\mathbf{h}), $$
where V_{*,k} is a learnable linear projection, and the vector V_{*,k}^{T} h is broadcast over the time dimension.

For local conditioning we have a second timeseries h_t, possibly with a lower sampling frequency than the audio signal, e.g. linguistic features in a TTS model. We first transform this time series using a transposed convolutional network (learned upsampling) that maps it to a new time series y = f(h) with the same resolution as the audio signal, which is then used in the activation unit as follows:

$$ \mathbf{z} = \tanh(W_{f,k} * \mathbf{x} + V_{f,k} * \mathbf{y}) \odot \sigma(W_{g,k} * \mathbf{x} + V_{g,k} * \mathbf{y}), $$

where V_{f,k} * y is now a 1×1 convolution. As an alternative to the transposed convolutional network, it is also possible to use V_{f,k} * h and repeat these values across time. We saw that this worked slightly worse in our experiments.
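To illustrate global conditioning, the sketch below adds a projected conditioning vector h inside both nonlinearities of the gated unit, broadcast over time. It is again a single-channel simplification, reusing causal_conv from the sketch in Section 2.3; the one-hot speaker vector and projection sizes are hypothetical.

```python
import numpy as np

def conditioned_gated_unit(x, w_f, w_g, h, v_f, v_g, dilation=1):
    """Globally conditioned gated unit: tanh(W_f*x + V_f h) ⊙ sigmoid(W_g*x + V_g h)."""
    filt = causal_conv(x, w_f, dilation) + v_f @ h   # the projection is a scalar here,
    gate = causal_conv(x, w_g, dilation) + v_g @ h   # broadcast over all timesteps
    return np.tanh(filt) * (1.0 / (1.0 + np.exp(-gate)))

speaker = np.zeros(109)
speaker[7] = 1.0                                     # hypothetical one-hot speaker identity
z = conditioned_gated_unit(np.random.randn(160), np.random.randn(2), np.random.randn(2),
                           speaker, np.random.randn(109), np.random.randn(109))
```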
# 2.6 CONTEXT STACKS

We have already mentioned several different ways to increase the receptive field size of a WaveNet: increasing the number of dilation stages, using more layers, larger filters, greater dilation factors, or a combination thereof. A complementary approach is to use a separate, smaller context stack that processes a long part of the audio signal and locally conditions a larger WaveNet that processes only a smaller part of the audio signal (cropped at the end). One can use multiple context stacks with varying lengths and numbers of hidden units. Stacks with larger receptive fields have fewer units per layer. Context stacks can also have pooling layers to run at a lower frequency. This keeps the computational requirements at a reasonable level and is consistent with the intuition that less capacity is required to model temporal correlations at longer timescales.
# 3 EXPERIMENTS

To measure WaveNet's audio modelling performance, we evaluate it on three different tasks: multi-speaker speech generation (not conditioned on text), TTS, and music audio modelling. We provide samples drawn from WaveNet for these experiments on the accompanying webpage: https://www.deepmind.com/blog/wavenet-generative-model-raw-audio/.

# 3.1 MULTI-SPEAKER SPEECH GENERATION

For the first experiment we looked at free-form speech generation (not conditioned on text). We used the English multi-speaker corpus from the CSTR voice cloning toolkit (VCTK) (Yamagishi, 2012) and conditioned WaveNet only on the speaker. The conditioning was applied by feeding the speaker ID to the model in the form of a one-hot vector. The dataset consisted of 44 hours of data from 109 different speakers.

Because the model is not conditioned on text, it generates non-existent but human language-like words in a smooth way with realistic sounding intonations. This is similar to generative models of language or images, where samples look realistic at first glance, but are clearly unnatural upon closer inspection. The lack of long range coherence is partly due to the limited size of the model's receptive field (about 300 milliseconds), which means it can only remember the last 2–3 phonemes it produced.

A single WaveNet was able to model speech from any of the speakers by conditioning it on a one-hot encoding of a speaker.
This confirms that it is powerful enough to capture the characteristics of all 109 speakers from the dataset in a single model. We observed that adding speakers resulted in better validation set performance compared to training solely on a single speaker. This suggests that WaveNet's internal representation was shared among multiple speakers.

Finally, we observed that the model also picked up on other characteristics in the audio apart from the voice itself. For instance, it also mimicked the acoustics and recording quality, as well as the breathing and mouth movements of the speakers.
# 3.2 TEXT-TO-SPEECH

For the second experiment we looked at TTS. We used the same single-speaker speech databases from which Google's North American English and Mandarin Chinese TTS systems are built. The North American English dataset contains 24.6 hours of speech data, and the Mandarin Chinese dataset contains 34.8 hours; both were spoken by professional female speakers.

WaveNets for the TTS task were locally conditioned on linguistic features which were derived from input texts. We also trained WaveNets conditioned on the logarithmic fundamental frequency (log F0) values in addition to the linguistic features. External models predicting log F0 values and phone durations from linguistic features were also trained for each language.
The receptive field size of the WaveNets was 240 milliseconds. As example-based and model-based speech synthesis baselines, hidden Markov model (HMM)-driven unit selection concatenative (Gonzalvo et al., 2016) and long short-term memory recurrent neural network (LSTM-RNN)-based statistical parametric (Zen et al., 2016) speech synthesizers were built. Since the same datasets and linguistic features were used to train both the baselines and WaveNets, these speech synthesizers could be fairly compared.

To evaluate the performance of WaveNets for the TTS task, subjective paired comparison tests and mean opinion score (MOS) tests were conducted. In the paired comparison tests, after listening to each pair of samples, the subjects were asked to choose which they preferred, though they could choose "neutral" if they did not have any preference. In the MOS tests, after listening to each stimulus, the subjects were asked to rate the naturalness of the stimulus on a five-point Likert scale (1: Bad, 2: Poor, 3: Fair, 4: Good, 5: Excellent). Please refer to Appendix B for details.

Fig. 5 shows a selection of the subjective paired comparison test results (see Appendix B for the complete table). It can be seen from the results that WaveNet outperformed the baseline statistical parametric and concatenative speech synthesizers in both languages. We found that WaveNet conditioned on linguistic features could synthesize speech samples with natural segmental quality but sometimes it had unnatural prosody by stressing wrong words in a sentence. This could be due to the long-term dependency of F0 contours: the size of the receptive field of the WaveNet, 240 milliseconds, was not long enough to capture such long-term dependency. WaveNet conditioned on both linguistic features and F0 values did not have this problem: the external F0 prediction model runs at a lower frequency (200 Hz) so it can learn long-range dependencies that exist in F0 contours.
Table 1 shows the MOS test results. It can be seen from the table that WaveNets achieved 5-scale MOSs in naturalness above 4.0, which were significantly better than those from the baseline systems. They were the highest ever reported MOS values with these training datasets and test sentences. The gap in the MOSs from the best synthetic speech to the natural ones decreased from 0.69 to 0.34 (51%) in US English and 0.42 to 0.13 (69%) in Mandarin Chinese.

| Speech samples | Subjective 5-scale MOS in naturalness: North American English | Mandarin Chinese |
|---|---|---|
| LSTM-RNN parametric | 3.67 ± 0.098 | 3.79 ± 0.084 |
| HMM-driven concatenative | 3.86 ± 0.137 | 3.47 ± 0.108 |
| WaveNet (L+F) | 4.21 ± 0.081 | 4.08 ± 0.085 |
| Natural (8-bit µ-law) | 4.46 ± 0.067 | 4.25 ± 0.082 |
| Natural (16-bit linear PCM) | 4.55 ± 0.075 | 4.21 ± 0.071 |
Table 1: Subjective 5-scale mean opinion scores of speech samples from LSTM-RNN-based statistical parametric, HMM-driven unit selection concatenative, and proposed WaveNet-based speech synthesizers, 8-bit µ-law encoded natural speech, and 16-bit linear pulse-code modulation (PCM) natural speech. WaveNet improved the previous state of the art significantly, reducing the gap between natural speech and the best previous model by more than 50%.
[Figure 5: Subjective preference scores (%) of speech samples between (top) two baselines, (middle) two WaveNets, and (bottom) the best baseline and WaveNet. Note that LSTM and Concat correspond to LSTM-RNN-based statistical parametric and HMM-driven unit selection concatenative baseline synthesizers, and WaveNet (L) and WaveNet (L+F) correspond to the WaveNet conditioned on linguistic features only and that conditioned on both linguistic features and log F0 values.]

# 3.3 MUSIC

For our third set of experiments we trained WaveNets to model two music datasets:
• the MagnaTagATune dataset (Law & Von Ahn, 2009), which consists of about 200 hours of music audio. Each 29-second clip is annotated with tags from a set of 188, which describe the genre, instrumentation, tempo, volume and mood of the music.

• the YouTube piano dataset, which consists of about 60 hours of solo piano music obtained from YouTube videos. Because it is constrained to a single instrument, it is considerably easier to model.
Although it is difficult to quantitatively evaluate these models, a subjective evaluation is possible by listening to the samples they produce. We found that enlarging the receptive field was crucial to obtain samples that sounded musical. Even with a receptive field of several seconds, the models did not enforce long-range consistency which resulted in second-to-second variations in genre, instrumentation, volume and sound quality. Nevertheless, the samples were often harmonic and aesthetically pleasing, even when produced by unconditional models.

Of particular interest are conditional music models, which can generate music given a set of tags specifying e.g. genre or instruments. Similarly to conditional speech models, we insert biases that depend on a binary vector representation of the tags associated with each training clip. This makes it possible to control various aspects of the output of the model when sampling, by feeding in a binary vector that encodes the desired properties of the samples. We have trained such models on the MagnaTagATune dataset; although the tag data bundled with the dataset was relatively noisy and had many omissions, after cleaning it up by merging similar tags and removing those with too few associated clips, we found this approach to work reasonably well.

# 3.4 SPEECH RECOGNITION

Although WaveNet was designed as a generative model, it can straightforwardly be adapted to discriminative audio tasks such as speech recognition.

Traditionally, speech recognition research has largely focused on using log mel-filterbank energies or mel-frequency cepstral coefficients (MFCCs), but has been moving to raw audio recently (Palaz et al., 2013; Tüske et al., 2014; Hoshen et al., 2015; Sainath et al., 2015). Recurrent neural networks such as LSTM-RNNs (Hochreiter & Schmidhuber, 1997) have been a key component in these new speech classification pipelines, because they allow for building models with long range contexts. With WaveNets we have shown that layers of dilated convolutions allow the receptive field to grow longer in a much cheaper way than using LSTM units.

As a last experiment we looked at speech recognition with WaveNets on the TIMIT (Garofolo et al., 1993) dataset.
For this task we added a mean-pooling layer after the dilated convolutions that aggregated the activations to coarser frames spanning 10 milliseconds (160× downsampling). The pooling layer was followed by a few non-causal convolutions. We trained WaveNet with two loss terms, one to predict the next sample and one to classify the frame; the model generalized better than with a single loss and achieved 18.8 PER on the test set, which is to our knowledge the best score obtained from a model trained directly on raw audio on TIMIT.

# 4 CONCLUSION

This paper has presented WaveNet, a deep generative model of audio data that operates directly at the waveform level. WaveNets are autoregressive and combine causal filters with dilated convolutions to allow their receptive fields to grow exponentially with depth, which is important to model the long-range temporal dependencies in audio signals. We have shown how WaveNets can be conditioned on other inputs in a global (e.g. speaker identity) or local way (e.g. linguistic features). When applied to TTS, WaveNets produced samples that outperform the current best TTS systems in subjective naturalness. Finally, WaveNets showed very promising results when applied to music audio modeling and speech recognition.
# ACKNOWLEDGEMENTS

The authors would like to thank Lasse Espeholt, Jeffrey De Fauw and Grzegorz Swirszcz for their inputs, Adam Cain, Max Cant and Adrian Bolton for their help with artwork, Helen King, Steven Gaffney and Steve Crossan for helping to manage the project, Faith Mackinder for help with preparing the blogpost, James Besley for legal support and Demis Hassabis for managing the project and his inputs.

# REFERENCES
Agiomyrgiannakis, Yannis. Vocaine the vocoder and applications in speech synthesis. In ICASSP, pp. 4230–4234, 2015.

Bishop, Christopher M. Mixture density networks. Technical Report NCRG/94/004, Neural Computing Research Group, Aston University, 1994.

Chen, Liang-Chieh, Papandreou, George, Kokkinos, Iasonas, Murphy, Kevin, and Yuille, Alan L. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015. URL http://arxiv.org/abs/1412.7062.

Chiba, Tsutomu and Kajiyama, Masato. The Vowel: Its Nature and Structure. Tokyo-Kaiseikan, 1942.
Dudley, Homer. Remaking speech. The Journal of the Acoustical Society of America, 11(2):169–177, 1939.

Dutilleux, Pierre. An implementation of the "algorithme à trous" to compute the wavelet transform. In Combes, Jean-Michel, Grossmann, Alexander, and Tchamitchian, Philippe (eds.), Wavelets: Time-Frequency Methods and Phase Space, pp. 298–304. Springer Berlin Heidelberg, 1989.
Fan, Yuchen, Qian, Yao, Xie, Feng-Long, and Soong, Frank K. TTS synthesis with bidirectional LSTM based recurrent neural networks. In Interspeech, pp. 1964–1968, 2014.

Fant, Gunnar. Acoustic Theory of Speech Production. Mouton De Gruyter, 1970.

Garofolo, John S., Lamel, Lori F., Fisher, William M., Fiscus, Jonathon G., and Pallett, David S. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon technical report, 93, 1993.
Gonzalvo, Xavi, Tazari, Siamak, Chan, Chun-an, Becker, Markus, Gutkin, Alexander, and Silen, Hanna. Recent advances in Google real-time HMM-driven unit selection synthesizer. In Interspeech, 2016. URL http://research.google.com/pubs/pub45564.html.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735–1780, 1997.

Holschneider, Matthias, Kronland-Martinet, Richard, Morlet, Jean, and Tchamitchian, Philippe. A real-time algorithm for signal analysis with the help of the wavelet transform. In Combes, Jean-Michel, Grossmann, Alexander, and Tchamitchian, Philippe (eds.), Wavelets: Time-Frequency Methods and Phase Space, pp. 286–297. Springer Berlin Heidelberg, 1989.
Hoshen, Yedid, Weiss, Ron J., and Wilson, Kevin W. Speech acoustic modeling from raw multichannel waveforms. In ICASSP, pp. 4624–4628. IEEE, 2015.

Hunt, Andrew J. and Black, Alan W. Unit selection in a concatenative speech synthesis system using a large speech database. In ICASSP, pp. 373–376, 1996.

Imai, Satoshi and Furuichi, Chieko. Unbiased estimation of log spectrum. In EURASIP, pp. 203–206, 1988.
Itakura, Fumitada. Line spectrum representation of linear predictor coefficients of speech signals. The Journal of the Acoust. Society of America, 57(S1):S35–S35, 1975.

Itakura, Fumitada and Saito, Shuzo. A statistical method for estimation of speech spectral density and formant frequencies. Trans. IEICE, J53A:35–42, 1970.
ITU-T. Recommendation G. 711. Pulse Code Modulation (PCM) of voice frequencies, 1988.

Józefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. URL http://arxiv.org/abs/1602.02410.

Juang, Biing-Hwang and Rabiner, Lawrence. Mixture autoregressive hidden Markov models for speech signals. IEEE Trans. Acoust. Speech Signal Process., pp. 1404–1413, 1985.
Kameoka, Hirokazu, Ohishi, Yasunori, Mochihashi, Daichi, and Le Roux, Jonathan. Speech analysis with multi-kernel linear prediction. In Spring Conference of ASJ, pp. 499–502, 2010. (in Japanese).

Karaali, Orhan, Corrigan, Gerald, Gerson, Ira, and Massey, Noel. Text-to-speech conversion with neural networks: A recurrent TDNN approach. In Eurospeech, pp. 561–564, 1997.

Kawahara, Hideki, Masuda-Katsuse, Ikuyo, and de Cheveigné, Alain. Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds. Speech Commn., 27:187–207, 1999.
Kawahara, Hideki, Estill, Jo, and Fujimura, Osamu. Aperiodicity extraction and control using mixed mode excitation and group delay manipulation for a high quality speech analysis, modification and synthesis system STRAIGHT. In MAVEBA, pp. 13–15, 2001.
Law, Edith and Von Ahn, Luis. Input-agreement: a new mechanism for collecting data using human computation games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1197–1206. ACM, 2009.

Maia, Ranniery, Zen, Heiga, and Gales, Mark J. F. Statistical parametric speech synthesis with joint estimation of acoustic and excitation model parameters. In ISCA SSW7, pp. 88–93, 2010.
Morise, Masanori, Yokomori, Fumiya, and Ozawa, Kenji. WORLD: A vocoder-based high-quality speech synthesis system for real-time applications. IEICE Trans. Inf. Syst., E99-D(7):1877–1884, 2016.

Moulines, Eric and Charpentier, Francis. Pitch synchronous waveform processing techniques for text-to-speech synthesis using diphones. Speech Commn., 9:453–467, 1990.

Muthukumar, P. and Black, Alan W. A deep learning approach to data-driven parameterizations for statistical parametric speech synthesis. arXiv:1409.8558, 2014.

Nair, Vinod and Hinton, Geoffrey E. Rectified linear units improve restricted Boltzmann machines. In ICML, pp. 807–814, 2010.
Nakamura, Kazuhiro, Hashimoto, Kei, Nankaku, Yoshihiko, and Tokuda, Keiichi. Integration of spectral feature extraction and modeling for HMM-based speech synthesis. IEICE Trans. Inf. Syst., E97-D(6):1438–1448, 2014.

Palaz, Dimitri, Collobert, Ronan, and Magimai-Doss, Mathew. Estimating phoneme class conditional probabilities from raw speech signal using convolutional neural networks. In Interspeech, pp. 1766–1770, 2013.
Peltonen, Sari, Gabbouj, Moncef, and Astola, Jaakko. Nonlinear filter design: methodologies and challenges. In IEEE ISPA, pp. 102–107, 2001.

Poritz, Alan B. Linear predictive hidden Markov models and the speech signal. In ICASSP, pp. 1291–1294, 1982.

Rabiner, Lawrence and Juang, Biing-Hwang. Fundamentals of Speech Recognition. Prentice Hall, 1993.
Sagisaka, Yoshinori, Kaiki, Nobuyoshi, Iwahashi, Naoto, and Mimura, Katsuhiko. ATR ν-talk speech synthesis system. In ICSLP, pp. 483–486, 1992.

Sainath, Tara N., Weiss, Ron J., Senior, Andrew, Wilson, Kevin W., and Vinyals, Oriol. Learning the speech front-end with raw waveform CLDNNs. In Interspeech, pp. 1–5, 2015.
Takaki, Shinji and Yamagishi, Junichi. A deep auto-encoder based low-dimensional feature extraction from FFT spectral envelopes for statistical parametric speech synthesis. In ICASSP, pp. 5535–5539, 2016.

Takamichi, Shinnosuke, Toda, Tomoki, Black, Alan W., Neubig, Graham, Sakti, Sakriani, and Nakamura, Satoshi. Postfilters to modify the modulation spectrum for statistical parametric speech synthesis. IEEE/ACM Trans. Audio Speech Lang. Process., 24(4):755–767, 2016.
Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. In NIPS, pp. 1927–1935, 2015.

Toda, Tomoki and Tokuda, Keiichi. A speech parameter generation algorithm considering global variance for HMM-based speech synthesis. IEICE Trans. Inf. Syst., E90-D(5):816–824, 2007.

Toda, Tomoki and Tokuda, Keiichi. Statistical approach to vocal tract transfer function estimation based on factor analyzed trajectory HMM. In ICASSP, pp. 3925–3928, 2008.

Tokuda, Keiichi. Speech synthesis as a statistical machine learning problem. http://www.sp.nitech.ac.jp/~tokuda/tokuda_asru2011_for_pdf.pdf, 2011. Invited talk given at ASRU.

Tokuda, Keiichi and Zen, Heiga. Directly modeling speech waveforms by neural networks for statistical parametric speech synthesis. In ICASSP, pp. 4215–4219, 2015.
Tokuda, Keiichi and Zen, Heiga. Directly modeling voiced and unvoiced components in speech waveforms by neural networks. In ICASSP, pp. 5640–5644, 2016.

Tuerk, Christine and Robinson, Tony. Speech synthesis using artificial neural networks trained on cepstral coefficients. In Proc. Eurospeech, pp. 1713–1716, 1993.

Tüske, Zoltán, Golik, Pavel, Schlüter, Ralf, and Ney, Hermann. Acoustic modeling with deep neural networks using raw time signal for LVCSR. In Interspeech, pp. 890–894, 2014.
Uria, Benigno, Murray, Iain, Renals, Steve, Valentini-Botinhao, Cassia, and Bridle, John. Modelling acoustic feature dependencies with artificial neural networks: Trajectory-RNADE. In ICASSP, pp. 4465–4469, 2015.

van den Oord, Aäron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016a.

van den Oord, Aäron, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016b. URL http://arxiv.org/abs/1606.05328.

Wu, Yi-Jian and Tokuda, Keiichi. Minimum generation error training with direct log spectral distortion on LSPs for HMM-based speech synthesis. In Interspeech, pp. 577–580, 2008.
Yamagishi, Junichi. English multi-speaker corpus for CSTR voice cloning toolkit, 2012. URL http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html.

Yoshimura, Takayoshi. Simultaneous modeling of phonetic and prosodic parameters, and characteristic conversion for HMM-based text-to-speech systems. PhD thesis, Nagoya Institute of Technology, 2002.

Yu, Fisher and Koltun, Vladlen. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016. URL http://arxiv.org/abs/1511.07122.
Zen, Heiga. An example of context-dependent label format for HMM-based speech synthesis in English, 2006. URL http://hts.sp.nitech.ac.jp/?Download.

[Figure 6: Outline of statistical parametric speech synthesis.]

Zen, Heiga, Tokuda, Keiichi, and Kitamura, Tadashi. Reformulating the HMM as a trajectory model by imposing explicit relationships between static and dynamic features. Comput. Speech Lang., 21(1):153–173, 2007.
Zen, Heiga, Tokuda, Keiichi, and Black, Alan W. Statistical parametric speech synthesis. Speech Commn., 51(11):1039–1064, 2009.

Zen, Heiga, Senior, Andrew, and Schuster, Mike. Statistical parametric speech synthesis using deep neural networks. In Proc. ICASSP, pp. 7962–7966, 2013.

Zen, Heiga, Agiomyrgiannakis, Yannis, Egberts, Niels, Henderson, Fergus, and Szczepaniak, Przemysław. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices. In Interspeech, 2016. URL https://arxiv.org/abs/1606.06061.
# A TEXT-TO-SPEECH BACKGROUND

The goal of TTS synthesis is to render naturally sounding speech signals given a text to be synthesized. The human speech production process first translates a text (or concept) into movements of muscles associated with articulators and speech production-related organs. Then using air-flow from the lungs, vocal source excitation signals, which contain both periodic (by vocal cord vibration) and aperiodic (by turbulent noise) components, are generated. By filtering the vocal source excitation signals by time-varying vocal tract transfer functions controlled by the articulators, their frequency characteristics are modulated. Finally, the generated speech signals are emitted. The aim of TTS is to mimic this process by computers in some way.

TTS can be viewed as a sequence-to-sequence mapping problem: from a sequence of discrete symbols (text) to a real-valued time series (speech signals). A typical TTS pipeline has two parts: 1) text analysis and 2) speech synthesis. The text analysis part typically includes a number of natural language processing (NLP) steps, such as sentence segmentation, word segmentation, text normalization, part-of-speech (POS) tagging, and grapheme-to-phoneme (G2P) conversion. It takes a word sequence as input and outputs a phoneme sequence with a variety of linguistic contexts. The speech synthesis part takes the context-dependent phoneme sequence as its input and outputs a synthesized speech waveform. This part typically includes prosody prediction and speech waveform generation.

There are two main approaches to realize the speech synthesis part: the non-parametric, example-based approach known as concatenative speech synthesis (Moulines & Charpentier, 1990; Sagisaka et al., 1992; Hunt & Black, 1996), and the parametric, model-based approach known as statistical parametric speech synthesis (Yoshimura, 2002; Zen et al., 2009). The concatenative approach builds up the utterance from units of recorded speech, whereas the statistical parametric approach uses a generative model to synthesize the speech.
The statistical parametric approach first extracts a sequence of vocoder parameters (Dudley, 1939) o = {o_1, . . . , o_N} from speech signals x = {x_1, . . . , x_T} and linguistic features l from the text W, where N and T correspond to the numbers of vocoder parameter vectors and speech signals. Typically a vocoder parameter vector o_n is extracted at every 5 milliseconds. It often includes cepstra (Imai & Furuichi, 1988) or line spectral pairs (Itakura, 1975), which represent the vocal tract transfer function, and fundamental frequency (F0) and aperiodicity (Kawahara et al., 2001), which represent characteristics of vocal source excitation signals. Then a set of generative models, such as hidden Markov models (HMMs) (Yoshimura, 2002), feed-forward neural networks (Zen et al., 2013), and recurrent neural networks (Tuerk & Robinson, 1993; Karaali et al., 1997; Fan et al., 2014), is trained from the extracted vocoder parameters and linguistic features
as

$$ \hat{\Lambda} = \arg\max_{\Lambda} p(\mathbf{o} \mid \mathbf{l}, \Lambda), \qquad (4) $$

where Λ denotes the set of parameters of the generative model. At the synthesis stage, the most probable vocoder parameters are generated given linguistic features extracted from a text to be synthesized as

$$ \hat{\mathbf{o}} = \arg\max_{\mathbf{o}} p(\mathbf{o} \mid \mathbf{l}, \hat{\Lambda}). \qquad (5) $$

Then a speech waveform is reconstructed from $\hat{\mathbf{o}}$ using a vocoder. The statistical parametric approach offers various advantages over the concatenative one such as small footprint and flexibility to change its voice characteristics. However, its subjective naturalness is often significantly worse than that of the concatenative approach; synthesized speech often sounds muffled and has artifacts.
1609.03499#50 | WaveNet: A Generative Model for Raw Audio | ed and has artifacts. Zen et al. (2009) reported three major factors that can degrade the subjective naturalness: the quality of vocoders, the accuracy of generative models, and the effect of oversmoothing. The first factor causes the artifacts, and the second and third factors lead to the muffled sound in the synthesized speech. There have been a number of attempts to address these issues individually, such as developing high-quality vocoders (Kawahara et al., 1999; Agiomyrgiannakis, 2015; Morise et al., 2016), improving the accuracy of generative models (Zen et al., 2007; 2013; Fan et al., 2014; Uria et al., 2015), and compensating for the oversmoothing effect (Toda & Tokuda, 2007; Takamichi et al., 2016). Zen et al. (2016) showed that state-of-the-art statistical parametric speech synthesizers matched state-of-the-art concatenative ones in some languages. However, vocoded sound quality is still a major issue. Extracting vocoder parameters can be viewed as estimation of generative model parameters given speech signals (Itakura & Saito, 1970; Imai & Furuichi, 1988). For example, linear predictive analysis (Itakura & Saito, 1970), which has been used in speech coding, assumes that the generative model of speech signals is a linear auto-regressive (AR) zero-mean Gaussian process: x_t = Σ_{p=1}^{P} a_p x_{t-p} + e_t, (6) e_t ~ N(0, G²), (7) where a_p is a p-th order linear predictive coefficient (LPC) and G² is the variance of the modeling error. These parameters are estimated based on the maximum likelihood (ML) criterion. In this sense, the training part of the statistical parametric approach can be viewed as a two-step optimization and is sub-optimal: first extract vocoder parameters by fitting a generative model of speech signals, then model trajectories of the extracted vocoder parameters by a separate generative model for time series (Tokuda, 2011). | 1609.03499#49 | 1609.03499#51 | 1609.03499 | [
"1601.06759"
] |
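To make the linear predictive analysis of Eqs. (6)-(7) above concrete, here is a minimal NumPy sketch that fits the AR/LPC coefficients of one audio frame by solving the autocorrelation (Yule-Walker) normal equations, the classical route to the ML estimate under the zero-mean Gaussian assumption. The frame length, model order, and synthetic input are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def lpc_fit(frame, order=16):
    """Fit AR coefficients a_1..a_P and error variance G^2 for
    x_t = sum_p a_p x_{t-p} + e_t, e_t ~ N(0, G^2)  (Eqs. 6-7)."""
    # Autocorrelation r[0..order] of the frame
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    # Toeplitz normal-equation matrix R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])      # Yule-Walker solution
    g2 = (r[0] - np.dot(a, r[1:order + 1])) / len(frame)  # residual (modeling error) variance estimate
    return a, g2

# Illustrative usage on a synthetic 25 ms frame at 16 kHz (400 samples)
rng = np.random.default_rng(0)
frame = rng.standard_normal(400)
a, g2 = lpc_fit(frame, order=16)
print(a.shape, g2)
```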
1609.03499#51 | WaveNet: A Generative Model for Raw Audio | There have been attempts to integrate these two steps into a single one (Toda & Tokuda, 2008; Wu & Tokuda, 2008; Maia et al., 2010; Nakamura et al., 2014; Muthukumar & Black, 2014; Tokuda & Zen, 2015; 2016; Takaki & Yamagishi, 2016). For example, Tokuda & Zen (2016) integrated a non-stationary, nonzero-mean Gaussian process generative model of speech signals and an LSTM-RNN-based sequence generative model into a single one and jointly optimized them by back-propagation. Although they showed that this model could approximate natural speech signals, its segmental naturalness was significantly worse than that of the non-integrated model due to over-generalization and over-estimation of noise components in speech signals. The conventional generative models of raw audio signals have a number of assumptions which are inspired by speech production, such as | 1609.03499#50 | 1609.03499#52 | 1609.03499 | [
"1601.06759"
] |
1609.03499#52 | WaveNet: A Generative Model for Raw Audio | • Use of a fixed-length analysis window: these models are typically based on a stationary stochastic process (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010). To model time-varying speech signals by a stationary stochastic process, parameters of these generative models are estimated within a fixed-length, overlapping and shifting analysis window (typically its length is 20 to 30 milliseconds, and the shift is 5 to 10 milliseconds). However, some phones such as stops are time-limited to less than 20 milliseconds (Rabiner & Juang, 1993). Therefore, using such a fixed-size analysis window has limitations. • Linear filter: these generative models are typically realized as a linear time-invariant filter (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010) within a windowed frame. However, the relationship between successive audio samples can be highly non-linear. | 1609.03499#51 | 1609.03499#53 | 1609.03499 | [
"1601.06759"
] |
1609.03499#53 | WaveNet: A Generative Model for Raw Audio | • Gaussian process assumption: the conventional generative models are based on Gaussian processes (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Kameoka et al., 2010; Tokuda & Zen, 2015; 2016). From the point of view of the source-filter model of speech production (Chiba & Kajiyama, 1942; Fant, 1970), this is equivalent to assuming that a vocal source excitation signal is a sample from a Gaussian distribution (Itakura & Saito, 1970; Imai & Furuichi, 1988; Poritz, 1982; Juang & Rabiner, 1985; Tokuda & Zen, 2015; Kameoka et al., 2010; Tokuda & Zen, 2016). Together with the linear assumption above, it results in assuming that speech signals are normally distributed. However, distributions of real speech signals can be significantly different from Gaussian. Although these assumptions are convenient, samples from these generative models tend to be noisy and lose the important details that make these audio signals sound natural. WaveNet, which was described in Section 2, has none of the above-mentioned assumptions. It incorporates almost no prior knowledge about audio signals, except the choice of the receptive field and µ-law encoding of the signal. It can also be viewed as a non-linear causal filter for quantized signals. Although such a non-linear filter can represent complicated signals while preserving the details, designing such filters is usually difficult (Peltonen et al., 2001). | 1609.03499#52 | 1609.03499#54 | 1609.03499 | [
"1601.06759"
] |
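Since the chunk above notes that µ-law encoding of the quantized signal is essentially the only signal-level prior WaveNet uses, here is a small sketch of 8-bit µ-law companding and expansion (µ = 255, 256 quantization levels). The function names and the test tone are ours, not from the paper.

```python
import numpy as np

def mu_law_encode(x, mu=255):
    """Map a waveform in [-1, 1] to 256 discrete classes (8-bit mu-law)."""
    x = np.clip(x, -1.0, 1.0)
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)   # compand to [-1, 1]
    return ((y + 1.0) / 2.0 * mu + 0.5).astype(np.int64)       # quantize to {0, ..., 255}

def mu_law_decode(q, mu=255):
    """Invert the quantization back to an approximate waveform in [-1, 1]."""
    y = 2.0 * (q.astype(np.float64) / mu) - 1.0
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

x = np.sin(np.linspace(0, 20 * np.pi, 16000))                  # 1 s test tone at 16 kHz
q = mu_law_encode(x)
x_hat = mu_law_decode(q)
print(q.min(), q.max(), float(np.max(np.abs(x - x_hat))))      # 0..255, small reconstruction error
```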
1609.03499#54 | WaveNet: A Generative Model for Raw Audio | WaveNets give a way to train them from data. # B DETAILS OF TTS EXPERIMENT The HMM-driven unit selection and WaveNet TTS systems were built from speech at 16 kHz sampling. Although the LSTM-RNNs were trained from speech at 22.05 kHz sampling, speech at 16 kHz sampling was synthesized at runtime using a resampling functionality in the Vocaine vocoder (Agiomyrgiannakis, 2015). Both the LSTM-RNN-based statistical parametric and HMM-driven unit selection speech synthesizers were built from the speech datasets in 16-bit linear PCM, whereas the WaveNet-based ones were trained from the same speech datasets in 8-bit µ-law encoding. The linguistic features include phone, syllable, word, phrase, and utterance-level features (Zen, 2006) (e.g. phone identities, syllable stress, the number of syllables in a word, and the position of the current syllable in a phrase) with additional frame position and phone duration features (Zen et al., 2013). These features were derived and associated with speech every 5 milliseconds by phone-level forced alignment at the training stage. We used an LSTM-RNN-based phone duration model and an autoregressive CNN-based log F0 prediction model. They were trained so as to minimize the mean squared error (MSE). It is important to note that no post-processing was applied to the audio signals generated from the WaveNets. The subjective listening tests were blind and crowdsourced. 100 sentences not included in the training data were used for evaluation. Each subject could evaluate up to 8 and 63 stimuli for North American English and Mandarin Chinese, respectively. Test stimuli were randomly chosen and presented for each subject. In the paired comparison test, each pair of speech samples was the same text synthesized by the different models. In the MOS test, each stimulus was presented to subjects in isolation. Each pair was evaluated by eight subjects in the paired comparison test, and each stimulus was evaluated by eight subjects in the MOS test. The subjects were paid native speakers performing the task. Those ratings (about 40%) where headphones were not used were excluded when computing the preference and mean opinion scores. Table 2 shows the full details of the paired comparison test shown in Fig. 5. | 1609.03499#53 | 1609.03499#55 | 1609.03499 | [
"1601.06759"
] |
1609.03499#55 | WaveNet: A Generative Model for Raw Audio | Subjective preference (%) in naturalness WaveNet WaveNet No Language LSTM Concat (L) (L+F) preference p value North 23.3 63.6 13.1 | < 107% American 18.7 69.3 12.0] <10-° English 7.6 82.0 10.4 | < 107% 32.4 41.2 26.4 0.003 20.1 49.3 30.6 | <10-° 17.8 37.9 44.3 | <10-° Mandarin 50.6 15.6 33.8 | <10-° Chinese 25.0 23.3 51.8 0.476 12.5 29.3 58.2 | < 107% 17.6 43.1 39.3 | «<10-° 7.6 55.9 36.5 | <10-° 10.0 25.5 64.5 | <10-° Table 2: Subjective preference scores of speech samples between LSTM-RNN-based statistical parametric (LSTM), HMM-driven unit selection concatenative (Concat), and the proposed WaveNet-based speech synthesizers. Each row of the table denotes the scores of a paired comparison test between two synthesizers. Scores of the synthesizers which were significantly better than their competing ones at the p < 0.01 level are shown in bold type. Note that WaveNet (L) and WaveNet (L+F) correspond to WaveNet conditioned on linguistic features only and WaveNet conditioned on both linguistic features and F0 values. | 1609.03499#54 | 1609.03499#56 | 1609.03499 | [
"1601.06759"
] |
1609.03499#56 | WaveNet: A Generative Model for Raw Audio | 15 | 1609.03499#55 | 1609.03499 | [
"1601.06759"
] |
|
1609.03193#0 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | arXiv:1609.03193v2 [cs.LG] 13 Sep 2016 # Wav2Letter: an End-to-End ConvNet-based Speech Recognition System # Ronan Collobert Facebook AI Research, Menlo Park [email protected] # Christian Puhrsch Facebook AI Research, Menlo Park [email protected] # Gabriel Synnaeve Facebook AI Research, New York [email protected] # Abstract This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from raw waveform. | 1609.03193#1 | 1609.03193 | [
"1509.08967"
] |
|
1609.03193#1 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | # Introduction We present an end-to-end system for speech recognition, going from the speech signal (e.g. Mel-Frequency Cepstral Coefficients (MFCC), power spectrum, or raw waveform) to the transcription. The acoustic model is trained using letters (graphemes) directly, which removes the need for an intermediate (human or automatic) phonetic transcription. Indeed, the classical pipeline to build state of the art systems for speech recognition consists in first training an HMM/GMM model to force align the units on which the final acoustic model operates (most often context-dependent phone states). This approach takes its roots in HMM/GMM training [27]. The improvements brought by deep neural networks (DNNs) [14, 10] and convolutional neural networks (CNNs) [24, 25] for acoustic modeling only extend this training pipeline. The current state of the art on Librispeech (the dataset that we used for our evaluations) uses this approach too [18, 20], with an additional step of speaker adaptation [22, 19]. Recently, [23] proposed GMM-free training, but the approach still requires generating a force alignment. An approach that cut ties with the HMM/GMM pipeline (and with force alignment) was to train with a recurrent neural network (RNN) [7] for phoneme transcription. There are now competitive end-to-end approaches with acoustic models topped with RNN layers, as in [8, 13, 21, 1], trained with a sequence criterion [6]. However, these models are computationally expensive, and thus take a long time to train. Compared to classical approaches that need phonetic annotation (often derived from a phonetic dictionary, rules, and generative training), we propose to train the model end-to-end, using graphemes directly. Compared to sequence criterion based approaches that train directly from speech signal to graphemes [13], we propose a simple(r) architecture (23 million parameters for our best model, vs. 100 million parameters in [1]) based on convolutional networks for the acoustic model, topped with a graph transformer network [4], trained with a simpler sequence criterion. | 1609.03193#0 | 1609.03193#2 | 1609.03193 | [
"1509.08967"
] |
1609.03193#2 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Our word error rate on clean speech is slightly better than [8], and slightly worse than [1], especially considering that they train on 12,000 hours of speech while we only train on the 960h available in LibriSpeech's train set. Finally, some of our models are also trained on the raw waveform, as in [15, 16]. The rest of the paper is structured as follows: the next section presents the convolutional networks used for acoustic modeling, along with the automatic segmentation criterion. The following section shows experimental results comparing different features, the criterion, and our current best word error rates on LibriSpeech. | 1609.03193#1 | 1609.03193#3 | 1609.03193 | [
"1509.08967"
] |
1609.03193#3 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | # 2 Architecture Our speech recognition system is a standard convolutional neural network [12] fed with various different features, trained through an alternative to the Connectionist Temporal Classification (CTC) [6], and coupled with a simple beam search decoder. In the following sub-sections, we detail each of these components. # 2.1 Features We consider three types of input features for our model: MFCCs, power spectrum, and raw wave. MFCCs are carefully designed speech-specific features, often found in classical HMM/GMM speech systems [27] because of their dimensionality compression (13 coefficients are often enough to span speech frequencies). Power spectrum features are found in most recent deep learning acoustic modeling setups [1]. The raw wave has been somewhat explored in a few recent works [15, 16]. ConvNets have the advantage of being flexible enough to be used with either of these input feature types. Our acoustic models output letter scores (one score per letter, given a dictionary L). # 2.2 ConvNet Acoustic Model | 1609.03193#2 | 1609.03193#4 | 1609.03193 | [
"1509.08967"
] |
1609.03193#4 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | The acoustic models we considered in this paper are all based on standard 1D convolutional neural networks (ConvNets). ConvNets interleave convolution operations with pointwise non-linearity operations. Often ConvNets also embark pooling layers: this type of layer allows the network to "see" a larger context, without increasing the number of parameters, by locally aggregating the previous convolution operation output. Instead, our networks leverage striding convolutions. Given (x_t)_{t=1...T_x} an input sequence with T_x frames of d_x dimensional vectors, a convolution with kernel width kw, stride dw and output frame size d_y computes the following: y_t^i = b_i + Σ_{j=1}^{d_x} Σ_{k=1}^{kw} w_{i,j,k} x_{dw×(t−1)+k}^j, ∀ 1 ≤ i ≤ d_y, (1) where b ∈ R^{d_y} and w ∈ R^{d_y × d_x × kw} are the parameters of the convolution (to be learned). [Figure 1 shows the raw-wave architecture as a stack of convolutions, listed here from the output layer down to the input layer: CONV kw=1 (2000:40); CONV kw=1 (2000:2000); CONV kw=32 (250:2000); CONV kw=7 (250:250); CONV kw=7 (250:250); CONV kw=48, dw=2 (250:250); CONV kw=250, dw=160 (1:250).] | 1609.03193#3 | 1609.03193#5 | 1609.03193 | [
"1509.08967"
] |
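As a sanity check of Eq. (1) above, the following sketch computes one strided 1D convolution exactly as written (with 0-based time indices, so output frame t reads input frames t·dw ... t·dw + kw - 1). Shapes and random inputs are illustrative only.

```python
import numpy as np

def conv1d(x, w, b, dw):
    """x: (Tx, dx) input frames; w: (dy, dx, kw) kernel; b: (dy,) bias; dw: stride.
    Returns y: (Ty, dy) with y[t, i] = b[i] + sum_{j,k} w[i, j, k] * x[t*dw + k, j]."""
    Tx, dx = x.shape
    dy, _, kw = w.shape
    Ty = (Tx - kw) // dw + 1
    y = np.empty((Ty, dy))
    for t in range(Ty):
        window = x[t * dw : t * dw + kw]            # (kw, dx) slice of the input
        y[t] = b + np.einsum('ijk,kj->i', w, window)
    return y

x = np.random.randn(100, 3)                          # 100 frames of 3-dim features
w = np.random.randn(5, 3, 7)                         # dy=5, dx=3, kw=7
y = conv1d(x, w, np.zeros(5), dw=2)
print(y.shape)                                       # (47, 5)
```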
1609.03193#5 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Pointwise non-linear layers are added after convolutional layers. In our experience, we surprisingly found that using hyperbolic tangents, their piecewise linear counterpart HardTanh (as in [16]), or ReLU units leads to similar results. There are some slight variations between the architectures, depending on the input features. MFCC-based networks need less striding, as standard MFCC filters are applied with large strides on the input raw sequence. With power spectrum-based and raw wave-based networks, we observed that the overall stride of the network was more important than where the convolutions with strides were placed. We thus found it preferable to set the strided convolutions near the first input layers of the network, as it leads to the fastest architectures: with power spectrum features or raw wave, the input sequences are very long and the first convolutions are thus the most expensive ones. Figure 1: Our neural network architecture for raw wave. First two layers are convolutions with strides. Last two layers are convolutions with kw = 1, which are equivalent to fully connected layers. Power spectrum and MFCC based networks do not have the first layer. | 1609.03193#4 | 1609.03193#6 | 1609.03193 | [
"1509.08967"
] |
1609.03193#6 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | The last layer of our convolutional network outputs one score per letter in the letter dictionary (d_y = |L|). Our architecture for raw wave is shown in Figure 1 and is inspired by [16]. The architectures for both power spectrum and MFCC features do not include the first layer. The full network can be seen as a non-linear convolution, with a kernel width of size 31280 and a stride equal to 320; given that the sample rate of our data is 16 kHz, label scores are produced using a window of 1955 ms, with steps of 20 ms. # 2.3 Inferring Segmentation with AutoSegCriterion Most large labeled speech databases provide only a text transcription for each audio file. In a classification framework (and given that our acoustic model produces letter predictions), one would need the segmentation of each letter in the transcription to properly train the model. Unfortunately, manually labeling the segmentation of each letter would be tedious. Several solutions have been explored in the speech community to alleviate this issue: HMM/GMM models use an iterative EM procedure: (i) during the Estimation step, the best segmentation is inferred, according to the current model, by maximizing the joint probability of the letter (or any sub-word unit) transcription and input sequence. (ii) During the Maximization step the model is optimized by minimizing a frame-level criterion, based on the (now fixed) inferred segmentation. This approach is also often used to bootstrap the training of neural network-based acoustic models. Other alternatives have been explored in the context of hybrid HMM/NN systems, such as the MMI criterion [2], which maximizes the mutual information between the acoustic sequence and word sequences, or the Minimum Bayes Risk (MBR) criterion [5]. More recently, standalone neural network architectures have been trained using criteria which jointly infer the segmentation of the transcription while increasing the overall score of the right transcription [6, 17]. The most popular one is certainly the Connectionist Temporal Classification (CTC) criterion, which is at the core of Baidu's Deep Speech architecture [1]. | 1609.03193#5 | 1609.03193#7 | 1609.03193 | [
"1509.08967"
] |
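A quick unit check of the numbers quoted above (overall stride of 320 samples and kernel width of 31280 samples at a 16 kHz sample rate):

```python
sample_rate = 16000          # Hz
stride = 320                 # samples between successive label scores
kernel_width = 31280         # overall kernel width of the stacked convolutions, in samples

print(1000 * stride / sample_rate)        # 20.0   -> one label every 20 ms
print(1000 * kernel_width / sample_rate)  # 1955.0 -> each label sees a 1955 ms window
```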
1609.03193#7 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | CTC assumes that the network outputs probability scores, normalized at the frame level. It considers all possible sequences of letters (or any sub-word units) which can lead to a given transcription. CTC also allows a special "blank" state to be optionally inserted between letters. The rationale behind the blank state is twofold: (i) modeling "garbage" frames which might occur between letters and (ii) identifying the separation between two identical consecutive letters in a transcription. Figure 2a shows an example of the sequences accepted by CTC for a given transcription. In practice, this graph is unfolded as shown in Figure 2b, over the available frames output by the acoustic model. We denote G_ctc(θ, T) an unfolded graph over T frames for a given transcription θ, and π = π_1, . . . , π_T ∈ G_ctc(θ, T) a path in this graph representing a (valid) sequence of letters for this transcription. At each time step t, each node of the graph is assigned the corresponding letter log-probability (that we denote f_t(·)) output by the acoustic model. CTC aims at maximizing the "overall" score of paths in G_ctc(θ, T); for that purpose, it minimizes the Forward score: CTC(θ, T) = −logadd_{π ∈ G_ctc(θ,T)} Σ_{t=1}^{T} f_{π_t}(x), (2) where the "logadd" operation, also often called "log-sum-exp", is defined as logadd(a, b) = log(exp(a) + exp(b)). This overall score can be efficiently computed with the Forward algorithm. To put things in perspective, if one would replace the logadd(·) by a max(·) in (2) (which can then be efficiently computed by the Viterbi algorithm, the counterpart of the Forward algorithm), one would then maximize the score of the best path, according to the model belief. The logadd(·) can be seen as a smooth version of the max(·): paths with similar scores will be attributed the same weight in the overall score (and hence receive the same gradient), and paths with much larger scores will have much more overall weight than paths with low scores. In practice, using the logadd(·) works much better than the max(·). It is also worth noting that maximizing (2) does not diverge, as the acoustic model is assumed to output normalized scores (log-probabilities) f_i(·). | 1609.03193#6 | 1609.03193#8 | 1609.03193 | [
"1509.08967"
] |
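The "logadd" operation and the Forward-style accumulation described above can be sketched generically as follows; this toy version sums path scores over an arbitrary unfolded graph and, for brevity, ignores the start/end constraints and the blank bookkeeping of the real CTC criterion.

```python
import numpy as np

def logadd(a, b):
    """log(exp(a) + exp(b)), computed stably."""
    m = np.maximum(a, b)
    return m + np.log(np.exp(a - m) + np.exp(b - m))

def forward_score(emissions, allowed_prev):
    """Sum (in log-space) the scores of all paths through an unfolded graph.
    emissions: (T, S) log-scores of each graph state at each frame.
    allowed_prev: list where allowed_prev[s] are the states that may precede state s."""
    T, S = emissions.shape
    alpha = emissions[0].copy()
    for t in range(1, T):
        new_alpha = np.full(S, -np.inf)
        for s in range(S):
            for p in allowed_prev[s]:
                new_alpha[s] = logadd(new_alpha[s], alpha[p])
            new_alpha[s] += emissions[t, s]
        alpha = new_alpha
    return np.logaddexp.reduce(alpha)

# Toy graph for the 3-letter transcription "cat" without blanks:
# each letter may stay on itself or move to the next letter.
emissions = np.log(np.random.rand(5, 3))     # 5 frames, 3 states (c, a, t)
allowed_prev = [[0], [0, 1], [1, 2]]
print(forward_score(emissions, allowed_prev))
```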
1609.03193#8 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | In this paper, we explore an alternative to CTC, with three differences: (i) there are no blank labels, (ii) un-normalized scores on the nodes (and possibly un-normalized transition scores on the edges), and (iii) global normalization instead of per-frame normalization. The advantage of (i) is that it produces a much simpler graph (see Figure 3a and Figure 3b). We found that in practice there was no advantage of having a blank class to model the | 1609.03193#7 | 1609.03193#9 | 1609.03193 | [
"1509.08967"
] |
1609.03193#9 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | possible "garbage" frames between letters. Modeling letter repetitions (which is also an important quality of the blank label in CTC) can be easily replaced by repetition character labels (we used two extra labels for two and three repetitions). For example "caterpillar" could be written as "caterpil2ar", where "2" is a label to represent the repetition of the previous letter. Not having blank labels also simplifies the decoder. With (ii) one can easily plug in an external language model, which would insert transition scores on the edges of the graph. This could be particularly useful in future work, if one wanted to model representations more high-level than letters. In that respect, avoiding normalized transitions is important to alleviate the problem of "label bias" [3, 11]. In this work, we limited ourselves to transition scalars, which are learned together with the acoustic model. Figure 2: The CTC criterion graph. (a) Graph which represents all the acceptable sequences of letters (with the blank state denoted "∅") for the transcription "cat". (b) Shows the same graph unfolded over 5 frames. There are no transition scores. At each time step, nodes are assigned a conditional probability output by the neural network acoustic model. | 1609.03193#8 | 1609.03193#10 | 1609.03193 | [
"1509.08967"
] |
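A tiny helper illustrating the repetition labels described above; this is an assumed implementation, with the '2' and '3' labels following the "caterpil2ar" example in the text.

```python
def encode_repetitions(word):
    """Replace runs of identical letters by letter + '2'/'3' labels, e.g. 'caterpillar' -> 'caterpil2ar'."""
    out, i = [], 0
    while i < len(word):
        run = 1
        while i + run < len(word) and word[i + run] == word[i] and run < 3:
            run += 1
        out.append(word[i] + {2: '2', 3: '3'}.get(run, ''))
        i += run
    return ''.join(out)

assert encode_repetitions("caterpillar") == "caterpil2ar"
print(encode_repetitions("hello"), encode_repetitions("committee"))   # hel2o com2it2e2
```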
1609.03193#10 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | The normalization evoked in (iii) is necessary when using un-normalized scores on nodes or edges; it ensures that incorrect transcriptions will have a low confidence. In the following, we name our criterion the "Auto Segmentation Criterion" (ASG). Considering the same notations as for CTC in (2), an unfolded graph G_asg(θ, T) over T frames for a given transcription θ (as in Figure 3b), as well as a fully connected graph G_full(θ, T) over T frames (representing all possible sequences of letters, as in Figure 3c), ASG aims at minimizing: ASG(θ, T) = −logadd_{π ∈ G_asg(θ,T)} Σ_{t=1}^{T} (f_{π_t}(x) + g_{π_{t−1},π_t}(x)) + logadd_{π ∈ G_full(θ,T)} Σ_{t=1}^{T} (f_{π_t}(x) + g_{π_{t−1},π_t}(x)), (3) where g_{i,j}(·) is a transition score model to jump from label i to label j. The left-hand part of (3) promotes sequences of letters leading to the right transcription, and the right-hand part demotes all sequences of letters. As for CTC, these two parts can be efficiently computed with the Forward algorithm. Derivatives with respect to f_i(·) and g_{i,j}(·) can be obtained (the maths are a bit tedious) by applying the chain rule through the Forward recursion. # 2.4 Beam-Search Decoder We wrote our own one-pass decoder, which performs a simple beam-search with beam thresholding, histogram pruning and language model smearing [26]. We kept the decoder as simple as possible (under 1000 lines of C code). We did not implement any sort of model adaptation before decoding, nor any word graph rescoring. Our decoder relies on KenLM [9] for the language modeling part. It also accepts un-normalized acoustic scores (transitions and emissions from the acoustic model) as input. The decoder attempts to maximize the following: | 1609.03193#9 | 1609.03193#11 | 1609.03193 | [
"1509.08967"
] |
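The ASG criterion of Eq. (3) above can be sketched as the difference of two Forward scores, one restricted to the target transcription and one over the fully connected graph, both including the transition scalars g_{i,j}. This is a self-contained toy version that skips some bookkeeping of the real criterion (e.g. repetition labels and any start-of-sequence transition).

```python
import numpy as np

def logsumexp(v):
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def full_forward(emissions, transitions):
    """log-sum over all label sequences of sum_t (f_{pi_t} + g_{pi_{t-1}, pi_t})  (denominator of Eq. 3)."""
    T, L = emissions.shape
    alpha = emissions[0].copy()                       # no transition into the first frame
    for t in range(1, T):
        alpha = np.array([logsumexp(alpha + transitions[:, j]) for j in range(L)]) + emissions[t]
    return logsumexp(alpha)

def target_forward(emissions, transitions, target):
    """Same sum restricted to paths spelling `target`: stay on the current letter or advance by one."""
    T = emissions.shape[0]
    N = len(target)
    alpha = np.full(N, -np.inf)
    alpha[0] = emissions[0, target[0]]
    for t in range(1, T):
        new = np.full(N, -np.inf)
        for s in range(N):
            new[s] = alpha[s] + transitions[target[s], target[s]]          # stay
            if s > 0:
                new[s] = np.logaddexp(new[s], alpha[s - 1] + transitions[target[s - 1], target[s]])  # advance
            new[s] += emissions[t, target[s]]
        alpha = new
    return alpha[N - 1]                               # must end on the last target letter

def asg_loss(emissions, transitions, target):
    return -target_forward(emissions, transitions, target) + full_forward(emissions, transitions)

emissions = np.random.randn(10, 5)                    # 10 frames, 5 letters (unnormalized scores)
transitions = 0.01 * np.random.randn(5, 5)            # learned transition scalars g_{i,j}
print(asg_loss(emissions, transitions, target=[2, 0, 4]))
```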
1609.03193#11 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | L(θ) = logadd_{π ∈ G_asg(θ,T)} Σ_{t=1}^{T} (f_{π_t}(x) + g_{π_{t−1},π_t}(x)) + α log P_lm(θ) + β|θ|, (4) where P_lm(θ) is the probability of the language model given a transcription θ, and α and β are two hyper-parameters which control the weight of the language model and the word insertion penalty, respectively. Figure 3: The ASG criterion graph. (a) Graph which represents all the acceptable sequences of letters for the transcription "cat". (b) Shows the same graph unfolded over 5 frames. (c) Shows the corresponding fully connected graph, which describes all possible sequences of letters; this graph is used for normalization purposes. Un-normalized transition scores are possible on the edges. At each time step, nodes are assigned a conditional un-normalized score, output by the neural network acoustic model. | 1609.03193#10 | 1609.03193#12 | 1609.03193 | [
"1509.08967"
] |
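To illustrate how Eq. (4) combines its terms during decoding, here is a toy scoring function for a complete hypothesis; acoustic_path_score stands for the logadd term of Eq. (4) and lm_logprob for log P_lm(θ), both assumed to come from the acoustic model and KenLM respectively, and the α, β values are made up for the example.

```python
def hypothesis_score(acoustic_path_score, lm_logprob, n_words, alpha, beta):
    """Eq. (4): acoustic score + alpha * log P_lm(theta) + beta * |theta|."""
    return acoustic_path_score + alpha * lm_logprob + beta * n_words

# Toy comparison of two competing hypotheses (illustrative numbers only).
print(hypothesis_score(-120.5, -14.2, 6, alpha=0.8, beta=-1.0))
print(hypothesis_score(-118.9, -19.7, 7, alpha=0.8, beta=-1.0))
```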
1609.03193#12 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | # 3 Experiments We implemented everything using Torch7. The ASG criterion as well as the decoder were implemented in C (and then interfaced into Torch). We consider as benchmark LibriSpeech, a large speech database freely available for download [18]. LibriSpeech comes with its own train, validation and test sets. Except when specified, we used all the available data (about 1000h of audio files) for training and validating our models. We use the original 16 kHz sampling rate. | 1609.03193#11 | 1609.03193#13 | 1609.03193 | [
"1509.08967"
] |
1609.03193#13 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | The vocabulary contains 30 graphemes: the standard English alphabet plus the apostrophe, silence, and two special "repetition" graphemes which encode the duplication (once or twice) of the previous letter (see Section 2.3). The architecture hyper-parameters, as well as the decoder ones, were tuned using the validation set. In the following, we either report letter error rates (LERs) or word error rates (WERs). WERs have been obtained by using our own decoder (see Section 2.4), with the standard 4-gram language model provided with LibriSpeech. MFCC features are computed with 13 coefficients, a 25 ms sliding window and a 10 ms stride. We included first and second order derivatives. Power spectrum features are computed with a 25 ms window, a 10 ms stride, and have 257 components. All features are normalized (mean 0, std 1) per input sequence. | 1609.03193#12 | 1609.03193#14 | 1609.03193 | [
"1509.08967"
] |
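The per-sequence feature normalization mentioned above is a two-liner; normalizing each coefficient over the frames of an utterance is our assumption (the text only says mean 0, std 1 per input sequence), and the small epsilon is added for numerical safety.

```python
import numpy as np

def normalize_per_sequence(features, eps=1e-8):
    """features: (T, d) frames of one utterance -> zero mean, unit variance per sequence (per coefficient, assumed)."""
    return (features - features.mean(axis=0)) / (features.std(axis=0) + eps)

feats = np.random.rand(500, 13) * 10 + 3      # e.g. 500 frames of 13 MFCC coefficients
normed = normalize_per_sequence(feats)
print(normed.mean(axis=0).round(6), normed.std(axis=0).round(6))
```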
1609.03193#14 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | # 3.1 Results Table 1 reports a comparison between CTC and ASG, in terms of LER and speed. Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. We picked the CTC criterion implementation provided by Baidu. Both criteria lead to the same LER. For comparing the speed, we report performance for sequence sizes as reported initially by Baidu, but also for longer sequence sizes, which correspond to our average use | 1609.03193#13 | 1609.03193#15 | 1609.03193 | [
"1509.08967"
] |
1609.03193#15 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Footnotes: 1 http://www.torch.ch; 2 http://www.openslr.org/11; 3 https://github.com/baidu-research/warp-ctc. | 1609.03193#14 | 1609.03193#16 | 1609.03193 | [
"1509.08967"
] |
1609.03193#16 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Table 1: CTC vs ASG. CTC is Baidu's implementation. ASG is implemented on CPU (core in C, threading in Lua). (a) reports performance in LER. Timings (in ms) for small sequences (input frames: 150, letter vocabulary size: 28, transcription size: 40) and long sequences (input frames: 700, letter vocabulary size: 28, transcription size: 200) are reported in (b) and (c) respectively. Timings include both forward and backward passes. | 1609.03193#15 | 1609.03193#17 | 1609.03193 | [
"1509.08967"
] |
1609.03193#17 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | CPU implementations use 8 threads. (a) (b) dev-clean test-clean ASG CTC 10.7 10.4 10.5 10.1 batch size 1 4 8 ASG CPU GPU CPU 2.5 5.9 2.8 6.0 2.8 6.1 CTC 1.9 2.0 2.0 (c) batch size 1 4 8 CTC ASG GPU CPU 16.0 97.9 17.7 99.6 19.2 100.3 CPU 40.9 41.6 41.7 (a) (b) Figure 4: Valid LER (a) and WER (b) vs. training set size (10h, 100h, 200h, 1000h). This compares MFCC-based and power spectrum-based (POW) architectures. AUG experiments include data augmentation. In (b) we provide Baidu Deep Speech 1 and 2 numbers on LibriSpeech, as a comparison [8, 1]. | 1609.03193#16 | 1609.03193#18 | 1609.03193 | [
"1509.08967"
] |
1609.03193#18 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | case. ASG appears faster on long sequences, even though it is running on CPU only. Baidu's GPU CTC implementation seems more aimed at larger vocabularies (e.g. 5000 Chinese characters). We also investigated the impact of the training set size, as well as the effect of a simple data augmentation procedure, where shifts were introduced in the input frames, as well as stretching. For that purpose, we tuned the size of our architectures (given a particular size of the dataset), to avoid over-fi | 1609.03193#17 | 1609.03193#19 | 1609.03193 | [
"1509.08967"
] |
1609.03193#19 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | tting. Figure 4a shows that the augmentation helps for small training set sizes. However, with enough training data, the effect of data augmentation vanishes, and both types of features appear to perform similarly. Figure 4b reports the WER with respect to the available training data size. We observe that we compare very well against Deep Speech 1 & 2, which were trained with much more data [8, 1]. Finally, we report in Table 2 the best results of our system so far, trained on 1000h of speech, for each type of features. The overall stride of the architectures is 320 (see Figure 1), which produces a label every 20 ms. We found that one could squeeze out about 1% in performance by refining the precision of the output. This is efficiently achieved by shifting the input sequence and feeding it to the network | 1609.03193#18 | 1609.03193#20 | 1609.03193 | [
"1509.08967"
] |
1609.03193#20 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Table 2: LER/WER of the best sets of hyper-parameters for each feature type. dev-clean test-clean PS LER WER LER WER LER WER 6.9 6.9 MFCC Raw 9.3 9.1 10.3 10.6 7.2 9.4 10.1 several times. Results in Table 2 were obtained by a single extra shift of 10 ms. Both power spectrum and raw features perform slightly worse than MFCCs. One could expect, however, that with enough data (see Figure 4) the gap would vanish. # 4 Conclusion | 1609.03193#19 | 1609.03193#21 | 1609.03193 | [
"1509.08967"
] |
1609.03193#21 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | We have introduced a simple end-to-end automatic speech recognition system, which combines a standard 1D convolutional neural network, a sequence criterion which can infer the segmentation, and a simple beam-search decoder. The decoding results are competitive on the LibriSpeech corpus with MFCC features (7.2% WER), and promising with power spectrum and raw speech (9.4% WER and 10.1% WER respectively). We showed that our AutoSegCriterion can be faster than CTC [6], and as accurate (table 1). Our approach breaks free from HMM/GMM pre-training and force-alignment, as well as not being as computationally intensive as RNN-based approaches [1] (on average, one LibriSpeech sentence is processed in less than 60ms by our ConvNet, and the decoder runs at 8.6x on a single thread). | 1609.03193#20 | 1609.03193#22 | 1609.03193 | [
"1509.08967"
] |
1609.03193#22 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | # References [1] AMODEI, D., ANUBHAI, R., BATTENBERG, E., CASE, C., CASPER, J., CATANZARO, B., CHEN, J., CHRZANOWSKI, M., COATES, A., DIAMOS, G., ET AL. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595 (2015). [2] BAHL, L. R., BROWN, P. F., DE SOUZA, P. V., AND MERCER, R. L. | 1609.03193#21 | 1609.03193#23 | 1609.03193 | [
"1509.08967"
] |
1609.03193#23 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Maximum mutual information estimation of hidden markov model parameters for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 1986 IEEE International Conference on (1986), IEEE, pp. 49â 52. [3] BOTTOU, L. Une approche theorique de lâ apprentissage connexionniste et applications a la reconnaissance de la parole. PhD thesis, 1991. [4] BOTTOU, L., BENGIO, Y., AND LE CUN, Y. Global training of document processing sys- tems using graph transformer networks. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on (1997), IEEE, pp. 489â | 1609.03193#22 | 1609.03193#24 | 1609.03193 | [
"1509.08967"
] |
1609.03193#24 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | 494. [5] GIBSON, M., AND HAIN, T. Hypothesis spaces for minimum bayes risk training in large vocabulary speech recognition. In Proceedings of INTERSPEECH (2006), IEEE, pp. 2406â - 2409. [6] GRAVES, A., FERNà NDEZ, S., GOMEZ, F., AND SCHMIDHUBER, J. Connectionist temporal classiï¬ cation: labelling unsegmented sequence data with recurrent neural networks. In Proceed- ings of the 23rd international conference on Machine learning (2006), ACM, pp. 369â 376. [7] GRAVES, A., MOHAMED, A.-R., AND HINTON, G. | 1609.03193#23 | 1609.03193#25 | 1609.03193 | [
"1509.08967"
] |
1609.03193#25 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Speech recognition with deep recur- In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE rent neural networks. International Conference on (2013), IEEE, pp. 6645â 6649. [8] HANNUN, A., CASE, C., CASPER, J., CATANZARO, B., DIAMOS, G., ELSEN, E., PRENGER, R., SATHEESH, S., SENGUPTA, S., COATES, A., ET AL. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567 (2014). | 1609.03193#24 | 1609.03193#26 | 1609.03193 | [
"1509.08967"
] |
1609.03193#26 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | [9] HEAFIELD, K., POUZYREVSKY, I., CLARK, J. H., AND KOEHN, P. Scalable modiï¬ ed kneser-ney language model estimation. In ACL (2) (2013), pp. 690â 696. 7 [10] HINTON, G., DENG, L., YU, D., DAHL, G. E., MOHAMED, A.-R., JAITLY, N., SENIOR, A., VANHOUCKE, V., NGUYEN, P., SAINATH, T. N., ET AL. | 1609.03193#25 | 1609.03193#27 | 1609.03193 | [
"1509.08967"
] |
1609.03193#27 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE 29, 6 (2012), 82â 97. [11] LAFFERTY, J., MCCALLUM, A., AND PEREIRA, F. Conditional random ï¬ elds: Probabilistic models for segmenting and labeling sequence data. In Eighteenth International Conference on Machine Learning, ICML (2001). [12] LECUN, Y., AND BENGIO, Y. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361, 10 (1995), 1995. [13] MIAO, Y., GOWAYYED, M., AND METZE, F. | 1609.03193#26 | 1609.03193#28 | 1609.03193 | [
"1509.08967"
] |
1609.03193#28 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding. arXiv preprint arXiv:1507.08240 (2015). [14] MOHAMED, A.-R., DAHL, G. E., AND HINTON, G. Acoustic modeling using deep belief networks. Audio, Speech, and Language Processing, IEEE Transactions on 20, 1 (2012), 14â 22. [15] PALAZ, D., COLLOBERT, R., AND DOSS, M. M. Estimating phoneme class conditional probabilities from raw speech signal using convolutional neural networks. arXiv preprint arXiv:1304.1018 (2013). [16] PALAZ, D., COLLOBERT, R., ET AL. Analysis of cnn-based speech recognition system using raw speech as input. In Proceedings of Interspeech (2015), no. EPFL-CONF-210029. [17] PALAZ, D., MAGIMAI-DOSS, M., AND COLLOBERT, R. Joint phoneme segmentation infer- ence and classiï¬ | 1609.03193#27 | 1609.03193#29 | 1609.03193 | [
"1509.08967"
] |
1609.03193#29 | Wav2Letter: an End-to-End ConvNet-based Speech Recognition System | cation using crfs. In Signal and Information Processing (GlobalSIP), 2014 IEEE Global Conference on (2014), IEEE, pp. 587â 591. [18] PANAYOTOV, V., CHEN, G., POVEY, D., AND KHUDANPUR, S. Librispeech: an asr corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (2015), IEEE, pp. 5206â | 1609.03193#28 | 1609.03193#30 | 1609.03193 | [
"1509.08967"
] |