Dataset schema: id (string, 12-15 chars), title (string, 8-162 chars), content (string, 1-17.6k chars), prechunk_id (string, 0-15 chars), postchunk_id (string, 0-15 chars), arxiv_id (string, 10 chars), references (list, length 1).
1609.03193#30
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
5210. [19] PEDDINTI, V., CHEN, G., MANOHAR, V., KO, T., POVEY, D., AND KHUDANPUR, S. JHU ASpIRE system: Robust LVCSR with TDNNs, i-vector adaptation and RNN-LMs. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (2015). [20] PEDDINTI, V., POVEY, D., AND KHUDANPUR, S. A time delay neural network architecture for efficient modeling of long temporal contexts. In Proceedings of INTERSPEECH (2015). [21] SAON, G., KUO, H.-K. J., RENNIE, S., AND PICHENY, M.
1609.03193#29
1609.03193#31
1609.03193
[ "1509.08967" ]
1609.03193#31
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
The IBM 2015 English conversational telephone speech recognition system. arXiv preprint arXiv:1505.05899 (2015). [22] SAON, G., SOLTAU, H., NAHAMOO, D., AND PICHENY, M. Speaker adaptation of neural network acoustic models using i-vectors. In ASRU (2013), pp. 55–59. [23] SENIOR, A., HEIGOLD, G., BACCHIANI, M., AND LIAO, H.
1609.03193#30
1609.03193#32
1609.03193
[ "1509.08967" ]
1609.03193#32
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
GMM-free DNN training. In Proceedings of ICASSP (2014), pp. 5639–5643. [24] SERCU, T., PUHRSCH, C., KINGSBURY, B., AND LECUN, Y. Very deep multilingual convolutional neural networks for LVCSR. arXiv preprint arXiv:1509.08967 (2015). [25] SOLTAU, H., SAON, G., AND SAINATH, T.
1609.03193#31
1609.03193#33
1609.03193
[ "1509.08967" ]
1609.03193#33
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
N. Joint training of convolutional and non-convolutional neural networks. In ICASSP (2014), pp. 5572–5576. [26] STEINBISS, V., TRAN, B.-H., AND NEY, H. Improvements in beam search. In ICSLP (1994), vol. 94, pp. 2143–2146. [27] WOODLAND, P. C., AND YOUNG, S. J.
1609.03193#32
1609.03193#34
1609.03193
[ "1509.08967" ]
1609.03193#34
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
The HTK tied-state continuous speech recogniser. In Eurospeech (1993).
1609.03193#33
1609.03193
[ "1509.08967" ]
1609.02200#0
Discrete Variational Autoencoders
arXiv:1609.02200v2 [stat.ML] 22 Apr 2017. Published as a conference paper at ICLR 2017. # DISCRETE VARIATIONAL AUTOENCODERS # Jason Tyler Rolfe, D-Wave Systems, Burnaby, BC V5G-4M9, Canada. [email protected] # ABSTRACT
1609.02200#1
1609.02200
[ "1602.08734" ]
1609.02200#1
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. # INTRODUCTION Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classification (Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting.
1609.02200#0
1609.02200#2
1609.02200
[ "1602.08734" ]
1609.02200#2
Discrete Variational Autoencoders
At the same time, it is extremely difficult to directly transform the image of a person to one of a car while remaining on the manifold of natural images. It would be natural to represent the space within each disconnected component with continuous variables, and the selection amongst these components with discrete variables. In contrast, most state-of-the-art probabilistic models use exclusively discrete variables (as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lauritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014)) or exclusively continuous variables (as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014)).[1] Moreover, it would be desirable to apply the efficient variational autoencoder framework to models with discrete values, but this has proven difficult, since backpropagation through discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015). We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations. Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them discrete variational autoencoders (discrete VAEs). [1] Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables.
1609.02200#1
1609.02200#3
1609.02200
[ "1602.08734" ]
1609.02200#3
Discrete Variational Autoencoders
1.1 VARIATIONAL AUTOENCODERS ARE INCOMPATIBLE WITH DISCRETE DISTRIBUTIONS Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log-likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby, 1993). In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound $\mathcal{L}(x, \theta, \phi)$ on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO; Hinton & Zemel, 1994): $\mathcal{L}(x, \theta, \phi) = \log p(x|\theta) - \mathrm{KL}[q(z|x, \phi) \,||\, p(z|x, \theta)]$, (1) where $q(z|x, \phi)$ approximates the posterior $p(z|x, \theta)$. We denote the observed random variables by $x$, the latent random variables by $z$, the parameters of the generative model by $\theta$, and the parameters of the approximating posterior by $\phi$. The variational autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups the evidence lower bound of Equation 1 as: $\mathcal{L}(x, \theta, \phi) = -\mathrm{KL}[q(z|x, \phi) \,||\, p(z|\theta)] + \mathbb{E}_{q(z|x,\phi)}[\log p(x|z, \theta)]$, (2) where the first term is the KL term and the second is the autoencoding term. In many cases of practical interest, such as Gaussian $q(z|x)$ and $p(z)$, the KL term of Equation 2 can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior $q(z|x)$ can be drawn using a differentiable, deterministic function $f(x, \phi, \rho)$
1609.02200#2
1609.02200#4
1609.02200
[ "1602.08734" ]
1609.02200#4
Discrete Variational Autoencoders
of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables $\rho \sim \mathcal{D}$. For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, $\mathcal{N}(m(x, \phi), v(x, \phi))$, using $f(x, \phi, \rho) = m(x, \phi) + \sqrt{v(x, \phi)} \cdot \rho$, where $\rho \sim \mathcal{N}(0, 1)$. The gradient of the autoencoding term can then be approximated as $\frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x,\phi)}[\log p(x|z, \theta)] \approx \frac{1}{N} \sum_{\rho \sim \mathcal{D}} \frac{\partial}{\partial \phi} \log p(x \,|\, f(x, \rho, \phi), \theta)$. (3)
1609.02200#3
1609.02200#5
1609.02200
[ "1602.08734" ]
1609.02200#5
Discrete Variational Autoencoders
The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we find that an analog of Equation 3 holds (Equation 4). Specifically, $\mathcal{D}_i$ is the uniform distribution between 0 and 1, and $f(x) = F^{-1}(x)$, where $F$ is the conditional-marginal cumulative distribution function (CDF) defined by: $F_i(x) = \int_{x_i' = -\infty}^{x} p(x_i' \,|\, x_1, \ldots, x_{i-1})$. (5) However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable.
1609.02200#4
1609.02200#6
1609.02200
[ "1602.08734" ]
1609.02200#6
Discrete Variational Autoencoders
A formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky, 1986): $p(z) = \frac{1}{Z_p} e^{z^\top W z + b^\top z}$, where $z \in \{0, 1\}^n$, (6) $Z_p$ is the partition function of $p(z)$, and the lateral connection matrix $W$ is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval [0, 1]. The domain of the inverse CDF is thus also a proper subset of [0, 1], and its derivative is not defined, as required in Equations 3 and 4.[2] [2] This problem remains even if we use the quantile function, $F_i^{-1}(\rho) = \inf\{z' \in \mathbb{R} : \sum_{z \le z'} p(z) \ge \rho\}$, the derivative of which is either zero or infinite if $p$ is a discrete distribution.
1609.02200#5
1609.02200#7
1609.02200
[ "1602.08734" ]
1609.02200#7
Discrete Variational Autoencoders
In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierarchical probabilistic model consisting of an RBM,[3] followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables. 1.2 RELATED WORK Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016), Hamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed, 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the posterior distribution. Ladder variational autoencoders (Sønderby et al., 2016) increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling (Du et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions (Johnson et al., 2016). It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B; a generic sketch appears below. Prior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset.
1609.02200#6
1609.02200#8
1609.02200
[ "1602.08734" ]
1609.02200#8
Discrete Variational Autoencoders
Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units. The generative model underlying the discrete variational autoencoder resembles a deep belief network (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltzmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer j receives connections from all previous layers i < j, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound. 2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING CONTINUOUS LATENT VARIABLES When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (defined by Equation 5 and Appendix A) by augmenting the latent representation with a set of continuous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4.
1609.02200#7
1609.02200#9
1609.02200
[ "1602.08734" ]
1609.02200#9
Discrete Variational Autoencoders
We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter [3] Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model. In contrast to a traditional RBM, there is no distinction between the "visible" units and the "hidden" units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome "fully hidden bipartite Boltzmann machine."
1609.02200#8
1609.02200#10
1609.02200
[ "1602.08734" ]
1609.02200#10
Discrete Variational Autoencoders
(a) Approximating posterior q(ζ, z|x) (b) Prior p(x, ζ, z) (c) Autoencoding term. Figure 1: Graphical models of the smoothed approximating posterior (a) and prior (b), and the network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent variables $\zeta_i$ are smoothed analogs of discrete latent variables $z_i$, and insulate $z$ from the observed variables $x$ in the prior (b). This facilitates the marginalization of the discrete $z$ in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input $\rho$.
1609.02200#9
1609.02200#11
1609.02200
[ "1602.08734" ]
1609.02200#11
Discrete Variational Autoencoders
the fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted as adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a small minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the prior. The conceptual motivation for this approach is discussed in Appendix C. Specifically, as shown in Figure 1a, we augment the latent representation in the approximating posterior with continuous random variables $\zeta$,[4] conditioned on the discrete latent variables $z$ of the RBM: $q(\zeta, z | x, \phi) = r(\zeta | z) \cdot q(z | x, \phi)$, where $r(\zeta | z) = \prod_i r(\zeta_i | z_i)$. The support of $r(\zeta | z)$ for all values of $z$ must be connected, so the marginal distribution $q(\zeta | x, \phi) = \sum_z r(\zeta | z) \cdot q(z | x, \phi)$ has a constant, connected support so long as $0 < q(z | x, \phi) < 1$. We further require that $r(\zeta | z)$ is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of $q(\zeta | x, \phi)$ is differentiable in Equations 3 and 4, as we discuss in Appendix A. As shown in Figure 1b, we correspondingly augment the prior with $\zeta$: $p(\zeta, z | \theta) = r(\zeta | z) \cdot p(z | \theta)$, where $r(\zeta | z)$ is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on $\zeta$: $p(x | \zeta, z, \theta) = p(x | \zeta, \theta)$. (7) The smoothing distribution $r(\zeta | z)$ transforms the model into a continuous function of the distribution over $z$, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient.
1609.02200#10
1609.02200#12
1609.02200
[ "1602.08734" ]
1609.02200#12
Discrete Variational Autoencoders
Given this expansion, we can simplify Equations 3 and 4 by dropping the dependence on $z$ and applying Equation 16 of Appendix A, which generalizes Equation 3: $\frac{\partial}{\partial \phi} \mathbb{E}_{q(\zeta, z | x, \phi)}[\log p(x | \zeta, z, \theta)] \approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p\left(x \,\middle|\, F^{-1}_{q(\zeta | x, \phi)}(\rho), \theta\right)$. (8) [4] We always use a variant of z for latent variables. This is zeta, or Greek z. The discrete latent variables z can conveniently be thought of as English z.
1609.02200#11
1609.02200#13
1609.02200
[ "1602.08734" ]
1609.02200#13
Discrete Variational Autoencoders
If the approximating posterior is factorial, then each $F_i$ is an independent CDF, without conditioning or marginalization. As we shall demonstrate in Section 2.1, $F^{-1}_{q(\zeta | x, \phi)}(\rho)$ is a function of $q(z = 1 | x, \phi)$, where $q(z = 1 | x, \phi)$ is a deterministic probability value calculated by a parameterized function, such as a neural network. The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input $x$ is passed into a deterministic feedforward network $q(z = 1 | x, \phi)$, for which the final nonlinearity is the logistic function. Its output $q$, along with an independent random variable $\rho \sim U[0, 1]$, is passed into the deterministic function $F^{-1}_{q(\zeta | x, \phi)}(\rho)$ to produce a sample of $\zeta$. This $\zeta$, along with the original input $x$, is finally passed to $\log p(x | \zeta, \theta)$. The expectation of this log probability with respect to $\rho$ is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the input and the independent $\rho$, this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient. # 2.1 SPIKE-AND-EXPONENTIAL SMOOTHING TRANSFORMATION As a concrete example consistent with sparse coding, consider the spike-and-exponential transformation from binary $z$ to continuous $\zeta$: $r(\zeta_i | z_i = 0) = \begin{cases} \infty, & \text{if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases}$ with $F_{r(\zeta_i | z_i = 0)}(\zeta') = 1$; and $r(\zeta_i | z_i = 1) = \begin{cases} \frac{\beta e^{\beta \zeta_i}}{e^{\beta} - 1}, & \text{if } 0 \le \zeta_i \le 1 \\ 0, & \text{otherwise} \end{cases}$ with $F_{r(\zeta_i | z_i = 1)}(\zeta') = \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1}$, where $F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d\zeta$ is the CDF of probability distribution $p$ in the domain [0, 1].
1609.02200#12
1609.02200#14
1609.02200
[ "1602.08734" ]
1609.02200#14
Discrete Variational Autoencoders
This transformation from $z_i$ to $\zeta_i$ is invertible: $\zeta_i = 0 \Leftrightarrow z_i = 0$, and $\zeta_i > 0 \Leftrightarrow z_i = 1$ almost surely.[5] We can now find the CDF for $q(\zeta | x, \phi)$ as a function of $q(z = 1 | x, \phi)$ in the domain $(0, 1]$, marginalizing out the discrete $z$: $F_{q(\zeta | x, \phi)}(\zeta') = (1 - q(z = 1 | x, \phi)) \cdot F_{r(\zeta | z = 0)}(\zeta') + q(z = 1 | x, \phi) \cdot F_{r(\zeta | z = 1)}(\zeta') = q(z = 1 | x, \phi) \cdot \left(\frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} - 1\right) + 1$. To evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8, we must invert the conditional-marginal CDF $F_{q(\zeta | x, \phi)}$: $F^{-1}_{q(\zeta | x, \phi)}(\rho) = \begin{cases} \frac{1}{\beta} \cdot \log\left[\left(\frac{\rho + q - 1}{q}\right) \cdot (e^{\beta} - 1) + 1\right], & \text{if } \rho > 1 - q \\ 0, & \text{otherwise} \end{cases}$ where we use the substitution $q = q(z = 1 | x, \phi)$ to simplify notation. For all values of the independent random variable $\rho \sim U[0, 1]$, the function $F^{-1}_{q(\zeta | x, \phi)}(\rho)$ rectifies the input $q(z = 1 | x, \phi)$ if $q \le 1 - \rho$, in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is also quasi-sigmoidal, in that $F^{-1}$ is increasing but concave-down if $q > 1 - \rho$. The effect of $\rho$ on $F^{-1}$ is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c.
1609.02200#13
1609.02200#15
1609.02200
[ "1602.08734" ]
1609.02200#15
Discrete Variational Autoencoders
Other expansions to the continuous space are possible. In Appendix D.1, we consider the case where both $r(\zeta_i | z_i = 0)$ and $r(\zeta_i | z_i = 1)$ are linear functions of $\zeta$; in Appendix D.2, we develop a spike-and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous $\zeta$ is directly dependent on the input $x$ in addition to the discrete $z$. [5] In the limit $\beta \to \infty$, $\zeta_i = z_i$ almost surely, and the continuous variables $\zeta$ can effectively be removed from the model. This trick can be used after training with finite $\beta$ to produce a model without smoothing variables $\zeta$.
1609.02200#14
1609.02200#16
1609.02200
[ "1602.08734" ]
1609.02200#16
Discrete Variational Autoencoders
(a) Spike-and-exp, β ∈ {1, 3, 5} (b) ReLU with dropout (c) ReLU with batch norm. Figure 2: Spike-and-exponential smoothing transformation for several values of $\rho$; β = 1 (dotted), β = 3 (solid), and β = 5 (dashed) (a). Rectified linear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization, with magnitude 0.3 (dashed), 0.3 (dotted), or 0 (solid blue), before a rectified linear unit (c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity $F^{-1}_{q(\zeta | x, \phi)}(\rho)$ from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c). # 3 ACCOMMODATING EXPLAINING-AWAY WITH A HIERARCHICAL APPROXIMATING POSTERIOR When a probabilistic model is defined in terms of a prior distribution $p(z)$ and a conditional distribution $p(x | z)$, the observation of $x$ often induces strong correlations in the posterior $p(z | x)$ due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)). To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior $q(z | x)$ over the discrete latent variables. Specifically, we divide the latent variables $z$ of the RBM into disjoint groups, $z_1, \ldots, z_k$,[6] and define the approximating posterior via a directed acyclic graphical model over these groups:
1609.02200#15
1609.02200#17
1609.02200
[ "1602.08734" ]
1609.02200#17
Discrete Variational Autoencoders
$q(z_1, \zeta_1, \ldots, z_k, \zeta_k | x, \phi) = \prod_{1 \le j \le k} r(\zeta_j | z_j) \cdot q(z_j | \zeta_{i<j}, x, \phi)$, where $q(z_j | \zeta_{i<j}, x, \phi) = \frac{e^{g_j(\zeta_{i<j}, x, \phi)^\top z_j}}{\prod_{z_\iota \in z_j}\left(1 + e^{g_{j_\iota}(\zeta_{i<j}, x, \phi)}\right)}$, (10) $z_j \in \{0, 1\}^n$, and $g_j(\zeta_{i<j}, x, \phi)$ is a parameterized function of the inputs and preceding $\zeta_i$, such as a neural network. The corresponding graphical model is depicted in Figure 3a, and the integration of such hierarchical approximating posteriors into the reparameterization trick is discussed in Appendix A. If each group $z_j$ contains a single variable, this dependence structure is analogous to that of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution. However, the dependence of $z_j$ on the preceding discrete variables $z_{i<j}$ is always mediated by the continuous variables $\zeta_{i<j}$. This hierarchical approximating posterior does not affect the form of the autoencoding term in Equation 8, except to increase the depth of the autoencoder, as shown in Figure 3b. The deterministic probability value $q(z_j = 1 | \zeta_{i<j}, x, \phi)$ of Equation 10 is parameterized, generally by a neural network, in a manner analogous to Section 2. However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer $j$ of the autoencoder, input $x$ and all previous $\zeta_{i<j}$ are passed into the network computing $q(z = 1 | \zeta_{i<j}, x, \phi)$. Its output $q_j$, along with an [6] The continuous latent variables $\zeta$ are divided into complementary disjoint groups $\zeta_1, \ldots, \zeta_k$.
1609.02200#16
1609.02200#18
1609.02200
[ "1602.08734" ]
1609.02200#18
Discrete Variational Autoencoders
(a) Hierarchical approximating posterior q(ζ, z|x) (b) Hierarchical ELBO autoencoding term. Figure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables $z_j$ only depend on the previous $z_{i<j}$ through their smoothed analogs $\zeta_{i<j}$. The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input $\rho$.
1609.02200#17
1609.02200#19
1609.02200
[ "1602.08734" ]
1609.02200#19
Discrete Variational Autoencoders
independent random variable $\rho_j \sim U[0, 1]$, is passed into the deterministic function $F^{-1}_{q(\zeta_j | \zeta_{i<j}, x, \phi)}(\rho_j)$ to produce a sample of $\zeta_j$. Once all $\zeta_j$ have been recursively computed, the full $\zeta$ along with the original input $x$ is finally passed to $\log p(x | \zeta, \theta)$. The expectation of this log probability with respect to $\rho$ is again the autoencoding term of the VAE formalism, as in Equation 2; a sketch of this recursive pass appears below. In Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using: $\frac{\partial}{\partial \theta} \mathrm{KL}[q \,||\, p] = \mathbb{E}_{q(\zeta, z | x, \phi)}\!\left[\frac{\partial E_p(z, \theta)}{\partial \theta}\right] - \mathbb{E}_{p(z | \theta)}\!\left[\frac{\partial E_p(z, \theta)}{\partial \theta}\right]$ (11) together with a corresponding low-variance estimator of the gradient of the KL term with respect to the parameters $\phi$ of the approximating posterior (Equation 12), whose form is derived in Appendix F.
1609.02200#18
1609.02200#20
1609.02200
[ "1602.08734" ]
1609.02200#20
Discrete Variational Autoencoders
In particular, Equation 12 is substantially lower variance than the naive approach to calculating $\frac{\partial}{\partial \phi} \mathrm{KL}[q \,||\, p]$. # 4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES We can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus on continuous variables, which have proven to be powerful in generative adversarial networks (Goodfellow et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complements the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects. Specifically, we augment the latent representation with continuous random variables $\mathfrak{z}$,[7] and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphical models. We use the same autoregressive variable order for the approximating posterior as for the [7] We always use a variant of z for latent variables. This is Fraktur z, or German z.
1609.02200#19
1609.02200#21
1609.02200
[ "1602.08734" ]
1609.02200#21
Discrete Variational Autoencoders
(a) Approximating posterior with continuous latent variables q(z, ζ, z|x) (b) Prior with continuous latent variables p(x, z, ζ, z). Figure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1b respectively. The continuous latent variables $\mathfrak{z}$ build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables $z$, which can represent the discrete types of objects in the image. prior, as in DRAW (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015), the deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Sønderby et al., 2016).
1609.02200#20
1609.02200#22
1609.02200
[ "1602.08734" ]
1609.02200#22
Discrete Variational Autoencoders
We discuss the motivation for this ordering in Appendix G. The directed graphical models of the approximating posterior and prior are defined by: $q(\mathfrak{z}_0, \ldots, \mathfrak{z}_n | x, \phi) = \prod_{0 \le m \le n} q(\mathfrak{z}_m | \mathfrak{z}_{l<m}, x, \phi)$ and $p(\mathfrak{z}_0, \ldots, \mathfrak{z}_n | \theta) = \prod_{0 \le m \le n} p(\mathfrak{z}_m | \mathfrak{z}_{l<m}, \theta)$. (13) The full set of latent variables associated with the RBM is now denoted by $\mathfrak{z}_0 = \{z_1, \zeta_1, \ldots, z_k, \zeta_k\}$. However, the conditional distributions in Equation 13 only depend on the continuous $\zeta_j$. Each $\mathfrak{z}_{m \ge 1}$ denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model. The ELBO decomposes as: $\mathcal{L}(x, \theta, \phi) = \mathbb{E}_{q(\mathfrak{z} | x, \phi)}[\log p(x | \mathfrak{z}, \theta)] - \sum_m \mathbb{E}_{q(\mathfrak{z}_{l<m} | x, \phi)}\left[\mathrm{KL}\left[q(\mathfrak{z}_m | \mathfrak{z}_{l<m}, x, \phi) \,||\, p(\mathfrak{z}_m | \mathfrak{z}_{l<m}, \theta)\right]\right]$. (14) If both $q(\mathfrak{z}_m | \mathfrak{z}_{l<m}, x, \phi)$ and $p(\mathfrak{z}_m | \mathfrak{z}_{l<m}, \theta)$ are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the $q(\mathfrak{z}_{l<m} | x, \phi)$ using the traditional reparameterization trick, described in Section 1.1. # 5 RESULTS Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approximating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution $r(\zeta | z)$ discussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014; Rezende et al., 2014), we define all approximating posteriors $q$ to be explicit functions of $x$, with parameters $\phi$ shared between all inputs $x$.
1609.02200#21
1609.02200#23
1609.02200
[ "1602.08734" ]
1609.02200#23
Discrete Variational Autoencoders
For distributions over discrete variables, the neural networks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous $\mathfrak{z}$, the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neural networks parameterizing the distributions over $z$, $\mathfrak{z}$, and $x$ consists of a linear transformation,
1609.02200#22
1609.02200#24
1609.02200
[ "1602.08734" ]
1609.02200#24
Discrete Variational Autoencoders
The hierarchical structure of Section 4 is very powerful, and overï¬ ts without strong regularization of the prior, as shown in Appendix H. In contrast, powerful approximating posteriors do not induce signiï¬ cant overï¬ tting. To address this problem, we use conditional distributions over the input ζ, θ) without any deterministic hidden layers, except on Omniglot. Moreover, all other neural p(x | networks in the prior have only one hidden layer, the size of which is carefully controlled. On statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers of the hierarchy over z. We present the details of the architecture in Appendix H. We train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Om- niglot8 (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST, we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood9 of these models, computed using the method of (Burda et al., 2016) with 104 importance-weighted samples, are listed in Table 1. The reported log-likelihoods for dis- crete VAEs are the average of 16 runs; the standard deviation of these log-likelihoods are 0.08, 0.04, 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhou- ettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and 0.66. MNIST (dynamic binarization) LL MNIST (static binarization) ELBO LL DBN IWAE Ladder VAE Discrete VAE -84.55 -82.90 -81.74 -80.15 -88.30 -87.40 -85.10 -85.51 -83.67
1609.02200#23
1609.02200#25
1609.02200
[ "1602.08734" ]
1609.02200#25
Discrete Variational Autoencoders
Omniglot (LL): IWAE -103.38; Ladder VAE -102.11; RBM -100.46; DBN -100.45; Discrete VAE -97.43. Caltech-101 Silhouettes (LL): IWAE -117.2; RWS SBN -113.3; RBM -107.8; NAIS NADE -100.0; Discrete VAE -97.6. Table 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with 10^4 importance-weighted samples (Burda et al., 2016). For comparison, we also report performance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I. We further analyze the performance of discrete VAEs on dynamically binarized MNIST: the largest of the datasets, requiring the least regularization. Figure 5 shows the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens of [8] We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT. [9] The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1.
1609.02200#24
1609.02200#26
1609.02200
[ "1602.08734" ]
1609.02200#26
Discrete Variational Autoencoders
[Figure 5 image: grid of generated MNIST digit samples; caption follows.]
1609.02200#25
1609.02200#27
1609.02200
[ "1602.08734" ]
1609.02200#27
Discrete Variational Autoencoders
Figure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (or occasionally two) digit IDs, despite being trained in a wholly unsupervised manner. (a) Block Gibbs iterations (b) Num RBM units (c) RBM approx post layers. Figure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a), the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better performance, but the network is robust to the size of the RBM (b). thousands of iterations of single-temperature block Gibbs sampling is required to mix between the modes.
1609.02200#26
1609.02200#28
1609.02200
[ "1602.08734" ]
1609.02200#28
Discrete Variational Autoencoders
We present corresponding figures for the other datasets, and results on simplified architectures, in Appendix J. The large mixing time of block Gibbs sampling on the RBM suggests that training may be constrained by sample quality. Figure 6a shows that performance[10] improves as we increase the number of iterations of block Gibbs sampling performed per minibatch on the RBM prior $p(z | \theta)$ in Equation 11. This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986). [10] All models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency.
1609.02200#27
1609.02200#29
1609.02200
[ "1602.08734" ]
1609.02200#29
Discrete Variational Autoencoders
Commensurate with the small number of intrinsic classes, a moderately sized RBM yields the best performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like Imagenet, which has many classes and complicated relationships between the elements of various classes. The benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the approximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approximating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting.
1609.02200#28
1609.02200#30
1609.02200
[ "1602.08734" ]
1609.02200#30
Discrete Variational Autoencoders
# 6 CONCLUSION Datasets consisting of a discrete set of classes are naturally modeled using discrete latent variables. However, it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013). We avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively in the continuous space, marginalizing out the original discrete latent representation. At the same time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does not contribute to the KL term. To increase representational power, we make the approximating posterior over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variables below them. The resulting discrete variational autoencoder achieves state-of-the-art performance on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
1609.02200#29
1609.02200#31
1609.02200
[ "1602.08734" ]
1609.02200#31
Discrete Variational Autoencoders
# ACKNOWLEDGEMENTS Zhengbing Bian, Fabian Chudak, Arash Vahdat helped run experiments. Jack Raymond provided the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and one of our anonymous reviewers for identifying the problem addressed in Appendix D.3. # REFERENCES
1609.02200#30
1609.02200#32
1609.02200
[ "1602.08734" ]
1609.02200#32
Discrete Variational Autoencoders
Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pp. 3084–3092, 2013. Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Charles H. Bennett. Efficient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2):245–268, 1976. Jörg Bornschein and Yoshua Bengio.
1609.02200#31
1609.02200#33
1609.02200
[ "1602.08734" ]
1609.02200#33
Discrete Variational Autoencoders
Reweighted wake-sleep. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2751, 2015. Jörg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional Helmholtz machines. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2511–2519, 2016. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 10–
1609.02200#32
1609.02200#34
1609.02200
[ "1602.08734" ]
1609.02200#34
Discrete Variational Autoencoders
21, 2016. Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015. Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Proceedings of the International Conference on Learning Representations, arXiv:1509.00519, 2016.
1609.02200#33
1609.02200#35
1609.02200
[ "1602.08734" ]
1609.02200#35
Discrete Variational Autoencoders
Steve Cheng. Differentiation under the integral sign with weak derivatives. Technical report, Working paper, 2006. KyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 25(3):805–831, 2013. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio.
1609.02200#34
1609.02200#36
1609.02200
[ "1602.08734" ]
1609.02200#36
Discrete Variational Autoencoders
A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980–2988, 2015. Aaron C. Courville, James S. Bergstra, and Yoshua Bengio. Unsupervised models of images by spike-and-slab RBMs. In Proceedings of the 28th International Conference on Machine Learning, pp. 1145–1152, 2011. Paul Dagum and Michael Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard.
1609.02200#35
1609.02200#37
1609.02200
[ "1602.08734" ]
1609.02200#37
Discrete Variational Autoencoders
Artificial Intelligence, 60(1):141–153, 1993. Chao Du, Jun Zhu, and Bo Zhang. Learning deep generative models with doubly stochastic MCMC. arXiv preprint arXiv:1506.04557, 2015. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
1609.02200#36
1609.02200#38
1609.02200
[ "1602.08734" ]
1609.02200#38
Discrete Variational Autoencoders
Alex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016. Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, pp. 1242–1250, 2014. Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW:
1609.02200#37
1609.02200#39
1609.02200
[ "1602.08734" ]
1609.02200#39
Discrete Variational Autoencoders
A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1462–1471, 2015. Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006. Geoffrey E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems 6, pp. 3–10. Morgan Kaufmann Publishers, Inc., 1994.
1609.02200#38
1609.02200#40
1609.02200
[ "1602.08734" ]
1609.02200#40
Discrete Variational Autoencoders
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pp. 448–456, 2015. Matthew Johnson, David K. Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pp. 2946–2954, 2016.
1609.02200#39
1609.02200#41
1609.02200
[ "1602.08734" ]
1609.02200#41
Discrete Variational Autoencoders
Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, arXiv:1412.6980, 2015. Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
1609.02200#40
1609.02200#42
1609.02200
[ "1602.08734" ]
1609.02200#42
Discrete Variational Autoencoders
Durk P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations, arXiv:1312.6114, 2014. Brenden M. Lake, Ruslan R. Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems, pp. 2526–2534, 2013.
1609.02200#41
1609.02200#43
1609.02200
[ "1602.08734" ]
1609.02200#43
Discrete Variational Autoencoders
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Yingzhen Li and Richard E. Turner.
1609.02200#42
1609.02200#44
1609.02200
[ "1602.08734" ]
1609.02200#44
Discrete Variational Autoencoders
Variational inference with Rényi divergence. arXiv preprint arXiv:1602.02311, 2016. Philip M. Long and Rocco Servedio. Restricted Boltzmann machines are hard to approximately evaluate or simulate. In Proceedings of the 27th International Conference on Machine Learning, pp. 703–710, 2010. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. Benjamin M. Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning.
1609.02200#43
1609.02200#45
1609.02200
[ "1602.08734" ]
1609.02200#45
Discrete Variational Autoencoders
In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 509–516, 2010. Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. Proceedings of the 31st International Conference on Machine Learning, pp. 1791–1799, 2014. Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2188–2196, 2016.
1609.02200#44
1609.02200#46
1609.02200
[ "1602.08734" ]
1609.02200#46
Discrete Variational Autoencoders
Iain Murray and Ruslan R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, pp. 1137–1144, 2009. Radford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71–113, 1992. Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive
1609.02200#45
1609.02200#47
1609.02200
[ "1602.08734" ]
1609.02200#47
Discrete Variational Autoencoders
field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. John Paisley, David M. Blei, and Michael I. Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, 2012. Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
1609.02200#46
1609.02200#48
1609.02200
[ "1602.08734" ]
1609.02200#48
Discrete Variational Autoencoders
Tapani Raiko, Harri Valpola, Markus Harva, and Juha Karhunen. Building blocks for variational Bayesian learning of latent variable models. Journal of Machine Learning Research, 8:155–201, 2007. Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2989, 2015. Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554, 2015.
1609.02200#47
1609.02200#49
1609.02200
[ "1602.08734" ]
1609.02200#49
Discrete Variational Autoencoders
Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1530–1538, 2015. Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014. Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines.
1609.02200#48
1609.02200#50
1609.02200
[ "1602.08734" ]
1609.02200#50
Discrete Variational Autoencoders
In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, pp. 448–455, 2009. Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 872–879. ACM, 2008. Tim Salimans. A structured variational auto-encoder for learning deep hierarchies of sparse features. arXiv preprint arXiv:1602.08734, 2016. Tim Salimans, Diederik P. Kingma, Max Welling, et al.
1609.02200#49
1609.02200#51
1609.02200
[ "1602.08734" ]
1609.02200#51
Discrete Variational Autoencoders
Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1218–1226, 2015. Michael R. Shirts and John D. Chodera. Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics, 129(12), 2008. Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, volume 1, chapter 6, pp. 194–281. MIT Press, Cambridge, 1986.
1609.02200#50
1609.02200#52
1609.02200
[ "1602.08734" ]
1609.02200#52
Discrete Variational Autoencoders
Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems, pp. 3738–3746, 2016. David J. Spiegelhalter and Steffen L. Lauritzen. Sequential updating of conditional probabilities on directed graphical structures. Networks, 20(5):579–605, 1990. Nitish Srivastava, Geoffrey E.
1609.02200#51
1609.02200#53
1609.02200
[ "1602.08734" ]
1609.02200#53
Discrete Variational Autoencoders
Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014. Robert H. Swendsen and Jian-Sheng Wang. Replica Monte Carlo simulation of spin-glasses. Physical Review Letters, 57(21):2607, 1986.
1609.02200#52
1609.02200#54
1609.02200
[ "1602.08734" ]
1609.02200#54
Discrete Variational Autoencoders
Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064–1071. ACM, 2008. Dustin Tran, Rajesh Ranganath, and David M. Blei. The variational Gaussian process. Proceedings of the International Conference on Learning Representations, arXiv:1511.06499, 2016. Ronald J. Williams.
1609.02200#53
1609.02200#55
1609.02200
[ "1602.08734" ]
1609.02200#55
Discrete Variational Autoencoders
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992. A MULTIVARIATE VAES BASED ON THE CUMULATIVE DISTRIBUTION FUNCTION The reparameterization trick is always possible if the cumulative distribution function (CDF) of $q(z | x, \phi)$ is invertible, and the inverse CDF is differentiable, as noted in Kingma & Welling (2014). However, for multivariate distributions, the CDF is defi
1609.02200#54
1609.02200#56
1609.02200
[ "1602.08734" ]
1609.02200#56
Discrete Variational Autoencoders
ned by: F(x) = os D(ath +2): a =â 00 w!,=â 00 â â â â 14 Published as a conference paper at ICLR 2017 # n R The multivariate CDF maps In place of the multivariate CDF, consider the set of conditional-marginal CDFs deï¬ ned by:12 â p(ai|ai,...,@i-1)- (15) â â That is, Fj(x) is the CDF of xj, conditioned on all xi such that i < h, and marginalized over all xk such the j < k. The range of each Fj is [0, 1], so F maps the domain of the original [0, 1]n. To invert F, we need only invert each conditional-marginal CDF in turn, distribution to Ï 1 conditioning xj = F â 1(Ï ). These inverses exist so long as 1 = F â j j â the conditional-marginal probabilities are everywhere nonzero. It is not problematic to effectively deï¬ ne F â (Ï ) based upon xi<j, rather than Ï i<j, since by induction we can uniquely determine j xi<j given Ï i<j. Using integration-by-substition, we can compute the gradient of the ELBO by taking the expectation 1 of a uniform random variable Ï on [0, 1]n, and using Fâ x,Ï ) to transform Ï back to the element q(z | of z on which p(x z, θ) is conditioned. To perform integration-by-substitution, we will require the | determinant of the Jacobian of Fâ The derivative of a CDF is the probability density function at the selected point, and Fj is a simple CDF when we hold ï¬ xed the variables xi<j on which it is conditioned, so using the inverse function theorem we ï¬ nd: p (23 =F; '()\ti<j,) where p is a vector, and FY is 55, or is triangular, since the earlier conditional- j marginal CDFs Fâ ; are independent of the value of the later x, 7 < k, over which they are marginal- ized.
Moreover, the inverse conditional-marginal CDFs have the same dependence structure as F, so the Jacobian of F^{−1} is also triangular. The determinant of a triangular matrix is the product of the diagonal elements.

11For instance, for the bivariate uniform distribution on the interval [0, 1]^2, the CDF is F(x, y) = x · y for 0 ≤ x, y ≤ 1, so for any 0 ≤ c ≤ 1 and c ≤ x ≤ 1, y = c/x yields F(x, y) = c. Clearly, many different pairs (x, y) yield each possible value c of F(x, y).

12The set of marginal CDFs, used to define copulas, is invertible. However, it does not generally map the original distribution to a simple joint distribution, such as a multivariate uniform distribution, as required for variational autoencoders.
In Equation 16, q( z = F^{−1}_{q(z|x,φ)}(ρ) | x, φ ) does not cancel out: the determinant of the inverse Jacobian is instead ∏_j [ q( z_j = F_j^{−1}(ρ) | z_{i<j}, x, φ ) ]^{−1}, which differs from [ q( z = F^{−1}_{q(z|x,φ)}(ρ) | x, φ ) ]^{−1} if q is not factorial. As a result, we do not recover the variational autoencoder formulation of Equation 16.
Using these facts to perform a multivariate integration-by-substitution, we obtain:

E_{q(z|x,φ)}[ log p(x | z, θ) ] = ∫_z q(z | x, φ) · log p(x | z, θ)
  = ∫_{ρ=0}^{1} [ ∏_j q( z_j = F_j^{−1}(ρ) | z_{i<j}, x, φ ) ] / [ ∏_j q( z_j = F_j^{−1}(ρ) | z_{i<j}, x, φ ) ] · log p( x | F^{−1}_{q(z|x,φ)}(ρ), θ )
  = ∫_{ρ=0}^{1} log p( x | F^{−1}_{q(z|x,φ)}(ρ), θ ).    (16)

The variable ρ has dimensionality equal to that of z; 0 is the vector of all 0s; 1 is the vector of all 1s. The gradient with respect to φ is then easy to approximate stochastically:

∂/∂φ E_{q(z|x,φ)}[ log p(x | z, θ) ] ≈ (1/N) Σ_{ρ ∼ U(0,1)^n} ∂/∂φ log p( x | F^{−1}_{q(z|x,φ)}(ρ), θ ).    (17)

Note that if q(z | x, φ) is factorial (i.e., the product of independent distributions in each dimension z_j), then the conditional-marginal CDFs F_j are just the marginal CDFs in each direction. However, even if q(z | x, φ) is not factorial, Equation 17 still holds so long as F is nevertheless defined to be the set of conditional-marginal CDFs of Equation 15.
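As a concrete illustration of Equations 15–17, the following NumPy/SciPy sketch (ours, not from the paper) draws samples from a toy non-factorial bivariate Gaussian posterior by inverting its conditional-marginal CDFs in sequence; the particular means, scales, and correlation coefficient are arbitrary choices for the example.

```python
import numpy as np
from scipy.stats import norm

# Toy bivariate Gaussian "posterior": z1 ~ N(mu1, s1^2), z2 | z1 ~ N(mu2 + a*(z1 - mu1), s2^2).
mu1, s1, mu2, a, s2 = 0.5, 1.0, -0.3, 0.8, 0.7

def inverse_conditional_marginal_cdfs(rho):
    """Map rho in [0, 1]^2 to z by inverting each conditional-marginal CDF in turn (Eq. 15)."""
    z1 = norm.ppf(rho[..., 0], loc=mu1, scale=s1)                    # F_1^{-1}(rho_1)
    z2 = norm.ppf(rho[..., 1], loc=mu2 + a * (z1 - mu1), scale=s2)   # F_2^{-1}(rho_2), conditioned on z1
    return np.stack([z1, z2], axis=-1)

rho = np.random.rand(100000, 2)             # rho ~ U[0, 1]^n
z = inverse_conditional_marginal_cdfs(rho)  # samples distributed according to q(z)
# A Monte Carlo estimate of E_q[log p(x|z)] averages log p(x|z) over these z; gradients with
# respect to the posterior parameters flow through the inverse CDFs, as in Equation 17.
print(z.mean(axis=0), np.cov(z.T))
```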
# B THE DIFFICULTY OF ESTIMATING GRADIENTS OF THE ELBO WITH REINFORCE

It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires computationally tractable samples, and admits both discrete and continuous latent variables. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Bengio et al., 2013; Mnih & Rezende, 2016):

∂/∂φ E_{q(z|x,φ)}[ log p(x | z, θ) ] = E_{q(z|x,φ)}[ ( log p(x | z, θ) − B(x) ) · ∂/∂φ log q(z | x, φ) ]
  ≈ (1/N) Σ_{z ∼ q(z|x,φ)} ( log p(x | z, θ) − B(x) ) · ∂/∂φ log q(z | x, φ),    (18)
where B(x) is a (possibly input-dependent) baseline, which does not affect the gradient, but can reduce the variance of a stochastic estimate of the expectation.

In REINFORCE, ∂/∂φ E_{q(z|x,φ)}[ log p(x | z, θ) ] is effectively estimated by something akin to a finite difference approximation to the derivative. The autoencoding term depends on the conditional log-likelihood log p(x | z, θ) through the approximating posterior q(z | x, φ), which determines the value of z at which p(x | z, θ) is evaluated. However, the conditional log-likelihood is never differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the conditional log-likelihood is evaluated at many different points z ∼ q(z | x, φ), and a weighted sum of these values is used to approximate the gradient, just like in the finite difference approximation.

Equation 18 of REINFORCE captures much less information about p(x | z, θ) per sample than Equation 3 of the variational autoencoder, which actively makes use of the gradient. In particular, the change of p(x | z, θ) in some direction d can only affect the REINFORCE gradient estimate if a sample is taken with a component in direction d. In a D-dimensional latent space, at least D samples are required to capture the variation of p(x | z, θ) in all directions; fewer samples span a smaller subspace.
Since the latent representation commonly consists of dozens of variables, the REINFORCE gradient estimate can be much less efficient than one that makes direct use of the gradient of p(x | z, θ). Moreover, we will show in Section 5 that, when the gradient is calculated efficiently, hundreds of latent variables can be used effectively.
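To make the variance gap concrete, here is a small NumPy experiment (our illustration, not an experiment from the paper) comparing the score-function (REINFORCE) estimator and the reparameterized estimator of the same gradient on a one-dimensional Gaussian; both are unbiased, but their per-sample variances differ dramatically.

```python
import numpy as np

# Estimate d/d_mu E_{z ~ N(mu, 1)}[ -(z - 3)^2 ] two ways.
rng = np.random.default_rng(0)
mu, n = 0.0, 1000

def f(z):                     # stand-in for log p(x | z, theta)
    return -(z - 3.0) ** 2

eps = rng.standard_normal(n)
z = mu + eps                                  # reparameterization: z = mu + eps
reinforce = f(z) * (z - mu)                   # f(z) * d/d_mu log N(z; mu, 1)
reparam = -2.0 * (z - 3.0)                    # df/dz * dz/d_mu, with dz/d_mu = 1

print("true gradient:", -2.0 * (mu - 3.0))
print("REINFORCE    : mean %.2f, std %.2f" % (reinforce.mean(), reinforce.std()))
print("reparam      : mean %.2f, std %.2f" % (reparam.mean(), reparam.std()))
```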
# C AUGMENTING DISCRETE LATENT VARIABLES WITH CONTINUOUS LATENT VARIABLES

Intuitively, variational autoencoders break the encoder13 distribution into "packets" of probability of infinitesimal but equal mass, within which the value of the latent variables is approximately constant. These packets correspond to a region r_i ≤ ρ_i ≤ r_i + δ for all i in Equation 16, and the expectation is taken over these packets. There are more packets in regions of high probability, so high-probability values are more likely to be selected. More rigorously, F_{q(z|x,φ)}(ζ) maps intervals of high probability to larger spans of 0 ≤ ρ ≤ 1, so a randomly selected ρ ∼ U[0, 1] is more likely to be mapped to a high-probability point by F^{−1}_{q(z|x,φ)}(ρ).

As the parameters of the encoder are changed, the location of a packet can move, while its mass is held constant. That is, ζ = F^{−1}_{q(z|x,φ)}(ρ) is a function of φ, whereas the probability mass associated with a region of ρ-space is constant by definition.
So long as F^{−1}_{q(z|x,φ)} exists and is differentiable, a small change in φ will correspond to a small change in the location of each packet. This allows us to use the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space.

In contrast, REINFORCE (Equation 18) breaks the latent representation into segments of infinitesimal but equal volume; e.g., z_i ≤ z'_i ≤ z_i + δ for all i (Williams, 1992; Mnih & Gregor, 2014; Bengio et al., 2013). The latent variables are also approximately constant within these segments, but the probability mass varies between them. Specifically, the probability mass of the segment z ≤ z' ≤ z + δ is proportional to q(z | x, φ).
Once a segment is selected in the latent space, its location is independent of the encoder and decoder. In particular, the gradient of the loss function does not depend on the gradient of the decoder with respect to position in the latent space, since this position is fixed. Only the probability mass assigned to the segment is relevant.

Although variational autoencoders can make use of the additional gradient information from the decoder, the gradient estimate is only low-variance so long as the motion of most probability packets has a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g., the encoder produces a Gaussian with low variance, or the spike-and-exponential distribution of Section 2.1), or if the movements of far-separated packets have a similar effect on the total loss (e.g., the decoder is roughly linear).

Nevertheless, Equation 17 of the VAE can be understood in analogy to dropout (Srivastava et al., 2014) or standout (Ba & Frey, 2013) regularization. Like dropout and standout, F^{−1}_{q(z|x,φ)}(ρ) is an element-wise stochastic nonlinearity applied to a hidden layer.
Since F^{−1}_{q(z|x,φ)}(ρ) selects a point in the probability distribution, it rarely selects an improbable point. Like standout, the distribution of the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and-Gaussian distribution of Section E.1 and let the standard deviation σ go to zero.

However, variational autoencoders cannot be used directly with discrete latent representations, since changing the parameters of a discrete encoder can only move probability mass between the allowed discrete values, which are far apart. If we follow a probability packet as we change the encoder parameters, it either remains in place, or jumps a large distance. As a result, the vast majority of probability packets are unaffected by small changes to the parameters of the encoder. Even if we are lucky enough to select a packet that jumps between the discrete values of the latent representation, the gradient of the decoder cannot be used to accurately estimate the change in the loss function, since the gradient only captures the effect of very small movements of the probability packet.

13Since the approximating posterior q(z | x, φ) maps each input to a distribution over the latent space, it is sometimes called the encoder. Correspondingly, since the conditional likelihood p(x | z, θ) maps each configuration of the latent variables to a distribution over the input space, it is called the decoder.
To use discrete latent representations in the variational autoencoder framework, we must first transform to a continuous latent space, within which probability packets move smoothly. That is, we must compute Equation 17 over a different distribution than the original posterior distribution.
Surprisingly, we need not sacrifice the original discrete latent space, with its associated approximating posterior. Rather, we extend the encoder q(z | x, φ) with a transformation to a continuous, auxiliary latent representation ζ, and correspondingly make the decoder a function of this new continuous representation. By extending both the encoder and the prior in the same way, we avoid affecting the remaining KL divergence in Equation 2.14

The gradient is defined everywhere if we require that each point in the original latent space map to nonzero probability over the entire auxiliary continuous space. This ensures that, if the probability of some point in the original latent space increases from zero to a nonzero value, no probability packet needs to jump a large distance to cover the resulting new region in the auxiliary continuous space. Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a function of their main argument, and thus are invertible.

If we ignore the cases where some discrete latent variable has probability 0 or 1, we need only require that, for every pair of points in the original latent space, the associated regions of nonzero probability in the auxiliary continuous space overlap. This ensures that probability packets can move continuously as the parameters φ of the encoder, q(z | x, φ), change, redistributing weight amongst the associated regions of the auxiliary continuous space.

# D ALTERNATIVE TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS

The spike-and-exponential transformation from discrete latent variables z to continuous latent variables ζ presented in Section 2.1 is by no means the only one possible. Here, we develop a collection of alternative transformations.
# D.1 MIXTURE OF RAMPS

As another concrete example, we consider a case where both r(ζ_i | z_i = 0) and r(ζ_i | z_i = 1) are linear functions of ζ_i:

r(ζ_i | z_i = 0) = 2 · (1 − ζ_i) if 0 ≤ ζ_i ≤ 1, and 0 otherwise;    F_{r(ζ_i|z_i=0)}(ζ') = 2ζ' − ζ'²
r(ζ_i | z_i = 1) = 2 · ζ_i if 0 ≤ ζ_i ≤ 1, and 0 otherwise;          F_{r(ζ_i|z_i=1)}(ζ') = ζ'²

where F_p(ζ') = ∫_{−∞}^{ζ'} p(ζ) · dζ is the CDF of probability distribution p in the domain [0, 1]. The CDF for q(ζ | x, φ) as a function of q(z = 1 | x, φ) is:

F_{q(ζ|x,φ)}(ζ') = ( 1 − q(z = 1 | x, φ) ) · ( 2ζ' − ζ'² ) + q(z = 1 | x, φ) · ζ'²
               = 2 · q(z = 1 | x, φ) · ( ζ'² − ζ' ) + 2ζ' − ζ'².    (19)
14Rather than extend the encoder and the prior, we cannot simply prepend the transformation to continuous space to the decoder, since this does not change the space of the probability packets.

We can calculate F^{−1}_{q(ζ|x,φ)} explicitly, using the substitutions F_{q(ζ|x,φ)} → ρ, q(z = 1 | x, φ) → q, and ζ' → ζ in Equation 19 to simplify notation:

ρ = 2·q·(ζ² − ζ) + 2ζ − ζ²
0 = (2q − 1)·ζ² + 2·(1 − q)·ζ − ρ
ζ = [ (q − 1) ± √( (1 − q)² + (2q − 1)·ρ ) ] / (2q − 1)

F^{−1}_{q(ζ|x,φ)}(ρ) has the desired range [0, 1] if we choose

F^{−1}_{q(ζ|x,φ)}(ρ) = [ (q − 1) + √( (1 − q)² + (2q − 1)·ρ ) ] / (2q − 1)    (20)

when q ≠ 1/2, and F^{−1}_{q(ζ|x,φ)}(ρ) = ρ when q = 1/2. We plot F^{−1}_{q(ζ|x,φ)}(ρ) as a function of q for various values of ρ in Figure 7.
If p < 0.5, Fou is concave-up; if p > 0.5, F~! is concave-down; if p ~ 0.5, F~! is sigmoid. In no case is F~! extremely flat, so it does not kill gradients. In contrast, the sigmoid probability of z inevitably flattens. # D.2 SPIKE-AND-SLAB We can also use the spike-and-slab transformation, which is consistent with sparse coding and proven in other successful generative models (Courville et al., 2011): ee Oe oor) if¢; =0 nh r(Gilzi = 0) = {o otherwise Fy(¢\2:=0)(¢') = 1 4, fl, if0<G<1 nGls=1)= {9 Npemie Fygiaay(C) = Gig = where F,(¢â ) = f° â wc p(¢) -d¢ is the cumulative tion pin he lomein (0, 1]. The CDF for g(¢|a, dζ is the cumulative distribution function (CDF) of probability distribu- p(ζ) · â â @) as a function of g(z = 1|x, 4) is: Frejev=oy (6) + a(2 = Wor, 6)» +1. Facclx,0)(6) = 1 a(z = Me, 6)» Frejev=oy (6) + a(2 = Wor, 6)» Fegijevaay (0) = q(z=1\2,¢)-(¢/-1) +1.
â = q(z = 1 | â 19 Published as a conference paper at ICLR 2017 1 We can calculate F â q(ζ x, Ï ) â q to simplify notation: #) explicitly, using the substitution g(z = p-l 7 eli, 1 q + 1, â 0, if Ï â â ¥ otherwise 1 1 F â q(ζ x,Ï )(Ï ) = | q 1 We plot F â q(ζ x,Ï )(Ï ) as a function of q for various values of Ï in Figure 8. | 0.8 0.6 )(p) 0.4 -1 aC|a.e 0.2 0 i i i i 0 02 04 06 08 q(z = 1|2,¢@) a # Figure 8: Inverse CDF of the spike-and-slab transformation for Ï â ¬ {0.2, 0.5, 0.8} â { } # D.3 ENGINEERING EFFECTIVE SMOOTHING TRANSFORMATIONS If the smoothing transformation is not chosen appropriately, the contribution of low-probability regions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse function theorem, we ï¬
nd: (a) OF OF (a) F(F(p)) + = =F "(p) =0 06 06 F-1(p) Oz F-1(p) 00 a4, OF (2) - 30° (p) = ~ 90 |.â where z = F'~1(p). Consider the case where r(¢;|z; = 0) and r(¢;|z; = 1) are unimodal, but have little overlap. For instance, both distributions might be Gaussian, with means that are many standard deviations apart. For values of ¢; between the two modes, F(¢;) ~ q(zi = O|x,¢@), assuming without loss of generality that the mode corresponding to z; = 0 occurs at a smaller value of ¢; than that corresponding to z; = 1. As a result, x = 1 between the two modes, and or ~] A even if r(¢;) ~& 0. In this case, the stochastic estimates of the gradient in equation 8, which depend upon a have large variance. These high-variance gradient estimates arise because r(¢;|z; = 0) and r(¢;|z; = 1) are too well separated, and the resulting smoothing transformation is too sharp. Such disjoint smoothing trans- formations are analogous to a sigmoid transfer function o(c - x), where o is the logistic function and c â
oo. The smoothing provided by the continuous random variables ¢ is only effective if there is a region of meaningful overlap between r(¢|z = 0) and r(¢\z = 1). In particular, Y., (Glzi = 0) +r(Gilzi = 1) > 0 for all ¢; between the modes of r(¢;|zi = 0) and r(Gi]z; = 1), so p(z) remains moderate in equation 21. In the spike-and-exponential distribution described in Section 2.1, this overlap can be ensured by fixing or bounding £. # E TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS THAT DEPEND UPON THE INPUT It is not necessary to deï¬ ne the transformation from discrete to continuous latent variables in the z), to be independent of the input x. In the true posterior distribution, approximating posterior, r(ζ |
20 (21) Published as a conference paper at ICLR 2017 p(ζ p(ζ little as a function of x, since z, x) | z) only if z already captures most of the information about x and p(ζ â | z, x) changes | p(ζ z) = | x p(ζ, x z) = | x p(ζ z, x) | · p(x z). | This is implausible if the number of discrete latent variables is much smaller than the entropy of the input data distribution.
To address this, we can deï¬ ne: q(ζ, z x, Ï ) = q(z | θ) = p(ζ p(ζ, z | x, Ï ) | z) | q(ζ | θ) | · p(z · z, x, Ï ) | This leads to an evidence lower bound that resembles that of Equation 2, but adds an extra term: LV AE(x, θ, Ï ) = log p(x = log p(x 6) = log (2/0) â KL [q(z,¢|2.¢)|lp(z, Cx, 0] = log p(2|4) â KL [q(¢|2,2, 6) - (ele, 9)||p(Cl2.@,8) plete. 4)) _ _ a, [plalg. 8) -pC|z.8) - Plzl8) =X [aca0) a(ele. 6) los | CaS by ace) = Ey(¢\2,2,6)-q(z|x,¢) Log p(x|¢, )] â KL [q(zIa, ¢)||p(218)] â Yi alele, 6) KL [a(¢|z, 2, 9)||p(¢\2)] The extension to hierarchical approximating posteriors proceeds as in sections 3 and 4. If both g(¢|z, x,¢) and p(¢|z) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. However, while the gra- dients of this KL divergence are easy to calculate when conditioned on z, the gradients with respect of q(z|x, @) in the new term seem to force us into a REINFORCE-like approach (c.f. Equation 18): log q(z\z,9) YEO) fa(lz.2,9)lv(Cl=)] log q(z\z,9) YEO) KL fa(lz.2,9)lv(Cl=)] = Buta [RL[a(l2.2.9)|p(Cl2)]- BE Oo z
(23) The reward signal is now KL [q(ζ z, θ), but the effect on the | variance is the same, likely negating the advantages of the variational autoencoder in the rest of the loss function. However, whereas REINFORCE is high-variance because it samples over the expectation, we can perform the expectation in Equation 23 analytically, without injecting any additional variance. Specifically, if q(z|a,@) and q(¢|z,a,) are factorial, with q(¢;|zi,7,@) only dependent on z;, then KL [q(¢|z, x, )||p(¢|z)] decomposes into a sum of the KL divergences over each variable, as does Steg qle.6) The expectation of all terms in the resulting product of sums is zero except those of the form E [KL {gil |pi] - legs) , due to the identity explained in Equation 27. We then use the reparameterization trick to eliminate all hierarchical layers before the current one, and marginalize over each z;. As a result, we can compute the term of Equation 23 by backpropagating KL [q(ζ p(ζ z = 1)] p(ζ z = 1, x, Ï | â x, Ï ). This is especially simple if q(ζi| | z = 0, x, Ï ) | KL [q(ζ z = 0, x, Ï ) | zi, x, Ï ) = p(ζi| z = 0)] | || | || into q(z KL [q(ζ p(ζ zi) when zi = 0, since then z = 0)] = 0. | || # E.1 SPIKE-AND-GAUSSIAN zi, x, Ï ) to be a separate Gaussian for both values of the binary zi. However, it We might wish q(ζi| is difï¬ cult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture of a delta spike and a Gaussian, for which the CDF can inverted piecewise: 0,
0, if¢; <0 q(Gi\zi = 0, 2,4) = 5(G) Fa (é:\2:=0,0,6)(G) = H(Gi) = {i otherwise zi =1,2,6) =N (ug i (2, 102 (x, ¢ 2,=1,0,0) (Gi i +t Gi = Mail, 6) a(cil 1,2, ) (Hq alt, 6), 05 :(,)) Fa(eler=t.e.o) (Gi) 3 f at( Via,,:(@, 6) 21 (22) .
Published as a conference paper at ICLR 2017 where µq(x, Ï ) and Ï q(x, Ï ) are functions of x and Ï . We use the substitutions q(zi = 1 | µq,i(x, Ï ) is similarly parameterized. We can now ï¬ nd the CDF for q(ζ q: x, Ï ) x, Ï ) as a function of q(z = 1 | | â Fa(gje.o) (Gi) = (1 = a) - (Gi) Gi = Hayi l+erf (Sat )| Gi +5 Since zi = 0 makes no contribution to the CDF until ζi = 0, the value of Ï at which ζi = 0 is Ï step i = qi 2 1 + erf µq,i â â 2Ï q,i # so: qi + V20q,-erf + (% _ 1) , if pi < pre? oye it piâ < pi < pl"? + (1a) Hq + V 204, erf | (248 + 1) , otherwise Gradients are always evaluated for ï¬ xed choices of Ï , and gradients are never taken with respect to Ï
. As a result, expectations with respect to Ï are invariant to permutations of Ï . Furthermore, 2p: _ | 2 -D) Gi Gi +1 # where pi, = # use qi). We can thus shift the delta spike to the beginning of the range of p;, and 9, ifp) <1-â q â 1 ( 2(pi-1) : Hq i + V20q;- ert â A +1), otherwise p; + (1 â # qi â G=
All parameters of the multivariate Gaussians should be trainable functions of x, and independent of q. The new term in Equation 22 is: SY alele, 4) KL [a(Clz,«, 8)|le(Cl2)] = z Se ala = Ua, 6) KL [a(Gilzi = 1,2, )loGlzi = 0) + (1â 4q(z = 1,6) - KL fa(¢il2: = 0,2, 6)|Ip(Gilz« = 0)] x, Ï )) q(zi = 1 KL [q(ζi| | â zi = 0, θ), and KL [q(ζi| zi = 0, x, Ï ) = p(ζi| p,i and Ï 2 q,i, is p(ζi| || zi = 0, x, Ï ) zi = 0)] p(ζi| || If zi = 0, then q(ζi| zi = 0, θ)] = 0 as in Section 2. The KL divergence between two multivariate Gaussians with diagonal covariance matrices, with means µp,i, µq,i, and covariances Ï
2 Ï 2 q,i + (µq,i â Ï 2 2 p,i · zi = 1, x, Ï ) x, Ï ), we thus need to backpropagate KL [q(ζi| To train q(zi = 1 | 1 5) p(ζi| || zi = 1)] into it. Finally, â KL[q || â µq,i â KL[q || â Ï q,i p] p] = = µq,i â Ï 2 p,i â 1 Ï q,i µp,i + Ï q,i Ï 2 p,i 22
Published as a conference paper at ICLR 2017 # so Hq ~ Epi / (a) , a q(2\z, 9) + Dias KL [q||p] = 9(z: = 1x, 4) - o, 1 , (a) , Ooi Di ale, @) - 5 â KL [ally] = az = Me, 9) - (-; +3 qi qi yi z ) For p, it is not useful to make the mean values of ζ adjustable for each value of z, since this is redundant with the parameterization of the decoder. With ï¬ xed means, we could still parameterize the variance, but to maintain correspondence with the standard VAE, we choose the variance to be one. # F COMPUTING THE GRADIENT OF KL [q(ζ, z x, Ï ) | # p(ζ, z # F COMPUTING THE GRADIENT OF KL [q(¢, z|x, ¢)||p(¢, z|4)] || θ)] | The KL term of the ELBO (Equation 2) is not signiï¬ cantly affected by the introduction of additional z) for both the approximat- continuous latent variables ζ, so long as we use the same expansion r(ζ | ing posterior and the prior: KL [allel = ~/{ I -L/ {0 Thej<e r(Glzi) - (2lGi<j,2) (2) - Thej<n r(Gil23) Thej<e steer) (2) : (G25) + W(zilG<j,2) | + log | 1<j<k r(Gjlz5) > a(25\Gicg,) } + log (24) 1<j<k The gradient of Equation 24 with respect to the parameters θ of the prior, p(z timated stochastically using samples from the approximating posterior, q(ζ, z prior, p(z | â â 2 x1 tale] = â So ale 2l0.4) - PEO â Lae y- BL) 00 OE,(z,) OE,(z,4) â
E4y(21|2,9) [ [Eataiccnae) 6 om + Epc2|a) 30 (25) ζi<k, x, Ï ) can be performed analytically; all other expec- The ï¬ nal expectation with respect to q(zk| tations require samples from the approximating posterior. Similarly, for the prior, we must sample from the RBM, although Rao-Blackwellization can be used to marginalize half of the units. # F.1 GRADIENT OF THE ENTROPY WITH RESPECT TO Ï In contrast, the gradient of the KL term with respect to the parameters of the approximating posterior is severely complicated by a nonfactorial approximating posterior. We break KL [q||p| into two terms, the negative entropy Vc qlog q, and the cross-entropy â Vac qlog p, and compute their gradients separately.
23 Published as a conference paper at ICLR 2017 We can regroup the negative entropy term of the KL divergence so as to use the reparameterization trick to backpropagate through |]; -; ¢(zj|G<j, 2): i<j q(zj| â H() = ff TL Glen) atesleecs.2) | tos] TL alesis) 2 %S \i<j<k 1<j<k ->/ [[Gilz)- az5lGi<j,2) | SP log q(zil¢i<j,2) z j j ->y/ [] Giles): a(zilGn<i. 2) log q(zj\Ci<j,2) je °S Visi = SE ces2ccjle0) | > az ilbi<j,2) * log a(zi|Gi<j,2) Fi 23 â LE Pics Ya (zj|Pi<j,) - log q(zj|pi<j, 2) 25 â Ï i<j, x) is where indices i and j denote hierarchical groups of variables. The probability q(zj| evaluated analytically, whereas all variables zi<j and ζi<j are implicitly sampled stochastically via Ï i<j. We wish to take the gradient of H(q) in Equation 26. Using the identity: â (a) (2) (a) ale kin -Ee(Me)-k(E)-0 â H(q) for any constant c, we can eliminate the gradient of log qj # Ï i<j in â Ï , and obtain: for any constant c, we can eliminate the gradient of log qjp,., in â oa) and obtain: â | (a) SHC = LE E (g504 (2j|Pi<j,t 2) sloral=s[ei<ist) Moreover, we can eliminate any log-partition function in log q(zj| to Equation 27.15 By repeating this argument one more time, we can break â factorial components.16 If zi â { reduces to:
Og. Og aC = LE a VY al 2) (a: Oe -> (atad-a *)) (g.- %) ej Og; 5 = Den [FEW 0 lat = a=] j where ι and zι correspond to single variables within the hierarchical groups denoted by j. In Ten- sorFlow, it might be simpler to write: aT fe) 0q; (2; =1 5 H(q) = Ep. iG dai : 0¢ J Oo = =c $s >. G@=d: Su ce: = ZS 3 1: ba: TLjvi G = 0. z q = 0, where c is the log partition function of q(zj|Ï i<j, x). PS ce: = =c $s >. 4 = 0, where c is the log partition function of q(z;|pi<j, ©). # oa The 16 ZS 3 1: G@=d: Su Ili q;, so the q;4; marginalize out of oa The qj When multiplied by log qi. When 1 ba: TLjvi qj is multiplied by one of the log q;4:, the sum over z; can be taken inside the coe and again Bp oe, G = 0. 24 (26)
Published as a conference paper at ICLR 2017 # F.2 GRADIENT OF THE CROSS-ENTROPY The gradient of the cross-entropy with respect to the parameters Ï of the approximating posterior does not depend on the partition function of the prior # Zp, since: â log q â Ï (6) (6) (6) 0 ~ 9g 2218 Du gt Bet 359 8% dat # Ep by Equations 6 and 27, so we are left with the gradient of the average energy Ep. The remaining cross-entropy term is
Soa: Ep =-E, [2'-W-z+b"- 2]. 2 # â z analytically, since zi â { · We can handle the term b' - z analytically, since z; â ¬ {0,1}, and 0, 1 , and } EÏ [q(z = 1)] . =b" # EÏ The approximating posterior q is continuous, with nonzero derivative, so the reparameterization trick can be applied to backpropagate gradients: (a) 0¢ E, [bt : 2] =b! -E, Fue = | : In contrast, each element of the sum 2) Wee SOW 25-2 ij depends upon variables that are not usually in the same hierarchical level, so in general E, [Wij212)] A WijE, [zi] - Ep [2)]. term into Ep [Wij2i2%j] = Wij Epc: [21 Epes: [2], We might decompose this term into EÏ [Wijzizj] = Wij · where without loss of generality zi is in an earlier hierarchical layer than zj; however, it is not clear how to take the derivative of zi, since it is a discontinuous function of Ï k â ¤ F.3 NAIVE APPROACH The naive approach would be to take the gradient of the expectation using the gradient of log- probabilities over all variables:
(a) ; (a) aa" [Wij 2i2;] = Ey [Waxes : 30 08 i . (a) = Eq, aay. |Wigz3 D> 5g OB Itch (28) k , 1 OdK|t<k = Eg aii, wus > Tuten : 6 : # â qk|l<k â Ï , we can drop out terms involving only zi<k and zj<k that occur hierarchically before k, For since those terms can be pulled out of the expectation over qk, and we can apply Equation 27. However, for terms involving zi>k or zj>k that occur hierarchically after k, the expected value of zi or zj depends upon the chosen value of zk. The gradient calculation in Equation 28 is an instance of the REINFORCE algorithm (Equation 18). Moreover, the variance of the estimate is proportional to the number of terms (to the extent that the terms are independent). The number of terms contributing to each gradient grows quadrati- cally with number of units in the RBM. We can introduce a baseline, as in NVIL (Mnih & Gregor, 2014):
O Ey | (Wizz; â c(x)) - a6 loga| ; but this approximation is still high-variance. 25 Published as a conference paper at ICLR 2017 F.4 DECOMPOSITION OF â â Ï Wijzizj VIA THE CHAIN RULE When using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of sec- tions 2.1 D.2, and E.1, we can decompose the gradient of E [Wijzizj] using the chain rule. Previ- ously, we have considered z to be a function of Ï and Ï . We can instead formulate z as a function of q(z = 1) and Ï , where q(z = 1) is itself a function of Ï and Ï . Speciï¬ cally, 0 ifp;<l-qg(a=)=a(a=0 alla.) ={) Menge, MDH) 09) â qj (zj =1) Using the chain rule, â =j ï¬ xed, even â Ï though they all depend on the common variables Ï and parameters Ï . We use the chain rule to differentiate with respect to q(z = 1) since it allows us to pull part of the integral over Ï inside the derivative with respect to Ï
. In the sequel, we sometimes write q in place of q(z = 1) to minimize notational clutter. Expanding the desired gradient using the reparameterization trick and the chain rule, we ï¬ nd: a] (a) age Wizizd] = 55 Eo Wiss] 06 OWij212 Ok (Zk eas, eS (30) We can change the order of integration (via the expectation) and differentiation since Wijzizj| â ¤ | Wij < â for all Ï and bounded Ï
(Cheng, 2006). Although z(q, Ï ) is a step function, and its derivative is a delta function, the integral (corresponding to the expectation with respect to Ï ) of its derivative is ï¬ nite. Rather than dealing with generalized functions directly, we apply the deï¬ nition of the derivative, and push through the matching integral to recover a ï¬ nite quantity. For simplicity, we pull the sum over & out of the expectation in Equation 30, and consider each summand independently. From Equation 29, we see that z; is only a function of q;, so all terms in the sum over k in Equation 30 vanish except k = i and k = j. Without loss of generality, we consider the term k = 2; the term k = j is symmetric. Applying the definition of the gradient to one of the summands, and then analytically taking the expectation with respect to p;, we obtain: OW 2G) (Gp) Ogi(zi = 1) | OW 2G) (Gp) Ogi(zi = 1) dail = 1) a6 _ wm ig 2i(G + 810) 254 + 84.) â Way = (4,0) (G0) Oai(zi = Y) ©? | éqi(i=1) 30 qi 06 = Ep, lim a; Wij 1-2j(¢,0) â Wij -0- 25(G p) _ Ogi(zi = 1) 5q:(2i=1) 0 bd: de pi=ai(zi=0) 7 , 0G: (%i = = Boys fw, 2; (4, p)- 36 sweco] # EÏ The third line follows from Equation 29, since zi(q + δqi, Ï ) differs from zi(q, Ï ) only in the region = zi(q, Ï ). Regardless of of Ï of size δqi around qi(zi = 0) = 1 â the choice of Ï , zj(q + δqi, Ï ) = zj(q, Ï ). The third line ï¬ xes Ï
i to the transition between zi = 0 and zi = 1 at qi(zi = 0). Since zi = 0 implies ζi = 0,17 and ζ is a continuous function of Ï , the third line implies that ζi = 0. At the same time, since qi is only a function of Ï k<i from earlier in the hierarchy, the term â qi â Ï is not affected by the choice of Ï i.18 As noted above, due to the chain rule, the perturbation δqi has no effect on other 17We chose the conditional distribution r(ζi|zi = 0) to be a delta spike at zero. 18In contrast, zi is a function of Ï i. 26
q_j by definition; the gradient is evaluated with those values held constant. On the other hand, ∂q_i/∂φ is generally nonzero for all parameters governing hierarchical levels k < i.

Since ρ_i is fixed such that ζ_i = 0, all units further down the hierarchy must be sampled consistent with this restriction. A sample from ρ has ζ_i = 0 if z_i = 0, which occurs with probability q_i(z_i = 0).19 We can compute the gradient with a stochastic approximation by multiplying each sample by 1 − z_i, so that terms with ζ_i ≠ 0 are ignored,20 and scaling up the gradient by 1/q_i(z_i = 0) = 1/(1 − q_i(z_i = 1)) when z_i = 0:

∂/∂φ E[ W_ij · z_i · z_j ] = E_ρ[ W_ij · z_j · ( 1 − z_i )/( 1 − q_i(z_i = 1) ) · ∂q_i(z_i = 1)/∂φ ].    (31)

The term ( 1 − z_i )/( 1 − q_i(z_i = 1) ) is not necessary if j comes before i in the hierarchy.
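A schematic NumPy reading of Equation 31 (ours, not from the paper), for a single pair (i, j) with z_i earlier in the hierarchy than z_j and a single scalar parameter φ; dq1_dphi stands in for ∂q_i(z_i = 1)/∂φ, which would come from backpropagation in practice. Averaging the returned value over many samples of ρ approximates the gradient of that term.

```python
import numpy as np

def eq31_single_term(z, q1, dq1_dphi, W, i, j):
    """One-sample estimate of d/dphi E[W_ij z_i z_j] from Eq. 31, for a pair (i, j)
    with z_i earlier in the hierarchy than z_j.
    z: sampled binary latents; q1: q(z = 1 | ...); dq1_dphi: dq(z = 1)/dphi."""
    reweight = (1.0 - z[i]) / max(1.0 - q1[i], 1e-12)   # keeps only samples with z_i = 0
    return W[i, j] * z[j] * reweight * dq1_dphi[i]

rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n))                 # stand-in RBM couplings
q1 = rng.uniform(0.1, 0.9, size=n)          # stand-in q(z = 1 | ...)
z = (rng.random(n) < q1).astype(float)      # one sample of the binary latents
dq1_dphi = rng.normal(scale=0.1, size=n)    # stand-in for dq_i(z_i = 1)/dphi
print(eq31_single_term(z, q1, dq1_dphi, W, i=0, j=2))
```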