Dataset schema: doi (string, length 10); chunk-id (int64, 0–936); chunk (string, 401–2.02k chars); id (string, 12–14 chars); title (string, 8–162 chars); summary (string, 228–1.92k chars); source (string, 31 chars); authors (string, 7–6.97k chars); categories (string, 5–107 chars); comment (string, 4–398 chars); journal_ref (string, 8–194 chars); primary_category (string, 5–17 chars); published (string, 8 chars); updated (string, 8 chars); references (list).
1609.02200
10
1.2 RELATED WORK Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016), Hamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed, 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the posterior distribution. Ladder variational autoencoders (Sønderby et al., 2016) increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling (Du et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions (Johnson et al., 2016).
1609.02200#10
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B. Prior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units.
1609.02200#11
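To see concretely why the naive estimator is high-variance and how a baseline helps, recall the score-function (REINFORCE) identity ∇_φ E_{q_φ(z)}[f(z)] = E_{q_φ(z)}[f(z) ∇_φ log q_φ(z)]; subtracting a constant baseline b from f(z) leaves the estimator unbiased but can reduce its variance substantially. The sketch below is our own toy illustration of that effect for a single Bernoulli variable, not code from the paper; the function f and the baseline value are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def reinforce_grads(theta, f, n_samples=50_000, baseline=0.0):
    """Score-function estimates of d/d theta E_{z ~ Bernoulli(sigmoid(theta))}[f(z)].
    For a Bernoulli parameterized by a logit, d log q(z)/d theta = z - sigmoid(theta)."""
    p = 1.0 / (1.0 + np.exp(-theta))
    z = (rng.uniform(size=n_samples) < p).astype(float)
    score = z - p
    return (f(z) - baseline) * score          # per-sample gradient estimates

theta = 0.5
f = lambda z: 10.0 + z                        # large constant offset: the typical variance culprit
p = 1.0 / (1.0 + np.exp(-theta))
exact = p * (1.0 - p)                         # d E[f]/d theta = d p/d theta = p(1 - p)

plain = reinforce_grads(theta, f, baseline=0.0)
with_baseline = reinforce_grads(theta, f, baseline=10.5)   # any constant keeps the estimator unbiased
print("exact gradient:", round(exact, 4))
print("no baseline:   mean %.4f  var %.3f" % (plain.mean(), plain.var()))
print("with baseline: mean %.4f  var %.3f" % (with_baseline.mean(), with_baseline.var()))
```

With the large constant offset in f, the baseline cuts the empirical variance by several orders of magnitude while leaving the mean unchanged.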
The generative model underlying the discrete variational autoencoder resembles a deep belief network (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltzmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer j receives connections from all previous layers i < j, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound. 2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING CONTINUOUS LATENT VARIABLES
1609.02200#12
2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING CONTINUOUS LATENT VARIABLES When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (defined by Equation 5 and Appendix A) by augmenting the latent representation with a set of continuous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4. We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter (Footnote 3: Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model. In contrast to a traditional RBM, there is no distinction between the “visible” units and the “hidden” units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome “fully hidden bipartite Boltzmann machine.”) Figure 1 panels: (a) Approximating posterior q(ζ, z|x); (b) Prior p(x, ζ, z); (c) Autoencoding term
1609.02200#13
(a) Approximating posterior q(ζ, z|x) (b) Prior p(x, ζ, z) (c) Autoencoding term Figure 1: Graphical models of the smoothed approximating posterior (a) and prior (b), and the network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent variables ζi are smoothed analogs of discrete latent variables zi, and insulate z from the observed variables x in the prior (b). This facilitates the marginalization of the discrete z in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input ρ ∼ U[0, 1]. This does not alter the fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted as adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a small minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the prior. The conceptual motivation for this approach is discussed in Appendix C.
1609.02200#14
Specifically, as shown in Figure 1a, we augment the latent representation in the approximating posterior with continuous random variables ζ,⁴ conditioned on the discrete latent variables z of the RBM: q(ζ, z | x, φ) = r(ζ | z) · q(z | x, φ), where r(ζ | z) = ∏_i r(ζi | zi). The support of r(ζ | z) for all values of z must be connected, so the marginal distribution q(ζ | x, φ) = ∑_z r(ζ | z) · q(z | x, φ) has a constant, connected support so long as 0 < q(z | x, φ) < 1. We further require that r(ζ | z) is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of q(ζ | x, φ) is differentiable in Equations 3 and 4, as we discuss in Appendix A. As shown in Figure 1b, we correspondingly augment the prior with ζ: p(ζ, z | θ) = r(ζ | z) · p(z | θ), where r(ζ | z) is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on ζ:
1609.02200#15
where r(ζ | z) is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on ζ: p(x | ζ, z, θ) = p(x | ζ, θ). (7) The smoothing distribution r(ζ | z) transforms the model into a continuous function of the distribution over z, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient. Given this expansion, we can simplify Equations 3 and 4 by dropping the dependence on z and applying Equation 16 of Appendix A, which generalizes Equation 3:
∂/∂φ E_{q(ζ|x,φ)}[log p(x | ζ, θ)] = E_{ρ∼U(0,1)ⁿ}[∂/∂φ log p(x | F^{-1}_{q(ζ|x,φ)}(ρ), θ)]. (8)
(Footnote 4: We always use a variant of z for latent variables. This is zeta, or Greek z. The discrete latent variables z can conveniently be thought of as English z.) If the approximating posterior is factorial, then each Fi is an independent CDF, without conditioning or marginalization.
1609.02200#16
As we shall demonstrate in Section 2.1, F^{-1}_{q(ζ|x,φ)}(ρ) is a function of q(z = 1 | x, φ), where q(z = 1 | x, φ) is a deterministic probability value calculated by a parameterized function, such as a neural network. The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input x is passed into a deterministic feedforward network q(z = 1 | x, φ), for which the final nonlinearity is the logistic function. Its output q, along with an independent random variable ρ ∼ U[0, 1], is passed into the deterministic function F^{-1}_{q(ζ|x,φ)}(ρ) to produce a sample of ζ. This ζ, along with the original input x, is finally passed to log p(x | ζ, θ). The expectation of this log probability with respect to ρ is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the input and the independent ρ, this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient.
1609.02200#17
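To make the data flow of Figure 1c concrete, the sketch below implements a single forward pass in NumPy: a logistic "encoder" produces q(z = 1 | x, φ), an independent ρ ∼ U[0, 1] is pushed through an inverse smoothing CDF to give ζ, and ζ parameterizes a Bernoulli decoder whose log-probability is the autoencoding term. This is an illustrative assumption rather than the authors' code: the inverse CDF uses the spike-and-exponential smoothing of Section 2.1 in a closed form we derive ourselves, and W_enc, W_dec, β are placeholder parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_and_exp_inverse_cdf(rho, q, beta=3.0):
    """Assumed inverse of the marginal CDF of zeta under spike-and-exponential smoothing.
    Returns 0 where rho <= 1 - q (the spike), and an exponential quantile in (0, 1] otherwise."""
    out = np.zeros_like(q)
    active = rho > 1.0 - q
    scaled = (rho[active] - (1.0 - q[active])) / q[active]        # in (0, 1]
    out[active] = np.log1p(scaled * np.expm1(beta)) / beta
    return out

def forward_pass(x, W_enc, b_enc, W_dec, b_dec, beta=3.0):
    # Encoder: q(z = 1 | x, phi) with a logistic final nonlinearity.
    q = 1.0 / (1.0 + np.exp(-(x @ W_enc + b_enc)))
    # Reparameterization: independent rho ~ U[0, 1], deterministic map to zeta.
    rho = rng.uniform(size=q.shape)
    zeta = spike_and_exp_inverse_cdf(rho, q, beta)
    # Decoder: Bernoulli log-likelihood log p(x | zeta, theta).
    logits = zeta @ W_dec + b_dec
    log_p = np.sum(x * logits - np.logaddexp(0.0, logits), axis=-1)
    return log_p.mean()

x = rng.integers(0, 2, size=(16, 784)).astype(float)   # toy binary "images"
W_enc = 0.01 * rng.standard_normal((784, 32)); b_enc = np.zeros(32)
W_dec = 0.01 * rng.standard_normal((32, 784)); b_dec = np.zeros(784)
print("autoencoding term estimate:", forward_pass(x, W_enc, b_enc, W_dec, b_dec))
```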
2.1 SPIKE-AND-EXPONENTIAL SMOOTHING TRANSFORMATION As a concrete example consistent with sparse coding, consider the spike-and-exponential transformation from binary z to continuous ζ:
r(ζi | zi = 0) = { ∞, if ζi = 0; 0, otherwise },   F_{r(ζi|zi=0)}(ζ') = 1;
r(ζi | zi = 1) = { β·e^{β·ζi} / (e^β − 1), if 0 ≤ ζi ≤ 1; 0, otherwise },   F_{r(ζi|zi=1)}(ζ') = (e^{β·ζ'} − 1) / (e^β − 1),
where F_p(ζ') = ∫₀^{ζ'} p(ζ) · dζ is the CDF of probability distribution p in the domain [0, 1]. This transformation from zi to ζi is invertible: ζi = 0 ⇔ zi = 0, and ζi > 0 ⇔ zi = 1 almost surely.⁵ We can now find the CDF for q(ζ | x, φ) as a function of q(z = 1 | x, φ) in the domain (0, 1], marginalizing out the discrete z:
1609.02200#18
where we use the substitution q(z = 1 | x, φ) → q to simplify notation. For all values of the independent random variable ρ ∼ U[0, 1], the function F^{-1}_{q(ζ|x,φ)}(ρ) rectifies the input q(z = 1 | x, φ) if q ≤ 1 − ρ, in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is also quasi-sigmoidal, in that F^{-1} is increasing but concave-down if q > 1 − ρ. The effect of ρ on F^{-1} is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c. Other expansions to the continuous space are possible. In Appendix D.1, we consider the case where both r(ζi | zi = 0) and r(ζi | zi = 1) are linear functions of ζ; in Appendix D.2, we develop a spike-and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous ζ is directly dependent on the input x in addition to the discrete z.
1609.02200#20
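The rectifying behavior described above is easy to check numerically. Under the same assumed closed form for the inverse marginal CDF as in the earlier sketch, the snippet below tabulates F^{-1}_{q(ζ|x,φ)}(ρ) over a grid of q for a few fixed values of ρ: the output is exactly 0 until q exceeds 1 − ρ, and then rises smoothly and concave-down, giving the noisy, quasi-sigmoidal rectifier the text describes. The particular ρ and β values are our own choices for illustration.

```python
import numpy as np

def f_inv(rho, q, beta=3.0):
    """Assumed closed form: inverse marginal CDF of the spike-and-exponential smoothing.
    zeta = 0 for rho <= 1 - q; otherwise an exponential quantile in (0, 1]."""
    if rho <= 1.0 - q:
        return 0.0
    return np.log1p((rho - (1.0 - q)) / q * np.expm1(beta)) / beta

for rho in (0.2, 0.5, 0.8):
    row = [f_inv(rho, q) for q in np.linspace(0.05, 0.95, 10)]
    print(f"rho={rho}: " + " ".join(f"{z:.2f}" for z in row))
# Each row stays at 0.00 while q <= 1 - rho, then increases toward 1: a noisy,
# quasi-sigmoidal rectifier whose threshold is set by the random input rho.
```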
(Footnote 5: In the limit β → ∞, ζi = zi almost surely, and the continuous variables ζ can effectively be removed from the model. This trick can be used after training with finite β to produce a model without smoothing variables ζ.) (a) Spike-and-exp, β ∈ {1, 3, 5} (b) ReLU with dropout (c) ReLU with batch norm Figure 2: Spike-and-exponential smoothing transformation F^{-1}_{q(ζ|x,φ)}(ρ); β = 1 (dotted), β = 3 (solid), and β = 5 (dashed) (a). Rectified linear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization; with magnitude 0.3 (dashed), 0.3 (dotted), or 0 (solid blue); before a rectified linear unit (c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity F^{-1}_{q(ζ|x,φ)}(ρ) from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).
1609.02200#21
3 ACCOMMODATING EXPLAINING-AWAY WITH A HIERARCHICAL APPROXIMATING POSTERIOR When a probabilistic model is defined in terms of a prior distribution p(z) and a conditional distribution p(x | z), the observation of x often induces strong correlations in the posterior p(z | x) due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)). To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior q(z | x) over the discrete latent variables. Specifically, we divide the latent variables z of the RBM into disjoint groups, z1, . . . , zk,⁶ and define the approximating posterior via a directed acyclic graphical model over these groups:
1609.02200#22
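As a toy illustration of the explaining-away problem (our own example, not from the paper), consider two independent binary causes and an observation that is active whenever either cause is active. Conditioning on the observation makes the causes anti-correlated, which no factorial posterior can represent, whereas a hierarchical posterior whose second group is conditioned on the first can.

```python
import itertools
import numpy as np

# Two independent binary causes with prior p(z_i = 1) = 0.1; the observation x = 1
# occurs iff at least one cause is active (a deterministic OR likelihood).
prior = 0.1
joint = {}
for z1, z2 in itertools.product([0, 1], repeat=2):
    p_z = (prior if z1 else 1 - prior) * (prior if z2 else 1 - prior)
    p_x_given_z = 1.0 if (z1 or z2) else 0.0
    joint[(z1, z2)] = p_z * p_x_given_z

norm = sum(joint.values())
posterior = {k: v / norm for k, v in joint.items()}          # p(z1, z2 | x = 1)

p1 = sum(v for (z1, _), v in posterior.items() if z1 == 1)   # marginal p(z1 = 1 | x)
p2 = sum(v for (_, z2), v in posterior.items() if z2 == 1)
p11 = posterior[(1, 1)]
print("p(z1=1|x) =", round(p1, 3), " p(z2=1|x) =", round(p2, 3))
print("p(z1=1, z2=1 | x) =", round(p11, 3), " vs factorial product =", round(p1 * p2, 3))
# The joint probability of both causes is far below the product of the marginals:
# the posterior is anti-correlated, so a factorial q(z|x) cannot match it.
```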
q(z1, ζ1, . . . , zk, ζk | x, φ) = ∏_{1≤j≤k} r(ζj | zj) · q(zj | ζ_{i<j}, x, φ), where
q(zj | ζ_{i<j}, x, φ) = exp(gj(ζ_{i<j}, x, φ)ᵀ · zj) / ∏_{z_ι ∈ zj} (1 + exp(g_ι(ζ_{i<j}, x, φ))), (10)
zj ∈ {0, 1}ⁿ, and gj(ζ_{i<j}, x, φ) is a parameterized function of the inputs and preceding ζi, such as a neural network. The corresponding graphical model is depicted in Figure 3a, and the integration of such hierarchical approximating posteriors into the reparameterization trick is discussed in Appendix A. If each group zj contains a single variable, this dependence structure is analogous to that of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution. However, the dependence of zj on the preceding discrete variables z_{i<j} is always mediated by the continuous variables ζ_{i<j}.
1609.02200#23
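The autoregressive structure of the hierarchical approximating posterior is easiest to read as a sampling loop: each group's Bernoulli probabilities are computed from x and all previously sampled ζ_{i<j}, and each ζ_j is then drawn through the same inverse-CDF reparameterization as before. The sketch below is a minimal illustration with assumed linear stand-ins for the networks g_j and the spike-and-exponential smoothing; the shapes and parameters are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def f_inv(rho, q, beta=3.0):
    # Assumed spike-and-exponential inverse marginal CDF (zeta = 0 when rho <= 1 - q).
    out = np.zeros_like(q)
    m = rho > 1.0 - q
    out[m] = np.log1p((rho[m] - (1.0 - q[m])) / q[m] * np.expm1(beta)) / beta
    return out

def sample_hierarchical_posterior(x, layers, beta=3.0):
    """layers[j] = (W_x, W_zeta, b): a linear stand-in for g_j(zeta_{i<j}, x, phi)."""
    zetas, qs = [], []
    context = np.zeros(0)                       # concatenation of all previous zeta_{i<j}
    for W_x, W_zeta, b in layers:
        logits = x @ W_x + context @ W_zeta + b
        q = sigmoid(logits)                     # q(z_j = 1 | zeta_{i<j}, x, phi)
        rho = rng.uniform(size=q.shape)
        zeta = f_inv(rho, q, beta)              # reparameterized sample of zeta_j
        qs.append(q); zetas.append(zeta)
        context = np.concatenate([context, zeta])
    return qs, zetas

# Toy setup: 3 groups of 4 units each, conditioned on a 20-dimensional input.
x = rng.uniform(size=20)
layers, ctx_dim = [], 0
for _ in range(3):
    layers.append((0.1 * rng.standard_normal((20, 4)),
                   0.1 * rng.standard_normal((ctx_dim, 4)),
                   np.zeros(4)))
    ctx_dim += 4
qs, zetas = sample_hierarchical_posterior(x, layers)
print([np.round(z, 2) for z in zetas])
```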
This hierarchical approximating posterior does not affect the form of the autoencoding term in Equation 8, except to increase the depth of the autoencoder, as shown in Figure 3b. The deterministic probability value q(zj = 1 | ζ_{i<j}, x, φ) of Equation 10 is parameterized, generally by a neural network, in a manner analogous to Section 2. However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer j of the autoencoder, input x and all previous ζ_{i<j} are passed into the network computing q(zj = 1 | ζ_{i<j}, x, φ). Its output qj, along with an (Footnote 6: The continuous latent variables ζ are divided into complementary disjoint groups ζ1, . . . , ζk.) (a) Hierarch approx post q(ζ, z|x) (b) Hierarchical ELBO autoencoding term
1609.02200#24
(a) Hierarch approx post q(ζ, z|x) (b) Hierarchical ELBO autoencoding term Figure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables zj only depend on the previous z_{i<j} through their smoothed analogs ζ_{i<j}. The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input ρ. independent random variable ρ ∼ U[0, 1], is passed into the deterministic function F^{-1}_{q(ζj | ζ_{i<j}, x, φ)}(ρ) to produce a sample of ζj. Once all ζj have been recursively computed, the full ζ along with the original input x is finally passed to log p(x | ζ, θ). The expectation of this log probability with respect to ρ is again the autoencoding term of the VAE formalism, as in Equation 2. In Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using:
∂KL[q ∥ p]/∂θ = E_{q(ζ,z|x,φ)}[∂E_p(z, θ)/∂θ] − E_{p(z|θ)}[∂E_p(z, θ)/∂θ]. (11)
1609.02200#25
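For the RBM prior, the gradient of the KL term with respect to θ reduces to the familiar difference between a positive phase (the energy gradient averaged under the approximating posterior) and a negative phase (averaged under the prior). The sketch below illustrates that standard estimator under an assumed bipartite energy E_p(z, θ) = −z_aᵀ W z_b − b_aᵀ z_a − b_bᵀ z_b, with placeholder random bits standing in for the actual posterior and prior samples; it is our own illustration, not the paper's implementation.

```python
import numpy as np

def energy_grads(z_a, z_b):
    """Gradients of E(z) = -z_a^T W z_b - b_a^T z_a - b_b^T z_b with respect to
    W, b_a, b_b, averaged over a batch of samples."""
    dW = -np.einsum('ni,nj->ij', z_a, z_b) / len(z_a)
    db_a = -z_a.mean(axis=0)
    db_b = -z_b.mean(axis=0)
    return dW, db_a, db_b

def kl_grad_theta(posterior_samples, prior_samples):
    """Stochastic estimate of d KL[q || p] / d theta as positive phase minus negative phase."""
    pos = energy_grads(*posterior_samples)   # z ~ q(z | x, phi), e.g. from the hierarchical posterior
    neg = energy_grads(*prior_samples)       # z ~ p(z | theta), e.g. from persistent Gibbs chains
    return tuple(p - n for p, n in zip(pos, neg))

rng = np.random.default_rng(2)
# Placeholder binary samples stand in for real draws from q(z | x, phi) and p(z | theta).
post = (rng.integers(0, 2, (64, 8)).astype(float), rng.integers(0, 2, (64, 8)).astype(float))
prior = (rng.integers(0, 2, (256, 8)).astype(float), rng.integers(0, 2, (256, 8)).astype(float))
dW, db_a, db_b = kl_grad_theta(post, prior)
print(dW.shape, db_a.shape, db_b.shape)
```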
4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES We can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus on continuous variables, which have proven to be powerful in generative adversarial networks (Goodfellow et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complements the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects. Specifically, we augment the latent representation with continuous random variables z,⁷ and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphical models. We use the same autoregressive variable order for the approximating posterior as for the (Footnote 7: We always use a variant of z for latent variables. This is Fraktur z, or German z.)
1609.02200#27
(Footnote 7: We always use a variant of z for latent variables. This is Fraktur z, or German z.) (a) Approx post w/ cont latent vars q(z, ζ, z|x) (b) Prior w/ cont latent vars p(x, z, ζ, z) Figure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1b respectively. The continuous latent variables z build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables z, which can represent the discrete types of objects in the image. prior, as in DRAW (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015), the deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Sønderby et al., 2016). We discuss the motivation for this ordering in Appendix G. The directed graphical models of the approximating posterior and prior are defined by:
1609.02200#28
The directed graphical models of the approximating posterior and prior are defined by:
q(z0, . . . , zn | x, φ) = ∏_{0≤m≤n} q(zm | z_{l<m}, x, φ)   and   p(z0, . . . , zn | θ) = ∏_{0≤m≤n} p(zm | z_{l<m}, θ). (13)
The full set of latent variables associated with the RBM is now denoted by z0 = {z1, ζ1, . . . , zk, ζk}. However, the conditional distributions in Equation 13 only depend on the continuous ζj. Each z_{m≥1} denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model. The ELBO decomposes as:
L(x, θ, φ) = E_{q(z|x,φ)}[log p(x | z, θ)] − ∑_m E_{q(z_{l<m}|x,φ)}[KL[q(zm | z_{l<m}, x, φ) ∥ p(zm | z_{l<m}, θ)]]. (14)
1609.02200#29
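Equation 14 splits the ELBO into a reconstruction term plus one KL term per layer of continuous latent variables, each conditioned on the layers above it. The sketch below accumulates that sum for a single input using assumed diagonal-Gaussian conditionals, linear stand-ins for the networks, and a placeholder reconstruction term; only the closed-form Gaussian KL is standard.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_kl(mu_q, logsig_q, mu_p, logsig_p):
    """KL[N(mu_q, sig_q^2) || N(mu_p, sig_p^2)] for diagonal Gaussians, summed over dimensions."""
    var_q, var_p = np.exp(2 * logsig_q), np.exp(2 * logsig_p)
    return np.sum(logsig_p - logsig_q + (var_q + (mu_q - mu_p) ** 2) / (2 * var_p) - 0.5)

def layerwise_elbo_terms(x, n_layers=3, dim=4):
    """Sample each layer from q, conditioning on the layers above, and collect the KL terms."""
    kls, context = [], x
    for _ in range(n_layers):
        # Stand-ins for this layer's posterior and prior networks (assumed, not the paper's).
        mu_q, logsig_q = 0.1 * context[-dim:], -1.0 * np.ones(dim)
        mu_p, logsig_p = np.zeros(dim), np.zeros(dim)
        eps = rng.standard_normal(dim)
        z = mu_q + np.exp(logsig_q) * eps              # reparameterized sample of this layer
        kls.append(gaussian_kl(mu_q, logsig_q, mu_p, logsig_p))
        context = np.concatenate([context, z])         # deeper layers condition on z_{l<m}
    recon = -0.5 * np.sum((x - context[-dim:]) ** 2)   # placeholder for log p(x | z, theta)
    return recon, kls

x = rng.standard_normal(8)
recon, kls = layerwise_elbo_terms(x)
print("ELBO estimate:", recon - sum(kls), " per-layer KLs:", [round(k, 2) for k in kls])
```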
5 RESULTS Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approximating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution r(ζ | z) discussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014; Rezende et al., 2014), we define all approximating posteriors q to be explicit functions of x, with parameters φ shared between all inputs x. For distributions over discrete variables, the neural networks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous z, the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neural networks parameterizing the distributions over z, z, and x consists of a linear transformation,
1609.02200#31
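The output heads described here are simple: a logistic layer yields the factorial Bernoulli parameters for the discrete groups, and a linear layer yields the mean and log-standard deviation of a diagonal Gaussian for each continuous layer, sampled with the usual reparameterization. A minimal sketch with assumed shapes and a shared hidden layer that is not meant to match the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(4)
relu = lambda a: np.maximum(a, 0.0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def make_params(d_in, d_hid, d_bern, d_gauss):
    return {
        "W_h": 0.1 * rng.standard_normal((d_in, d_hid)), "b_h": np.zeros(d_hid),
        "W_q": 0.1 * rng.standard_normal((d_hid, d_bern)), "b_q": np.zeros(d_bern),
        "W_mu": 0.1 * rng.standard_normal((d_hid, d_gauss)), "b_mu": np.zeros(d_gauss),
        "W_ls": 0.1 * rng.standard_normal((d_hid, d_gauss)), "b_ls": np.zeros(d_gauss),
    }

def heads(x, p):
    h = relu(x @ p["W_h"] + p["b_h"])
    q_bern = sigmoid(h @ p["W_q"] + p["b_q"])      # factorial Bernoulli probabilities (logistic head)
    mu = h @ p["W_mu"] + p["b_mu"]                 # Gaussian mean (linear head)
    log_sigma = h @ p["W_ls"] + p["b_ls"]          # Gaussian log-standard deviation (linear head)
    z = mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)   # reparameterized continuous sample
    return q_bern, mu, log_sigma, z

params = make_params(d_in=784, d_hid=128, d_bern=32, d_gauss=16)
x = rng.uniform(size=784)
q_bern, mu, log_sigma, z = heads(x, params)
print(q_bern.shape, mu.shape, log_sigma.shape, z.shape)
```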
batch normalization (Ioffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear pointwise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM prior p(z | θ) in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence (Tieleman, 2008). We minimize the ELBO using ADAM (Kingma & Ba, 2015) with a decaying step size. The hierarchical structure of Section 4 is very powerful, and overfits without strong regularization of the prior, as shown in Appendix H. In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the input p(x | ζ, θ) without any deterministic hidden layers, except on Omniglot. Moreover, all other neural networks in the prior have only one hidden layer, the size of which is carefully controlled. On statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers of the hierarchy over z. We present the details of the architecture in Appendix H.
1609.02200#32
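Because the two sides of the bipartite RBM are conditionally independent given each other, block Gibbs sampling alternates between resampling all the units on one side and then the other; keeping the chains persistent across minibatches yields the PCD-style negative-phase samples used in Equation 11. The sketch below uses assumed parameter shapes and is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

class PersistentRBMChains:
    """Persistent block Gibbs chains for a bipartite Boltzmann machine with
    energy E(z_a, z_b) = -z_a^T W z_b - b_a^T z_a - b_b^T z_b."""

    def __init__(self, n_chains, n_a, n_b):
        self.W = 0.01 * rng.standard_normal((n_a, n_b))
        self.b_a, self.b_b = np.zeros(n_a), np.zeros(n_b)
        self.z_a = rng.integers(0, 2, (n_chains, n_a)).astype(float)
        self.z_b = rng.integers(0, 2, (n_chains, n_b)).astype(float)

    def gibbs_step(self):
        # Resample each side given the other; both conditionals are factorial Bernoullis.
        p_a = sigmoid(self.z_b @ self.W.T + self.b_a)
        self.z_a = (rng.uniform(size=p_a.shape) < p_a).astype(float)
        p_b = sigmoid(self.z_a @ self.W + self.b_b)
        self.z_b = (rng.uniform(size=p_b.shape) < p_b).astype(float)

    def negative_phase_samples(self, n_steps):
        # Run a few block Gibbs iterations per minibatch; the chains persist between calls.
        for _ in range(n_steps):
            self.gibbs_step()
        return self.z_a, self.z_b

chains = PersistentRBMChains(n_chains=100, n_a=64, n_b=64)
z_a, z_b = chains.negative_phase_samples(n_steps=20)
print(z_a.mean(), z_b.mean())
```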
We train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Omniglot⁸ (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST, we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood⁹ of these models, computed using the method of Burda et al. (2016) with 10^4 importance-weighted samples, are listed in Table 1. The reported log-likelihoods for discrete VAEs are the average of 16 runs; the standard deviations of these log-likelihoods are 0.08, 0.04, 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and 0.66.
1609.02200#33
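The log-likelihood estimates referred to here use the importance-weighted bound of Burda et al. (2016): draw k samples from the approximating posterior, form the importance weights p(x, z)/q(z | x), and take the log of their average. The sketch below shows that estimator on a toy Gaussian model where the exact marginal likelihood is known; it is our own example, and the paper applies the same estimator with k = 10^4 to the full discrete VAE.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_gauss(x, mu, sigma):
    return -0.5 * np.sum(((x - mu) / sigma) ** 2 + 2 * np.log(sigma) + np.log(2 * np.pi), axis=-1)

def iwae_log_likelihood(x, k=10_000):
    """log (1/k) sum_i p(x, z_i) / q(z_i | x) with z_i ~ q(z | x): a lower bound on log p(x)
    that tightens as k grows. Toy model: p(z) = N(0, 1), p(x | z) = N(z, 1), q(z | x) = N(x/2, 0.8)."""
    mu_q, sigma_q = x / 2.0, 0.8 * np.ones_like(x)
    z = mu_q + sigma_q * rng.standard_normal((k, x.size))
    log_w = (log_gauss(z, 0.0, 1.0)            # log p(z)
             + log_gauss(x, z, 1.0)            # log p(x | z)
             - log_gauss(z, mu_q, sigma_q))    # - log q(z | x)
    return np.logaddexp.reduce(log_w) - np.log(k)

x = np.array([0.7])
# The exact marginal for this toy model is N(x; 0, 2), since x = z + noise with z ~ N(0, 1).
exact = -0.5 * (x[0] ** 2 / 2.0 + np.log(2 * np.pi * 2.0))
print("IWAE estimate:", iwae_log_likelihood(x), " exact:", exact)
```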
Table 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with 10^4 importance-weighted samples (Burda et al., 2016). For comparison, we also report performance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.
MNIST (dynamic binarization), LL: DBN -84.55; IWAE -82.90; Ladder VAE -81.74; Discrete VAE -80.15.
MNIST (static binarization), ELBO and LL: -88.30, -87.40, -85.10, -85.51, -83.67.
Omniglot, LL: IWAE -103.38; Ladder VAE -102.11; RBM -100.46; DBN -100.45; Discrete VAE -97.43.
Caltech-101 Silhouettes, LL: IWAE -117.2; RWS SBN -113.3; RBM -107.8; NAIS NADE -100.0; Discrete VAE -97.6.
1609.02200#34
We further analyze the performance of discrete VAEs on dynamically binarized MNIST: the largest of the datasets, requiring the least regularization. Figure 5 shows the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens of (Footnote 8: We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT.) (Footnote 9: The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1.)
1609.02200#35
[Figure 5: grid of MNIST digit samples generated by the discrete VAE as the persistent RBM Markov chain evolves; see the caption in the following chunk.]
1609.02200#37
Figure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (or occasionally two) digit IDs, despite being trained in a wholly unsupervised manner. (a) Block Gibbs iterations (b) Num RBM units (c) RBM approx post layers Figure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a), the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better performance, but the network is robust to the size of the RBM (b). thousands of iterations of single-temperature block Gibbs sampling is required to mix between the modes. We present corresponding figures for the other datasets, and results on simplified architectures, in Appendix J.
1609.02200#39
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
40
The large mixing time of block Gibbs sampling on the RBM suggests that training may be constrained by sample quality. Figure 6a shows that performance improves as we increase the number of iterations of block Gibbs sampling performed per minibatch on the RBM prior p(z | θ) in Equation 11. This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986). (All models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency.) Commensurate with the small number of intrinsic classes, a moderately sized RBM yields the best performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like ImageNet, which has many classes and complicated relationships between the elements of various classes.
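As a concrete illustration of the block Gibbs sampling discussed above, the sketch below runs persistent chains on a bipartite RBM in NumPy. It is only a toy: the parameter names, sizes, number of chains, and iteration count are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def block_gibbs_step(z1, W, b1, b2, rng):
    """One block Gibbs iteration on a bipartite RBM: resample side 2 given side 1,
    then side 1 given side 2 (each side is conditionally independent given the other)."""
    p2 = sigmoid(z1 @ W + b2)
    z2 = (rng.random(p2.shape) < p2).astype(np.float64)
    p1 = sigmoid(z2 @ W.T + b1)
    z1 = (rng.random(p1.shape) < p1).astype(np.float64)
    return z1, z2

# Illustrative sizes: 128 RBM units split into two sides of 64, with 100 persistent chains.
n1, n2, n_chains = 64, 64, 100
W = 0.01 * rng.standard_normal((n1, n2))   # hypothetical RBM parameters
b1, b2 = np.zeros(n1), np.zeros(n2)

# Persistent chains: the state is carried across parameter updates, and a fixed number
# of block Gibbs iterations (e.g. 100) is run on the prior per minibatch.
z1 = (rng.random((n_chains, n1)) < 0.5).astype(np.float64)
for _ in range(100):
    z1, z2 = block_gibbs_step(z1, W, b1, b2, rng)
```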
1609.02200#40
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
41
The benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the approximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approximating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting. # 6 CONCLUSION Datasets consisting of a discrete set of classes are naturally modeled using discrete latent variables. However, it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013).
1609.02200#41
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
42
We avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively in the continuous space, marginalizing out the original discrete latent representation. At the same time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does not contribute to the KL term. To increase representational power, we make the approximating posterior over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variables below them. The resulting discrete variational autoencoder achieves state-of-the-art performance on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. # ACKNOWLEDGEMENTS Zhengbing Bian, Fabian Chudak, and Arash Vahdat helped run experiments. Jack Raymond provided the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and one of our anonymous reviewers for identifying the problem addressed in Appendix D.3. # REFERENCES
1609.02200#42
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
43
Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pp. 3084–3092, 2013. Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Charles H. Bennett. Efficient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2):245–268, 1976. Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2751, 2015. Jörg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional Helmholtz machines. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2511–2519, 2016. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 10–21, 2016.
1609.02200#43
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
44
Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015. Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Proceedings of the International Conference on Learning Representations, arXiv:1509.00519, 2016. Steve Cheng. Differentiation under the integral sign with weak derivatives. Technical report, Working paper, 2006. KyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 25(3):805–831, 2013. Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980–2988, 2015.
1609.02200#44
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
45
Aaron C. Courville, James S. Bergstra, and Yoshua Bengio. Unsupervised models of images by spike-and-slab RBMs. In Proceedings of the 28th International Conference on Machine Learning, pp. 1145–1152, 2011. Paul Dagum and Michael Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artificial Intelligence, 60(1):141–153, 1993. Chao Du, Jun Zhu, and Bo Zhang. Learning deep generative models with doubly stochastic MCMC. arXiv preprint arXiv:1506.04557, 2015. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014. Alex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016.
1609.02200#45
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
46
Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, pp. 1242–1250, 2014. Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1462–1471, 2015. Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006. Geoffrey E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems 6, pp. 3–10. Morgan Kaufmann Publishers, Inc., 1994.
1609.02200#46
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
47
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pp. 448–456, 2015. Matthew Johnson, David K. Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pp. 2946–2954, 2016. Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, arXiv:1412.6980, 2015. Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
1609.02200#47
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
48
Durk P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations, arXiv:1312.6114, 2014. Brenden M. Lake, Ruslan R. Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems, pp. 2526–2534, 2013. Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Yingzhen Li and Richard E. Turner. Variational inference with Rényi divergence. arXiv preprint arXiv:1602.02311, 2016. Philip M. Long and Rocco Servedio. Restricted Boltzmann machines are hard to approximately evaluate or simulate. In Proceedings of the 27th International Conference on Machine Learning, pp. 703–710, 2010.
1609.02200#48
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
49
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. Benjamin M. Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 509–516, 2010. Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning, pp. 1791–1799, 2014. Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2188–2196, 2016. Iain Murray and Ruslan R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, pp. 1137–1144, 2009. Radford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71–113, 1992.
1609.02200#49
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
50
Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. John Paisley, David M. Blei, and Michael I. Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, 2012. Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988. Tapani Raiko, Harri Valpola, Markus Harva, and Juha Karhunen. Building blocks for variational Bayesian learning of latent variable models. Journal of Machine Learning Research, 8:155–201, 2007. Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2989, 2015. Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554, 2015.
1609.02200#50
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
51
Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1530–1538, 2015. Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014. Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, pp. 448–455, 2009. Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 872–879. ACM, 2008. Tim Salimans. A structured variational auto-encoder for learning deep hierarchies of sparse features. arXiv preprint arXiv:1602.08734, 2016.
1609.02200#51
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
52
Tim Salimans, Diederik P. Kingma, Max Welling, et al. Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1218–1226, 2015. Michael R. Shirts and John D. Chodera. Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics, 129(12), 2008. Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, volume 1, chapter 6, pp. 194–281. MIT Press, Cambridge, 1986. Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems, pp. 3738–3746, 2016. David J. Spiegelhalter and Steffen L. Lauritzen. Sequential updating of conditional probabilities on directed graphical structures. Networks, 20(5):579–605, 1990.
1609.02200#52
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
53
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014. Robert H. Swendsen and Jian-Sheng Wang. Replica Monte Carlo simulation of spin-glasses. Physical Review Letters, 57(21):2607, 1986. Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064–1071. ACM, 2008. Dustin Tran, Rajesh Ranganath, and David M. Blei. The variational Gaussian process. In Proceedings of the International Conference on Learning Representations, arXiv:1511.06499, 2016. Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
1609.02200#53
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
54
A MULTIVARIATE VAES BASED ON THE CUMULATIVE DISTRIBUTION FUNCTION

The reparameterization trick is always possible if the cumulative distribution function (CDF) of q(z | x, φ) is invertible, and the inverse CDF is differentiable, as noted in Kingma & Welling (2014). However, for multivariate distributions, the CDF is defined by:

F(x) = ∫_{x'_1 = −∞}^{x_1} ⋯ ∫_{x'_n = −∞}^{x_n} p(x'_1, …, x'_n).

The multivariate CDF maps R^n → [0, 1], and is generally not invertible.11 In place of the multivariate CDF, consider the set of conditional-marginal CDFs defined by:12

F_j(x) = ∫_{x'_j = −∞}^{x_j} p(x'_j | x_1, …, x_{j−1}).   (15)
1609.02200#54
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
55
That is, F_j(x) is the CDF of x_j, conditioned on all x_i such that i < j, and marginalized over all x_k such that j < k. The range of each F_j is [0, 1], so F maps the domain of the original distribution to ρ ∈ [0, 1]^n. To invert F, we need only invert each conditional-marginal CDF in turn, conditioning x_j = F_j^{-1}(ρ) on x_{i<j} = F_{i<j}^{-1}(ρ). These inverses exist so long as the conditional-marginal probabilities are everywhere nonzero. It is not problematic to effectively define F_j^{-1}(ρ) based upon x_{i<j}, rather than ρ_{i<j}, since by induction we can uniquely determine x_{i<j} given ρ_{i<j}. Using integration-by-substitution, we can compute the gradient of the ELBO by taking the expectation of a uniform random variable ρ on [0, 1]^n, and using F^{-1}_{q(z|x,φ)} to transform ρ back to the element of z on which p(x | z, θ) is conditioned. To perform integration-by-substitution, we will require the determinant of the Jacobian of F^{-1}.
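To make the sequential inversion concrete, the following sketch inverts the conditional-marginal CDFs of an assumed correlated bivariate Gaussian: ρ_1 is mapped through the inverse marginal CDF of z_1, and ρ_2 through the inverse CDF of z_2 conditioned on the resulting z_1. The distribution and its parameters are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Illustrative correlated bivariate Gaussian: means, standard deviations, correlation r.
mu1, mu2, s1, s2, r = 0.0, 1.0, 1.0, 0.5, 0.8

def inverse_conditional_marginal_cdfs(rho):
    """Map rho in [0, 1]^2 to (z1, z2) by inverting F_1, then F_2 given z1."""
    z1 = mu1 + s1 * norm.ppf(rho[:, 0])              # invert the marginal CDF of z1
    cond_mean = mu2 + r * (s2 / s1) * (z1 - mu1)     # Gaussian conditional of z2 given z1
    cond_std = s2 * np.sqrt(1.0 - r ** 2)
    z2 = cond_mean + cond_std * norm.ppf(rho[:, 1])  # invert the conditional CDF of z2
    return np.stack([z1, z2], axis=1)

rho = rng.random((100_000, 2))                       # uniform on [0, 1]^2
z = inverse_conditional_marginal_cdfs(rho)
print(np.corrcoef(z.T)[0, 1])                        # close to r = 0.8: the joint is recovered
```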
1609.02200#55
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
56
The derivative of a CDF is the probability density function at the selected point, and F_j is a simple CDF when we hold fixed the variables x_{i<j} on which it is conditioned, so using the inverse function theorem we find:

∂F_j^{-1}(ρ) / ∂ρ_j = [ q(x_j = F_j^{-1}(ρ) | x_{i<j}) ]^{-1},

where ρ is a vector. The Jacobian matrix of F is triangular, since the earlier conditional-marginal CDFs F_j are independent of the values of the later x_k, j < k, over which they are marginalized. Moreover, the inverse conditional-marginal CDFs have the same dependence structure as F, so the Jacobian of F^{-1} is also triangular. The determinant of a triangular matrix is the product of its diagonal elements. 11 For instance, for the bivariate uniform distribution on the interval [0, 1]^2, the CDF is F(x, y) = x · y for 0 ≤ x, y ≤ 1, so for any 0 ≤ c ≤ 1 and c ≤ x ≤ 1, y = c/x yields F(x, y) = c. Clearly, many different pairs (x, y) yield each possible value c of F(x, y).
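The triangularity of the Jacobian and the determinant-as-product-of-diagonals property can be checked numerically, here with finite differences on the same assumed bivariate Gaussian as in the previous sketch; all values are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Conditional-marginal CDFs F = (F_1, F_2) of an assumed correlated bivariate Gaussian.
mu1, mu2, s1, s2, r = 0.0, 1.0, 1.0, 0.5, 0.8

def F(x):
    x1, x2 = x
    F1 = norm.cdf(x1, loc=mu1, scale=s1)                       # marginal CDF of x1
    cond_mean = mu2 + r * (s2 / s1) * (x1 - mu1)
    cond_std = s2 * np.sqrt(1.0 - r ** 2)
    F2 = norm.cdf(x2, loc=cond_mean, scale=cond_std)           # CDF of x2 given x1
    return np.array([F1, F2])

x, eps = np.array([0.3, 1.2]), 1e-6
J = np.column_stack([(F(x + eps * np.eye(2)[i]) - F(x - eps * np.eye(2)[i])) / (2 * eps)
                     for i in range(2)])                       # finite-difference Jacobian
print(np.round(J, 6))                        # J[0, 1] == 0: the Jacobian is triangular
print(np.linalg.det(J), J[0, 0] * J[1, 1])   # determinant equals the product of the diagonal
```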
1609.02200#56
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
57
12 The set of marginal CDFs, used to define copulas, is invertible. However, it does not generally map the original distribution to a simple joint distribution, such as a multivariate uniform distribution, as required for variational autoencoders. In Equation 16, q(F^{-1}_{q(z|x,φ)}(ρ) | x, φ) does not cancel out det(∂F^{-1}_{q(z|x,φ)}(ρ) / ∂ρ): the determinant of the inverse Jacobian is instead [∏_j q(z_j = F_j^{-1}(ρ))]^{-1}, which differs from [q(F^{-1}_{q(z|x,φ)}(ρ) | x, φ)]^{-1} if q is not factorial. As a result, we do not recover the variational autoencoder formulation of Equation 16.
1609.02200#57
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
58
Using these facts to perform a multivariate integration-by-substitution, we obtain:

E_{q(z|x,φ)}[log p(x | z, θ)] = ∫_z q(z | x, φ) · log p(x | z, θ) dz
  = ∫_{ρ=0}^{1} q(F^{-1}_{q(z|x,φ)}(ρ) | x, φ) · [∏_j q(z_j = F_j^{-1}(ρ) | z_{i<j})]^{-1} · log p(x | F^{-1}_{q(z|x,φ)}(ρ), θ) dρ
  = ∫_{ρ=0}^{1} log p(x | F^{-1}_{q(z|x,φ)}(ρ), θ) dρ,   (16)

since the chain rule of probability gives q(z | x, φ) = ∏_j q(z_j | z_{i<j}, x, φ). The variable ρ has dimensionality equal to that of z; 0 is the vector of all 0s; 1 is the vector of all 1s.
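A quick Monte Carlo check of the cancellation in Equation 16, using an assumed one-dimensional Gaussian encoder and a quadratic stand-in for log p(x | z, θ): sampling z from q directly and averaging log p over uniform ρ mapped through the inverse CDF give the same value.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Assumed 1-D encoder q(z|x,phi) = N(mu, sigma^2) and a stand-in for log p(x|z,theta).
mu, sigma = 0.5, 1.3
log_p = lambda z: -0.5 * (z - 2.0) ** 2      # any smooth function of z will do

# Left-hand side of Equation 16: expectation under q, sampled directly.
z = mu + sigma * rng.standard_normal(1_000_000)
lhs = log_p(z).mean()

# Right-hand side: integral over uniform rho of log p(x | F^{-1}(rho), theta);
# the density and the inverse-Jacobian determinant have cancelled.
rho = rng.random(1_000_000)
rhs = log_p(mu + sigma * norm.ppf(rho)).mean()

print(lhs, rhs)                              # agree up to Monte Carlo error
```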
1609.02200#58
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
59
The gradient with respect to φ is then easy to approximate stochastically:

∂/∂φ E_{q(z|x,φ)}[log p(x | z, θ)] ≈ (1/N) ∑_{ρ ∼ U(0,1)^n} ∂/∂φ log p(x | F^{-1}_{q(z|x,φ)}(ρ), θ)   (17)

Note that if q(z | x, φ) is factorial (i.e., the product of independent distributions in each dimension z_j), then the conditional-marginal CDFs F_j are just the marginal CDFs in each direction. However, even if q(z | x, φ) is not factorial, Equation 17 still holds so long as F is nevertheless defined to be the set of conditional-marginal CDFs of Equation 15.
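The estimator of Equation 17 can be exercised on an assumed one-dimensional Gaussian posterior with φ = (μ, σ) and a quadratic stand-in for the decoder term, where the inverse CDF and the chain-rule derivatives are available in closed form. This is a toy illustration under those assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Assumed factorial (here 1-D) Gaussian posterior with phi = (mu, sigma),
# and log p(x|z,theta) = -0.5 * (z - a)^2 as a stand-in decoder term.
mu, sigma, a = 0.5, 1.3, 2.0
N = 1_000_000

rho = rng.random(N)
eps = norm.ppf(rho)                   # inverse CDF of a standard Gaussian
z = mu + sigma * eps                  # z = F^{-1}_{q(z|x,phi)}(rho)

dlogp_dz = -(z - a)                   # derivative of the decoder term w.r.t. z
grad_mu = dlogp_dz.mean()             # chain rule: dz/dmu = 1
grad_sigma = (dlogp_dz * eps).mean()  # chain rule: dz/dsigma = norm.ppf(rho)

print(grad_mu, a - mu)                # exact gradient w.r.t. mu is -(mu - a) = 1.5
print(grad_sigma, -sigma)             # exact gradient w.r.t. sigma is -sigma = -1.3
```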
1609.02200#59
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
60
# B THE DIFFICULTY OF ESTIMATING GRADIENTS OF THE ELBO WITH REINFORCE

It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires computationally tractable samples, and admits both discrete and continuous latent variables. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Bengio et al., 2013; Mnih & Rezende, 2016):

∂/∂φ E_{q(z|x,φ)}[log p(x | z, θ)] = E_{q(z|x,φ)}[(log p(x | z, θ) − B(x)) · ∂/∂φ log q(z | x, φ)]
  ≈ (1/N) ∑_{z ∼ q(z|x,φ)} (log p(x | z, θ) − B(x)) · ∂/∂φ log q(z | x, φ)   (18)
1609.02200#60
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
61
where B(x) is a (possibly input-dependent) baseline, which does not affect the gradient, but can reduce the variance of a stochastic estimate of the expectation. In REINFORCE, ∂/∂φ E_{q(z|x,φ)}[log p(x | z, θ)] is effectively estimated by something akin to a finite difference approximation to the derivative. The autoencoding term is a function of the conditional log-likelihood log p(x | z, θ), where q(z | x, φ) determines the value of z at which p(x | z, θ) is evaluated. However, the conditional log-likelihood is never differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the conditional log-likelihood is evaluated at many different points z ∼ q(z | x, φ), and a weighted sum of these values is used to approximate the gradient, just like in the finite difference approximation.
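For comparison, here is a toy REINFORCE estimator in the spirit of Equation 18, with a single Bernoulli latent variable and a simple input-independent baseline; the distribution, the stand-in for log p(x | z, θ), and all constants are assumptions chosen only to show that the baseline reduces variance without changing the expectation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setting: one Bernoulli latent z with q(z=1|x,phi) = sigmoid(phi),
# and f(z) standing in for log p(x|z,theta).
phi = 0.3
p = 1.0 / (1.0 + np.exp(-phi))
f = lambda z: np.where(z == 1, 1.0, -2.0)

N = 1_000_000
z = (rng.random(N) < p).astype(np.float64)
score = z - p                                    # d/dphi log q(z|x,phi) for a Bernoulli logit

baseline = f(z).mean()                           # a simple input-independent baseline B
g_plain = f(z) * score
g_base = (f(z) - baseline) * score

exact = (f(1) - f(0)) * p * (1 - p)              # exact gradient for this toy problem
print(exact)
print(g_plain.mean(), g_plain.std() / np.sqrt(N))   # unbiased, higher variance
print(g_base.mean(), g_base.std() / np.sqrt(N))     # unbiased, lower variance
```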
1609.02200#61
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
62
Equation 18 of REINFORCE captures much less information about p(x | z, θ) per sample than Equation 3 of the variational autoencoder, which actively makes use of the gradient. In particular, the change of p(x | z, θ) in some direction d can only affect the REINFORCE gradient estimate if a sample is taken with a component in direction d. In a D-dimensional latent space, at least D samples are required to capture the variation of p(x | z, θ) in all directions; fewer samples span a smaller subspace. Since the latent representation commonly consists of dozens of variables, the REINFORCE gradient estimate can be much less efficient than one that makes direct use of the gradient of p(x | z, θ). Moreover, we will show in Section 5 that, when the gradient is calculated efficiently, hundreds of latent variables can be used effectively.
1609.02200#62
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
63
C AUGMENTING DISCRETE LATENT VARIABLES WITH CONTINUOUS LATENT VARIABLES

Intuitively, variational autoencoders break the encoder distribution into “packets” of probability of infinitesimal but equal mass, within which the value of the latent variables is approximately constant. These packets correspond to a region r_i < ρ_i < r_i + δ for all i in Equation 16, and the expectation is taken over these packets. There are more packets in regions of high probability, so high-probability values are more likely to be selected. More rigorously, F_{q(z|x,φ)}(ζ) maps intervals of high probability to larger spans of 0 ≤ ρ ≤ 1, so a randomly selected ρ ∼ U[0, 1] is more likely to be mapped to a high-probability point by F^{-1}_{q(z|x,φ)}(ρ).
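The packet picture can be visualized in one dimension: equally spaced values of ρ (equal-mass packets) mapped through the inverse CDF of an assumed standard Gaussian encoder land densely where the density is high and sparsely in the tails. The choice of distribution is purely illustrative.

```python
import numpy as np
from scipy.stats import norm

# 100 equal-mass "packets": equally spaced rho in (0, 1) mapped through the inverse CDF
# of an assumed 1-D encoder distribution N(0, 1).
rho = np.linspace(0.005, 0.995, 100)
zeta = norm.ppf(rho)                 # packet locations in the latent space

spacing = np.diff(zeta)
print(spacing.min(), spacing.max())  # tightly packed near the mode, sparse in the tails
```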
1609.02200#63
Discrete Variational Autoencoders
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data, and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
http://arxiv.org/pdf/1609.02200
Jason Tyler Rolfe
stat.ML, cs.LG
Published as a conference paper at ICLR 2017
null
stat.ML
20160907
20170422
[ { "id": "1602.08734" }, { "id": "1602.02311" }, { "id": "1511.06499" }, { "id": "1607.05690" }, { "id": "1511.05644" }, { "id": "1509.00519" }, { "id": "1506.04557" } ]
1609.02200
64
As the parameters of the encoder are changed, the location of a packet can move, while its mass is held constant. That is, ζ = F^{-1}_{q(z|x,φ)}(ρ) is a function of φ, whereas the probability mass associated with a region of ρ-space is constant by definition. So long as F^{-1}_{q(z|x,φ)} exists and is differentiable, a small change in φ will correspond to a small change in the location of each packet. This allows us to use the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space. In contrast, REINFORCE (Equation 18) breaks the latent representation into segments of infinitesimal but equal volume; e.g., z_i ≤ z'_i < z_i + δ for all i (Williams, 1992; Mnih & Gregor, 2014; Bengio et al., 2013). The latent variables are also approximately constant within these segments, but the probability mass varies between them. Specifically, the probability mass of the segment z ≤ z' < z + δ is proportional to q(z | x, φ).
Once a segment is selected in the latent space, its location is independent of the encoder and decoder. In particular, the gradient of the loss function does not depend on the gradient of the decoder with respect to position in the latent space, since this position is fixed. Only the probability mass assigned to the segment is relevant.

Although variational autoencoders can make use of the additional gradient information from the decoder, the gradient estimate is only low-variance so long as the motion of most probability packets has a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g., the encoder produces a Gaussian with low variance, or the spike-and-exponential distribution of Section 2.1), or if the movements of far-separated packets have a similar effect on the total loss (e.g., the decoder is roughly linear).
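The variance contrast between the two decompositions can be seen in a one-dimensional toy example (ours, with a Gaussian standing in for the smoothed posterior and $f(\zeta) = \zeta^2$ standing in for the decoder loss). Both estimators are unbiased, but only the packet-based estimator reuses the gradient of $f$, and its variance is much smaller here.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 1.0, 100_000
f = lambda z: z ** 2                      # stand-in for the decoder loss

# Packet (reparameterization) view: zeta = mu + sigma * eps, so d zeta / d mu = 1
# and the gradient of f is reused directly.
eps = rng.standard_normal(n)
zeta = mu + sigma * eps
reparam = 2.0 * zeta * 1.0                # f'(zeta) * d zeta / d mu

# Segment (REINFORCE) view: weight the loss by d log q / d mu = (zeta - mu) / sigma^2.
reinforce = f(zeta) * (zeta - mu) / sigma ** 2

print(reparam.mean(), reinforce.mean())   # both ~ d/dmu E[zeta^2] = 2 * mu = 2
print(reparam.var(), reinforce.var())     # the REINFORCE estimate has much higher variance
```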
Nevertheless, Equation 17 of the VAE can be understood in analogy to dropout (Srivastava et al., 2014) or standout (Ba & Frey, 2013) regularization. Like dropout and standout, $F^{-1}_{q(\zeta|x,\phi)}(\rho)$ is an element-wise stochastic nonlinearity applied to a hidden layer. Since $F^{-1}_{q(\zeta|x,\phi)}(\rho)$ selects a point in the probability distribution, it rarely selects an improbable point. Like standout, the distribution of the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and-Gaussian distribution of Section E.1 and let the standard deviation $\sigma$ go to zero.

However, variational autoencoders cannot be used directly with discrete latent representations, since changing the parameters of a discrete encoder can only move probability mass between the allowed discrete values, which are far apart. If we follow a probability packet as we change the encoder parameters, it either remains in place, or jumps a large distance. As a result, the vast majority of probability packets are unaffected by small changes to the parameters of the encoder. Even if we are lucky enough to select a packet that jumps between the discrete values of the latent representation,
the gradient of the decoder cannot be used to accurately estimate the change in the loss function, since the gradient only captures the effect of very small movements of the probability packet.

$^{13}$Since the approximating posterior $q(z|x,\phi)$ maps each input to a distribution over the latent space, it is sometimes called the encoder. Correspondingly, since the conditional likelihood $p(x|z,\theta)$ maps each configuration of the latent variables to a distribution over the input space, it is called the decoder.

To use discrete latent representations in the variational autoencoder framework, we must first transform to a continuous latent space, within which probability packets move smoothly. That is, we must compute Equation 17 over a different distribution than the original posterior distribution. Surprisingly, we need not sacrifice the original discrete latent space, with its associated approximating posterior. Rather, we extend the encoder $q(z|x,\phi)$ with a transformation to a continuous, auxiliary latent representation $\zeta$, and correspondingly make the decoder a function of this new continuous representation. By extending both the encoder and the prior in the same way, we avoid affecting the remaining KL divergence in Equation 2.$^{14}$
The gradient is defined everywhere if we require that each point in the original latent space map to nonzero probability over the entire auxiliary continuous space. This ensures that, if the probability of some point in the original latent space increases from zero to a nonzero value, no probability packet needs to jump a large distance to cover the resulting new region in the auxiliary continuous space. Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a function of their main argument, and thus are invertible.

If we ignore the cases where some discrete latent variable has probability 0 or 1, we need only require that, for every pair of points in the original latent space, the associated regions of nonzero probability in the auxiliary continuous space overlap. This ensures that probability packets can move continuously as the parameters $\phi$ of the encoder, $q(z|x,\phi)$, change, redistributing weight amongst the associated regions of the auxiliary continuous space.

D ALTERNATIVE TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS

The spike-and-exponential transformation from discrete latent variables $z$ to continuous latent variables $\zeta$ presented in Section 2.1 is by no means the only one possible. Here, we develop a collection of alternative transformations.

D.1 MIXTURE OF RAMPS
As another concrete example, we consider a case where both $r(\zeta_i|z_i=0)$ and $r(\zeta_i|z_i=1)$ are linear functions of $\zeta_i$:

$$r(\zeta_i|z_i=0) = \begin{cases} 2\cdot(1-\zeta_i), & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=0)}(\zeta') = 2\zeta' - \zeta'^2$$

$$r(\zeta_i|z_i=1) = \begin{cases} 2\cdot\zeta_i, & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad\;\; F_{r(\zeta_i|z_i=1)}(\zeta') = \zeta'^2$$

where $F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta)\,d\zeta$ is the CDF of probability distribution $p$ in the domain $[0,1]$. The CDF for $q(\zeta|x,\phi)$ as a function of $q(z=1|x,\phi)$ is:

$$F_{q(\zeta|x,\phi)}(\zeta') = \left(1 - q(z=1|x,\phi)\right)\cdot\left(2\zeta' - \zeta'^2\right) + q(z=1|x,\phi)\cdot\zeta'^2 = 2\cdot q(z=1|x,\phi)\cdot\left(\zeta'^2 - \zeta'\right) + 2\zeta' - \zeta'^2 \tag{19}$$
$^{14}$Rather than extend the encoder and the prior, we cannot simply prepend the transformation to continuous space to the decoder, since this does not change the space of the probability packets.

We can calculate $F^{-1}_{q(\zeta|x,\phi)}$ explicitly, using the substitutions $F_{q(\zeta|x,\phi)} \to \rho$, $q(z=1|x,\phi) \to q$, and $\zeta' \to \zeta$ in Equation 19 to simplify notation:

\begin{align*}
\rho &= 2q\cdot(\zeta^2 - \zeta) + 2\zeta - \zeta^2 \\
0 &= (2q-1)\cdot\zeta^2 + 2(1-q)\cdot\zeta - \rho \\
\zeta &= \frac{2(q-1) \pm \sqrt{4\left(1 - 2q + q^2\right) + 4(2q-1)\rho}}{2(2q-1)} = \frac{(q-1) \pm \sqrt{q^2 + 2(\rho-1)q + (1-\rho)}}{2q-1}
\end{align*}

if $q \neq \frac{1}{2}$; $\rho = \zeta$ otherwise. $F^{-1}_{q(\zeta|x,\phi)}$ has the desired range $[0,1]$ if we choose

$$F^{-1}_{q(\zeta|x,\phi)}(\rho) = \frac{(q-1) + \sqrt{q^2 + 2(\rho-1)q + (1-\rho)}}{2q-1} = \frac{q - 1 + \sqrt{(q-1)^2 + (2q-1)\cdot\rho}}{2q-1} \tag{20}$$
In particular, $F^{-1}(\rho) = 1 - \sqrt{1-\rho}$ if $q = 0$, $F^{-1}(\rho) = \sqrt{\rho}$ if $q = 1$, and $F^{-1}(\rho) = \rho$ if $q = \frac{1}{2}$. We plot $F^{-1}_{q(\zeta|x,\phi)}(\rho)$ as a function of $q$ for various values of $\rho$ in Figure 7.

Figure 7: Inverse CDF of the mixture of ramps transformation, plotted against $q(z=1|x,\phi)$, for $\rho \in \{0.2, 0.5, 0.8\}$.

In Equation 20, $F^{-1}_{q(\zeta|x,\phi)}(\rho)$ is quasi-sigmoidal as a function of $q(z=1|x,\phi)$. If $\rho < 0.5$, $F^{-1}$ is concave-up; if $\rho > 0.5$, $F^{-1}$ is concave-down; if $\rho \approx 0.5$, $F^{-1}$ is sigmoid. In no case is $F^{-1}$ extremely flat, so it does not kill gradients. In contrast, the sigmoid probability of $z$ inevitably flattens.
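A minimal numpy sketch of Equation 20 (the function name and the explicit fallback at $q = \frac{1}{2}$ are our own choices, not the paper's):

```python
import numpy as np

def ramp_inverse_cdf(rho, q, eps=1e-6):
    """Inverse CDF of the mixture-of-ramps smoothing transformation (Eq. 20).

    rho: uniform samples in [0, 1]; q: q(z = 1 | x, phi) in [0, 1].
    Near q = 1/2 the quadratic degenerates to the linear case F^{-1}(rho) = rho,
    so that branch is handled explicitly for numerical stability.
    """
    rho, q = np.asarray(rho, float), np.asarray(q, float)
    linear = np.abs(2.0 * q - 1.0) < eps
    q_safe = np.where(linear, 0.0, q)          # avoid 0/0 in the masked-out branch
    zeta = ((q_safe - 1.0) + np.sqrt((q_safe - 1.0) ** 2 + (2.0 * q_safe - 1.0) * rho)) \
           / np.where(linear, 1.0, 2.0 * q_safe - 1.0)
    return np.where(linear, rho, zeta)

# Sanity checks against the limiting cases quoted above:
rho = np.linspace(0.01, 0.99, 5)
assert np.allclose(ramp_inverse_cdf(rho, 0.0), 1.0 - np.sqrt(1.0 - rho))
assert np.allclose(ramp_inverse_cdf(rho, 1.0), np.sqrt(rho))
assert np.allclose(ramp_inverse_cdf(rho, 0.5), rho)
```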
D.2 SPIKE-AND-SLAB

We can also use the spike-and-slab transformation, which is consistent with sparse coding and proven in other successful generative models (Courville et al., 2011):

$$r(\zeta_i|z_i=0) = \begin{cases} \infty, & \text{if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=0)}(\zeta') = 1$$

$$r(\zeta_i|z_i=1) = \begin{cases} 1, & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=1)}(\zeta') = \zeta'$$

where $F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta)\,d\zeta$ is the cumulative distribution function (CDF) of probability distribution $p$ in the domain $[0,1]$. The CDF for $q(\zeta|x,\phi)$ as a function of $q(z=1|x,\phi)$ is:

$$F_{q(\zeta|x,\phi)}(\zeta') = \left(1 - q(z=1|x,\phi)\right)\cdot F_{r(\zeta|z=0)}(\zeta') + q(z=1|x,\phi)\cdot F_{r(\zeta|z=1)}(\zeta') = q(z=1|x,\phi)\cdot(\zeta' - 1) + 1$$
We can calculate $F^{-1}_{q(\zeta|x,\phi)}$ explicitly, using the substitution $q(z=1|x,\phi) \to q$ to simplify notation:

$$F^{-1}_{q(\zeta|x,\phi)}(\rho) = \begin{cases} \frac{\rho - 1}{q} + 1, & \text{if } \rho \geq 1 - q \\ 0, & \text{otherwise} \end{cases}$$

We plot $F^{-1}_{q(\zeta|x,\phi)}(\rho)$ as a function of $q$ for various values of $\rho$ in Figure 8.

Figure 8: Inverse CDF of the spike-and-slab transformation, plotted against $q(z=1|x,\phi)$, for $\rho \in \{0.2, 0.5, 0.8\}$.
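The corresponding sketch for the spike-and-slab inverse CDF (again our own helper, not code from the paper):

```python
import numpy as np

def spike_and_slab_inverse_cdf(rho, q):
    """Inverse CDF of the spike-and-slab smoothing transformation.

    rho: uniform samples in [0, 1]; q: q(z = 1 | x, phi).
    The delta spike at zeta = 0 absorbs the first 1 - q of the rho mass;
    the remaining mass is spread linearly over the uniform slab on [0, 1].
    """
    rho, q = np.asarray(rho, float), np.asarray(q, float)
    slab = rho >= 1.0 - q
    # Avoid division by zero when q = 0 (the spike then carries all the mass).
    zeta = (rho - 1.0) / np.maximum(q, 1e-12) + 1.0
    return np.where(slab, zeta, 0.0)
```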
D.3 ENGINEERING EFFECTIVE SMOOTHING TRANSFORMATIONS

If the smoothing transformation is not chosen appropriately, the contribution of low-probability regions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse function theorem, we find:

$$\frac{\partial F^{-1}_{q(\zeta_i|x,\phi)}(\rho)}{\partial \phi} = -\left.\frac{\partial F_{q(\zeta_i|x,\phi)}(\zeta_i)}{\partial \phi}\right/ \sum_{z_i} q(z_i|x,\phi)\cdot r(\zeta_i|z_i) \tag{21}$$

where $\zeta_i = F^{-1}_{q(\zeta_i|x,\phi)}(\rho)$. Consider the case where $r(\zeta_i|z_i=0)$ and $r(\zeta_i|z_i=1)$ are unimodal, but have little overlap. For instance, both distributions might be Gaussian, with means that are many standard deviations apart. For values of $\zeta_i$ between the two modes, $F_{q(\zeta_i|x,\phi)}(\zeta_i) \approx q(z_i=0|x,\phi)$, assuming without loss of generality that the mode corresponding to $z_i=0$ occurs at a smaller value of $\zeta_i$ than that corresponding to $z_i=1$. As a result, the numerator of Equation 21 is of order one between the two modes, while the denominator $\sum_{z_i} q(z_i|x,\phi)\cdot r(\zeta_i|z_i) \approx 0$, so the derivative of the inverse CDF becomes enormous. In this case, the stochastic estimates of the gradient in Equation 8, which depend upon $\partial F^{-1}/\partial\phi$, have large variance. These high-variance gradient estimates arise because $r(\zeta_i|z_i=0)$ and $r(\zeta_i|z_i=1)$ are too well separated, and the resulting smoothing transformation is too sharp.
Such disjoint smoothing transformations are analogous to a sigmoid transfer function $\sigma(c \cdot x)$, where $\sigma$ is the logistic function and $c \to \infty$. The smoothing provided by the continuous random variables $\zeta$ is only effective if there is a region of meaningful overlap between $r(\zeta|z=0)$ and $r(\zeta|z=1)$. In particular, $r(\zeta_i|z_i=0) + r(\zeta_i|z_i=1) > 0$ for all $\zeta_i$ between the modes of $r(\zeta_i|z_i=0)$ and $r(\zeta_i|z_i=1)$, so the denominator of Equation 21 remains moderate. In the spike-and-exponential distribution described in Section 2.1, this overlap can be ensured by fixing or bounding $\beta$.
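The effect of insufficient overlap can be checked numerically. The sketch below (ours, not from the paper) uses two Gaussian conditionals with unit separation and shows the denominator of Equation 21 collapsing, and the implied gradient scale exploding, as their shared width shrinks:

```python
import numpy as np

def mixture_density(zeta, q, mu0, mu1, sigma):
    """Denominator of Eq. 21 for a two-Gaussian smoothing transformation:
    r(zeta) = (1 - q) N(zeta; mu0, sigma^2) + q N(zeta; mu1, sigma^2)."""
    norm = lambda x, mu: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return (1 - q) * norm(zeta, mu0) + q * norm(zeta, mu1)

zeta_mid, q = 0.5, 0.5                      # point halfway between the two modes
for sigma in [0.5, 0.1, 0.02]:              # shrinking overlap between r(.|z=0) and r(.|z=1)
    r = mixture_density(zeta_mid, q, mu0=0.0, mu1=1.0, sigma=sigma)
    print(f"sigma={sigma:5.2f}  r(zeta)={r:10.3e}  gradient scale 1/r={1.0/r:10.3e}")
# As sigma shrinks, r(zeta) between the modes collapses and 1/r blows up,
# which is the high-variance regime described above.
```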
E TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS THAT DEPEND UPON THE INPUT

It is not necessary to define the transformation from discrete to continuous latent variables in the approximating posterior, $r(\zeta|z)$, to be independent of the input $x$. In the true posterior distribution, $p(\zeta|z,x) \approx p(\zeta|z)$ only if $z$ already captures most of the information about $x$ and $p(\zeta|z,x)$ changes little as a function of $x$, since

$$p(\zeta|z) = \sum_x p(\zeta, x|z) = \sum_x p(\zeta|z,x)\cdot p(x|z) .$$

This is implausible if the number of discrete latent variables is much smaller than the entropy of the input data distribution. To address this, we can define:

\begin{align*}
q(\zeta, z|x,\phi) &= q(\zeta|z,x,\phi)\cdot q(z|x,\phi) \\
p(\zeta, z|\theta) &= p(\zeta|z)\cdot p(z|\theta)
\end{align*}

This leads to an evidence lower bound that resembles that of Equation 2, but adds an extra term:
\begin{align*}
\mathcal{L}_{VAE}(x,\theta,\phi) &= \log p(x|\theta) - \mathrm{KL}\left[q(z,\zeta|x,\phi)\,\|\,p(z,\zeta|x,\theta)\right] \\
&= \log p(x|\theta) - \mathrm{KL}\left[q(\zeta|z,x,\phi)\cdot q(z|x,\phi)\,\|\,p(\zeta|z,x,\theta)\cdot p(z|x,\theta)\right] \\
&= \sum_z \int_\zeta q(\zeta|z,x,\phi)\cdot q(z|x,\phi)\cdot\log\left[\frac{p(x|\zeta,\theta)\cdot p(\zeta|z,\theta)\cdot p(z|\theta)}{q(\zeta|z,x,\phi)\cdot q(z|x,\phi)}\right] \\
&= \mathbb{E}_{q(\zeta|z,x,\phi)\cdot q(z|x,\phi)}\left[\log p(x|\zeta,\theta)\right] - \mathrm{KL}\left[q(z|x,\phi)\,\|\,p(z|\theta)\right] - \sum_z q(z|x,\phi)\cdot\mathrm{KL}\left[q(\zeta|z,x,\phi)\,\|\,p(\zeta|z)\right] \tag{22}
\end{align*}
The extension to hierarchical approximating posteriors proceeds as in Sections 3 and 4.

If both $q(\zeta|z,x,\phi)$ and $p(\zeta|z)$ are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. However, while the gradients of this KL divergence are easy to calculate when conditioned on $z$, the gradients with respect to $q(z|x,\phi)$ in the new term seem to force us into a REINFORCE-like approach (c.f. Equation 18):

$$\sum_z \frac{\partial q(z|x,\phi)}{\partial\phi}\cdot\mathrm{KL}\left[q(\zeta|z,x,\phi)\,\|\,p(\zeta|z)\right] = \mathbb{E}_{q(z|x,\phi)}\left[\mathrm{KL}\left[q(\zeta|z,x,\phi)\,\|\,p(\zeta|z)\right]\cdot\frac{\partial \log q(z|x,\phi)}{\partial\phi}\right] \tag{23}$$

The reward signal is now $\mathrm{KL}\left[q(\zeta|z,x,\phi)\,\|\,p(\zeta|z,\theta)\right]$, but the effect on the variance is the same, likely negating the advantages of the variational autoencoder in the rest of the loss function.
However, whereas REINFORCE is high-variance because it samples over the expectation, we can perform the expectation in Equation 23 analytically, without injecting any additional variance. Specifically, if $q(z|x,\phi)$ and $q(\zeta|z,x,\phi)$ are factorial, with $q(\zeta_i|z_i,x,\phi)$ only dependent on $z_i$, then $\mathrm{KL}\left[q(\zeta|z,x,\phi)\,\|\,p(\zeta|z)\right]$ decomposes into a sum of the KL divergences over each variable, as does $\frac{\partial \log q(z|x,\phi)}{\partial\phi}$. The expectation of all terms in the resulting product of sums is zero except those of the form $\mathbb{E}\left[\mathrm{KL}\left[q_i\|p_i\right]\cdot\frac{\partial \log q_i}{\partial\phi}\right]$, due to the identity explained in Equation 27. We then use the reparameterization trick to eliminate all hierarchical layers before the current one, and marginalize over each $z_i$. As a result, we can compute the term of Equation 23 by backpropagating $\mathrm{KL}\left[q(\zeta|z=1,x,\phi)\,\|\,p(\zeta|z=1)\right] - \mathrm{KL}\left[q(\zeta|z=0,x,\phi)\,\|\,p(\zeta|z=0)\right]$ into $q(z|x,\phi)$.
This is especially simple if $q(\zeta_i|z_i,x,\phi) = p(\zeta_i|z_i)$ when $z_i = 0$, since then $\mathrm{KL}\left[q(\zeta|z=0,x,\phi)\,\|\,p(\zeta|z=0)\right] = 0$.

E.1 SPIKE-AND-GAUSSIAN

We might wish $q(\zeta_i|z_i,x,\phi)$ to be a separate Gaussian for both values of the binary $z_i$. However, it is difficult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture of a delta spike and a Gaussian, for which the CDF can be inverted piecewise:
\begin{align*}
q(\zeta_i|z_i=0,x,\phi) &= \delta(\zeta_i) & F_{q(\zeta_i|z_i=0,x,\phi)}(\zeta') &= H(\zeta') = \begin{cases} 0, & \text{if } \zeta' < 0 \\ 1, & \text{otherwise} \end{cases} \\
q(\zeta_i|z_i=1,x,\phi) &= \mathcal{N}\!\left(\mu_{q,i}(x,\phi),\,\sigma^2_{q,i}(x,\phi)\right) & F_{q(\zeta_i|z_i=1,x,\phi)}(\zeta') &= \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{\zeta' - \mu_{q,i}(x,\phi)}{\sqrt{2}\,\sigma_{q,i}(x,\phi)}\right)\right]
\end{align*}

where $\mu_q(x,\phi)$ and $\sigma_q(x,\phi)$ are functions of $x$ and $\phi$. We use the substitution $q(z_i=1|x,\phi) \to q_i$; $\mu_{q,i}(x,\phi)$ and $\sigma_{q,i}(x,\phi)$ are abbreviated similarly. We can now find the CDF for $q(\zeta|x,\phi)$ as a function of $q(z=1|x,\phi)$:
$$F_{q(\zeta|x,\phi)}(\zeta_i) = (1 - q_i)\cdot H(\zeta_i) + \frac{q_i}{2}\left[1 + \mathrm{erf}\!\left(\frac{\zeta_i - \mu_{q,i}}{\sqrt{2}\,\sigma_{q,i}}\right)\right]$$

Since $z_i = 0$ makes no contribution to the CDF until $\zeta_i = 0$, the value of $\rho$ at which $\zeta_i = 0$ is

$$\rho^{\mathrm{step}}_i = \frac{q_i}{2}\left[1 + \mathrm{erf}\!\left(\frac{-\mu_{q,i}}{\sqrt{2}\,\sigma_{q,i}}\right)\right] ,$$

so:

$$\zeta_i = \begin{cases}
\mu_{q,i} + \sqrt{2}\,\sigma_{q,i}\cdot\mathrm{erf}^{-1}\!\left(\frac{2\rho_i}{q_i} - 1\right), & \text{if } \rho_i < \rho^{\mathrm{step}}_i \\
0, & \text{if } \rho^{\mathrm{step}}_i \leq \rho_i \leq \rho^{\mathrm{step}}_i + (1 - q_i) \\
\mu_{q,i} + \sqrt{2}\,\sigma_{q,i}\cdot\mathrm{erf}^{-1}\!\left(\frac{2(\rho_i - 1)}{q_i} + 1\right), & \text{otherwise}
\end{cases}$$

Gradients are always evaluated for fixed choices of $\rho$, and gradients are never taken with respect to $\rho$. As a result, expectations with respect to $\rho$ are invariant to permutations of $\rho$. Furthermore,

$$\frac{2\rho_i}{q_i} - 1 = \frac{2(\rho'_i - 1)}{q_i} + 1, \qquad\text{where } \rho'_i = \rho_i + (1 - q_i) .$$

We can thus shift the delta spike to the beginning of the range of $\rho_i$, and use:

$$\zeta_i = \begin{cases}
0, & \text{if } \rho_i < 1 - q_i \\
\mu_{q,i} + \sqrt{2}\,\sigma_{q,i}\cdot\mathrm{erf}^{-1}\!\left(\frac{2(\rho_i - 1)}{q_i} + 1\right), & \text{otherwise}
\end{cases}$$
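A numpy sketch of this final piecewise inverse CDF (our own helper; the clipping constant is purely for numerical safety and is not part of the derivation):

```python
import numpy as np
from scipy.special import erfinv

def spike_and_gaussian_inverse_cdf(rho, q, mu, sigma):
    """Inverse CDF of the spike-and-Gaussian transformation, with the delta
    spike shifted to the beginning of the rho range as described above.

    rho: uniform samples in [0, 1]; q = q(z_i = 1 | x, phi);
    mu, sigma: mean and std of q(zeta_i | z_i = 1, x, phi).
    """
    rho, q = np.asarray(rho, float), np.asarray(q, float)
    spike = rho < 1.0 - q
    # On the Gaussian branch the argument of erfinv lies in (-1, 1); clip for safety.
    arg = np.clip(2.0 * (rho - 1.0) / np.maximum(q, 1e-12) + 1.0,
                  -1.0 + 1e-12, 1.0 - 1e-12)
    gaussian_branch = mu + np.sqrt(2.0) * sigma * erfinv(arg)
    return np.where(spike, 0.0, gaussian_branch)
```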
All parameters of the multivariate Gaussians should be trainable functions of $x$, and independent of $q$. The new term in Equation 22 is:

$$\sum_z q(z|x,\phi)\cdot\mathrm{KL}\left[q(\zeta|z,x,\phi)\,\|\,p(\zeta|z)\right] = \sum_i \Big( q(z_i=1|x,\phi)\cdot\mathrm{KL}\left[q(\zeta_i|z_i=1,x,\phi)\,\|\,p(\zeta_i|z_i=1)\right] + \left(1 - q(z_i=1|x,\phi)\right)\cdot\mathrm{KL}\left[q(\zeta_i|z_i=0,x,\phi)\,\|\,p(\zeta_i|z_i=0)\right] \Big)$$
If $z_i = 0$, then $q(\zeta_i|z_i=0,x,\phi) = p(\zeta_i|z_i=0)$, so $\mathrm{KL}\left[q(\zeta_i|z_i=0,x,\phi)\,\|\,p(\zeta_i|z_i=0)\right] = 0$ as in Section 2. The KL divergence between two multivariate Gaussians with diagonal covariance matrices, with means $\mu_{p,i}$, $\mu_{q,i}$ and covariances $\sigma^2_{p,i}$, $\sigma^2_{q,i}$, is

$$\mathrm{KL}\left[q\,\|\,p\right] = \sum_i\left[\log\frac{\sigma_{p,i}}{\sigma_{q,i}} + \frac{\sigma^2_{q,i} + \left(\mu_{q,i} - \mu_{p,i}\right)^2}{2\,\sigma^2_{p,i}} - \frac{1}{2}\right] .$$

To train $q(z_i=1|x,\phi)$, we thus need to backpropagate $\mathrm{KL}\left[q(\zeta_i|z_i=1,x,\phi)\,\|\,p(\zeta_i|z_i=1)\right]$ into it. Finally,

$$\frac{\partial\,\mathrm{KL}\left[q\|p\right]}{\partial\mu_{q,i}} = \frac{\mu_{q,i} - \mu_{p,i}}{\sigma^2_{p,i}} \qquad\qquad \frac{\partial\,\mathrm{KL}\left[q\|p\right]}{\partial\sigma_{q,i}} = -\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma^2_{p,i}} ,$$

so
$$\sum_z q(z|x,\phi)\cdot\frac{\partial}{\partial\mu_{q,i}}\mathrm{KL}\left[q\|p\right] = q(z_i=1|x,\phi)\cdot\frac{\mu_{q,i} - \mu_{p,i}}{\sigma^2_{p,i}}$$

$$\sum_z q(z|x,\phi)\cdot\frac{\partial}{\partial\sigma_{q,i}}\mathrm{KL}\left[q\|p\right] = q(z_i=1|x,\phi)\cdot\left(-\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma^2_{p,i}}\right)$$

For $p$, it is not useful to make the mean values of $\zeta$ adjustable for each value of $z$, since this is redundant with the parameterization of the decoder. With fixed means, we could still parameterize the variance, but to maintain correspondence with the standard VAE, we choose the variance to be one.
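For reference, a small numpy helper (ours) implementing the closed-form KL divergence and the two gradients quoted above; the weighting by $q(z_i=1|x,\phi)$ in the displays above is left to the caller:

```python
import numpy as np

def gaussian_kl_and_grads(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0):
    """KL[N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2)] for diagonal Gaussians,
    together with its gradients with respect to mu_q and sigma_q
    (the closed forms quoted above)."""
    mu_q, sigma_q = np.asarray(mu_q, float), np.asarray(sigma_q, float)
    kl = (np.log(sigma_p / sigma_q)
          + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
          - 0.5)
    d_mu = (mu_q - mu_p) / sigma_p ** 2
    d_sigma = -1.0 / sigma_q + sigma_q / sigma_p ** 2
    return kl.sum(), d_mu, d_sigma

# With mu_p = 0 and sigma_p = 1 (the fixed prior discussed above), a matching
# posterior gives zero KL and vanishing gradients:
kl, d_mu, d_sigma = gaussian_kl_and_grads(np.zeros(4), np.ones(4))
assert np.isclose(kl, 0.0) and np.allclose(d_mu, 0.0) and np.allclose(d_sigma, 0.0)
```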
F COMPUTING THE GRADIENT OF KL$\left[q(\zeta,z|x,\phi)\,\|\,p(\zeta,z|\theta)\right]$

The KL term of the ELBO (Equation 2) is not significantly affected by the introduction of additional continuous latent variables $\zeta$, so long as we use the same expansion $r(\zeta|z)$ for both the approximating posterior and the prior:

$$\mathrm{KL}\left[q\,\|\,p\right] = \int_\zeta\sum_z\Big(\prod_{1\leq j\leq k} r(\zeta_j|z_j)\cdot q(z_j|\zeta_{i<j},x)\Big)\cdot\log\left[\frac{\prod_{1\leq j\leq k} r(\zeta_j|z_j)\cdot q(z_j|\zeta_{i<j},x)}{p(z)\cdot\prod_{1\leq j\leq k} r(\zeta_j|z_j)}\right] = \int_\zeta\sum_z\Big(\prod_{1\leq j\leq k} r(\zeta_j|z_j)\cdot q(z_j|\zeta_{i<j},x)\Big)\cdot\log\left[\frac{\prod_{1\leq j\leq k} q(z_j|\zeta_{i<j},x)}{p(z)}\right] \tag{24}$$

The gradient of Equation 24 with respect to the parameters $\theta$ of the prior, $p(z|\theta)$, can be estimated stochastically using samples from the approximating posterior, $q(\zeta,z|x,\phi)$, and the prior, $p(z|\theta)$:

$$\frac{\partial}{\partial\theta}\mathrm{KL}\left[q\,\|\,p\right] = \sum_z q(z|x,\phi)\cdot\frac{\partial E_p(z,\theta)}{\partial\theta} - \sum_z p(z|\theta)\cdot\frac{\partial E_p(z,\theta)}{\partial\theta} = \mathbb{E}_{q(\zeta,z|x,\phi)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right] - \mathbb{E}_{p(z|\theta)}\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right] \tag{25}$$
The final expectation with respect to $q(z_k|\zeta_{i<k},x,\phi)$ can be performed analytically; all other expectations require samples from the approximating posterior. Similarly, for the prior, we must sample from the RBM, although Rao-Blackwellization can be used to marginalize half of the units.
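A rough sketch (ours) of the resulting two-phase estimate of Equation 25, under an assumed RBM energy $E_p(z,\theta) = -b^\top z - z^\top W z$ (the paper's exact sign conventions may differ); producing the prior samples themselves, e.g. by Gibbs sampling from the RBM, is outside the sketch:

```python
import numpy as np

def kl_grad_wrt_prior(z_posterior, z_prior):
    """Stochastic estimate of Eq. 25 for an RBM prior with the assumed energy
    E_p(z) = -b^T z - z^T W z, so that dE/dW = -z z^T and dE/db = -z.
    The estimate is E_q[dE/dtheta] - E_p[dE/dtheta]."""
    z_q = np.asarray(z_posterior, float)   # samples z ~ q(z|x, phi), shape (N, D)
    z_p = np.asarray(z_prior, float)       # samples z ~ p(z|theta),  shape (M, D)
    grad_W = -(z_q[:, :, None] * z_q[:, None, :]).mean(0) \
             + (z_p[:, :, None] * z_p[:, None, :]).mean(0)
    grad_b = -z_q.mean(0) + z_p.mean(0)
    return grad_W, grad_b
```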
F.1 GRADIENT OF THE ENTROPY WITH RESPECT TO φ

In contrast, the gradient of the KL term with respect to the parameters of the approximating posterior is severely complicated by a nonfactorial approximating posterior. We break $\mathrm{KL}\left[q\|p\right]$ into two terms, the negative entropy $\sum_{\zeta,z} q\log q$ and the cross-entropy $-\sum_{\zeta,z} q\log p$, and compute their gradients separately.

We can regroup the negative entropy term of the KL divergence so as to use the reparameterization trick to backpropagate through $\prod_j q(z_j|\zeta_{i<j},x)$:

\begin{align*}
-H(q) &= \int_\zeta\sum_z\Big(\prod_{1\leq j\leq k} r(\zeta_j|z_j)\cdot q(z_j|\zeta_{i<j},x)\Big)\cdot\log\Big[\prod_{1\leq j\leq k} q(z_j|\zeta_{i<j},x)\Big] \\
&= \sum_j\int_\zeta\sum_z\Big(\prod_{1\leq j'\leq k} r(\zeta_{j'}|z_{j'})\cdot q(z_{j'}|\zeta_{i<j'},x)\Big)\cdot\log q(z_j|\zeta_{i<j},x) \\
&= \sum_j\mathbb{E}_{\rho_{i<j}}\Big[\sum_{z_j} q(z_j|\rho_{i<j},x)\cdot\log q(z_j|\rho_{i<j},x)\Big] \tag{26}
\end{align*}

where indices $i$ and $j$ denote hierarchical groups of variables. The probability $q(z_j|\rho_{i<j},x)$ is evaluated analytically, whereas all variables $z_{i<j}$ and $\zeta_{i<j}$ are implicitly sampled stochastically via $\rho_{i<j}$. We wish to take the gradient of $-H(q)$ in Equation 26. Using the identity

$$\mathbb{E}_{q(z|x,\phi)}\left[c\cdot\frac{\partial\log q(z|x,\phi)}{\partial\phi}\right] = \sum_z c\cdot\frac{\partial q(z|x,\phi)}{\partial\phi} = c\cdot\frac{\partial}{\partial\phi}\sum_z q(z|x,\phi) = c\cdot\frac{\partial 1}{\partial\phi} = 0 \tag{27}$$

for any constant $c$, we can eliminate the gradient of $\log q(z_j|\rho_{i<j},x)$ in $\frac{\partial H(q)}{\partial\phi}$, and obtain:
$$\frac{\partial H(q)}{\partial\phi} = -\sum_j\mathbb{E}_{\rho_{i<j}}\left[\sum_{z_j}\frac{\partial q(z_j|\rho_{i<j},x)}{\partial\phi}\cdot\log q(z_j|\rho_{i<j},x)\right]$$

Moreover, we can eliminate any log-partition function in $\log q(z_j|\rho_{i<j},x)$ due to Equation 27.$^{15}$ By repeating this argument one more time, we can break $\frac{\partial q(z_j|\rho_{i<j},x)}{\partial\phi}$ into factorial components.$^{16}$ If $z_\iota\in\{0,1\}$, the gradient of the entropy reduces to:

$$\frac{\partial H(q)}{\partial\phi} = -\sum_j\mathbb{E}_{\rho_{i<j}}\left[\sum_{z_j}\frac{\partial q(z_j|\rho_{i<j},x)}{\partial\phi}\cdot\sum_\iota g_\iota\, z_\iota\right] = -\sum_j\mathbb{E}_{\rho_{i<j}}\left[\sum_\iota g_\iota\cdot\frac{\partial q(z_\iota=1|\rho_{i<j},x)}{\partial\phi}\right]$$

where $\iota$ and $z_\iota$ correspond to single variables within the hierarchical groups denoted by $j$, and $g_\iota$ is the logit governing $q(z_\iota=1|\rho_{i<j},x,\phi)$. In TensorFlow, it might be simpler to write:

$$\frac{\partial H(q)}{\partial\phi} = -\sum_j\mathbb{E}_{\rho}\left[g_j^\top\cdot\frac{\partial q_j(z_j=1)}{\partial\phi}\right] .$$
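As a purely illustrative rendering of Equation 26, the sketch below (ours) estimates $-H(q)$ for a two-group hierarchy of Bernoulli units: the inner sum over each group is analytic, while the first group's $\zeta$, on which the second group's logits depend, is sampled via $\rho$. All function names, the two-group restriction, and the choice of the spike-and-slab inverse CDF are assumptions of the sketch; written with framework ops in a library such as TensorFlow, the same computation would supply $\partial H(q)/\partial\phi$ by automatic differentiation.

```python
import numpy as np

def sigmoid(g):
    return 1.0 / (1.0 + np.exp(-g))

def bernoulli_neg_entropy(q1):
    """sum_z q log q for a factorial Bernoulli distribution with P(z=1) = q1."""
    q1 = np.clip(q1, 1e-7, 1.0 - 1e-7)
    return np.sum(q1 * np.log(q1) + (1.0 - q1) * np.log(1.0 - q1))

def neg_entropy_mc(g1, logits2_fn, rho, inv_cdf):
    """Monte-Carlo estimate of -H(q), organized as in Eq. 26, for a two-group
    hierarchy of Bernoulli units.  g1: logits of the first group;
    logits2_fn: maps the first group's zeta to the second group's logits;
    rho: uniform noise of shape (num_samples, len(g1));
    inv_cdf(rho, q): a smoothing transformation, e.g. the spike-and-slab
    helper sketched earlier."""
    q1 = sigmoid(g1)
    term1 = bernoulli_neg_entropy(q1)          # first group: no rho dependence
    term2 = np.mean([bernoulli_neg_entropy(sigmoid(logits2_fn(inv_cdf(r, q1))))
                     for r in rho])            # second group: expectation over rho
    return term1 + term2
```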
= =c $s >. G@=d: Su ce: = ZS 3 1: ba: TLjvi G = 0. z q = 0, where c is the log partition function of q(zj|ρi<j, x). PS ce: = =c $s >. 4 = 0, where c is the log partition function of q(z;|pi<j, ©). # oa The 16 ZS 3 1: G@=d: Su Ili q;, so the q;4; marginalize out of oa The qj When multiplied by log qi. When 1 ba: TLjvi qj is multiplied by one of the log q;4:, the sum over z; can be taken inside the coe and again Bp oe, G = 0. 24 (26) Published as a conference paper at ICLR 2017 # F.2 GRADIENT OF THE CROSS-ENTROPY The gradient of the cross-entropy with respect to the parameters φ of the approximating posterior does not depend on the partition function of the prior # Zp, since: ∂ log q ∂φ (6) (6) (6) 0 ~ 9g 2218 Du gt Bet 359 8% dat # Ep by Equations 6 and 27, so we are left with the gradient of the average energy Ep.
The linear term of the energy contributes $\mathbb{E}_\rho\left[b^\top z\right] = b^\top\mathbb{E}_\rho\left[z\right]$. The approximating posterior $q$ is continuous, with nonzero derivative, so the reparameterization trick can be applied to backpropagate gradients:

$$\frac{\partial}{\partial\phi}\,\mathbb{E}_\rho\left[b^\top z\right] = b^\top\,\mathbb{E}_\rho\left[\frac{\partial z}{\partial\phi}\right] .$$

In contrast, each element of the sum $\sum_{ij} W_{ij} z_i z_j$ depends upon variables that are not usually in the same hierarchical level, so in general $\mathbb{E}_\rho\left[W_{ij} z_i z_j\right] \neq W_{ij}\,\mathbb{E}_\rho\left[z_i\right]\cdot\mathbb{E}_\rho\left[z_j\right]$. We might decompose this term into $\mathbb{E}_\rho\left[W_{ij} z_i z_j\right] = W_{ij}\cdot\mathbb{E}_{\rho_{k\leq i}}\left[z_i\cdot\mathbb{E}_{\rho_{k>i}}\left[z_j\right]\right]$, where without loss of generality $z_i$ is in an earlier hierarchical layer than $z_j$; however, it is not clear how to take the derivative of $z_i$, since it is a discontinuous function of $\rho_{k\leq i}$.
# F.3 NAIVE APPROACH

The naive approach would be to take the gradient of the expectation using the gradient of log-probabilities over all variables:

∂/∂φ Eq[Wij zi zj] = Eq[Wij zi zj · ∂ log q/∂φ]
= Eq[Wij zi zj · Σk ∂ log qk|l<k/∂φ]   (28)
= Eq[Wij zi zj · Σk (1/qk|l<k) · ∂qk|l<k/∂φ].

For ∂qk|l<k/∂φ, we can drop out terms involving only zi<k and zj<k that occur hierarchically before k, since those terms can be pulled out of the expectation over qk, and we can apply Equation 27. However, for terms involving zi>k or zj>k that occur hierarchically after k, the expected value of zi or zj depends upon the chosen value of zk.

The gradient calculation in Equation 28 is an instance of the REINFORCE algorithm (Equation 18). Moreover, the variance of the estimate is proportional to the number of terms (to the extent that the terms are independent). The number of terms contributing to each gradient grows quadratically with the number of units in the RBM.
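For concreteness, the sketch below (not the paper's code) forms the naive score-function estimate for a single coupling under an assumed factorial Bernoulli posterior qi = σ(φi); the estimator is unbiased, but each sample is noisy, and in the full model one such term appears for every pair of connected RBM units.

```python
# Naive score-function (REINFORCE-style) estimate of d/dphi E_q[W_ij z_i z_j],
# assuming a factorial Bernoulli posterior q_i = sigmoid(phi_i).
import numpy as np

rng = np.random.default_rng(0)
W, phi = 1.5, np.array([0.3, -0.8])          # two units, one coupling W_01
q = 1.0 / (1.0 + np.exp(-phi))

n = 100000
z = (rng.random((n, 2)) < q).astype(float)
score = z - q                                 # d log q(z) / d phi for each unit
est = W * z[:, 0:1] * z[:, 1:2] * score       # per-sample REINFORCE estimate

true_grad = W * np.array([q[1] * q[0] * (1 - q[0]),
                          q[0] * q[1] * (1 - q[1])])
print(est.mean(axis=0), true_grad)            # the means agree (unbiased)
print(est.std(axis=0))                        # but each sample is high-variance
```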
We can introduce a baseline, as in NVIL (Mnih & Gregor, 2014):

∂/∂φ Eq[Wij zi zj] = Eq[(Wij zi zj − c(x)) · ∂ log q/∂φ],

but this approximation is still high-variance.

# F.4 DECOMPOSITION OF ∂/∂φ Wij zi zj VIA THE CHAIN RULE

When using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of Sections 2.1, D.2, and E.1, we can decompose the gradient of E[Wij zi zj] using the chain rule. Previously, we have considered z to be a function of ρ and φ. We can instead formulate z as a function of q(z = 1) and ρ, where q(z = 1) is itself a function of ρ and φ. Specifically,

zj(qj, ρj) = 0 if ρj < 1 − qj(zj = 1) = qj(zj = 0), and zj(qj, ρj) = 1 otherwise.   (29)
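A minimal sketch of the step function in Equation 29 (illustrative only; the uniform noise ρ and the Bernoulli marginal it induces follow the construction above):

```python
# Equation 29 as code: z is a deterministic step function of the posterior
# probability q(z=1) and a uniform noise variable rho.
import numpy as np

def z_of_q_rho(q1, rho):
    """z = 0 if rho < 1 - q(z=1), else z = 1."""
    return (rho >= 1.0 - q1).astype(float)

rng = np.random.default_rng(0)
q1 = 0.3
rho = rng.random(100000)
print(z_of_q_rho(q1, rho).mean())   # ~ 0.3, i.e. z ~ Bernoulli(q(z=1))
```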
Using the chain rule, we differentiate through ∂qj(zj = 1)/∂φ with the other qk≠j held fixed, even though they all depend on the common variables ρ and parameters φ. We use the chain rule to differentiate with respect to q(z = 1) since it allows us to pull part of the integral over ρ inside the derivative with respect to φ. In the sequel, we sometimes write q in place of q(z = 1) to minimize notational clutter. Expanding the desired gradient using the reparameterization trick and the chain rule, we find:

∂/∂φ Eq[Wij zi zj] = ∂/∂φ Eρ[Wij zi zj]
= Eρ[Σk (∂(Wij zi zj)/∂qk(zk = 1)) · (∂qk(zk = 1)/∂φ)].   (30)
We can change the order of integration (via the expectation) and differentiation since |Wij zi zj| ≤ |Wij| < ∞ for all ρ and bounded φ (Cheng, 2006). Although z(q, ρ) is a step function, and its derivative is a delta function, the integral (corresponding to the expectation with respect to ρ) of its derivative is finite. Rather than dealing with generalized functions directly, we apply the definition of the derivative, and push through the matching integral to recover a finite quantity.

For simplicity, we pull the sum over k out of the expectation in Equation 30, and consider each summand independently. From Equation 29, we see that zi is only a function of qi, so all terms in the sum over k in Equation 30 vanish except k = i and k = j. Without loss of generality, we consider the term k = i; the term k = j is symmetric. Applying the definition of the gradient to one of the summands, and then analytically taking the expectation with respect to ρi, we obtain:
Eρ[(∂(Wij zi(q, ρ) zj(q, ρ))/∂qi(zi = 1)) · ∂qi(zi = 1)/∂φ]
= Eρ[lim δqi→0 (Wij zi(q + δqi, ρ) zj(q + δqi, ρ) − Wij zi(q, ρ) zj(q, ρ)) / δqi(zi = 1) · ∂qi(zi = 1)/∂φ]
= Eρk≠i[lim δqi→0 (δqi/δqi) · (Wij · 1 · zj(q, ρ) − Wij · 0 · zj(q, ρ)) · ∂qi(zi = 1)/∂φ, evaluated at ρi = qi(zi = 0)]
= Eρk≠i[Wij · zj(q, ρ) · ∂qi(zi = 1)/∂φ, evaluated at ρi = qi(zi = 0)].

The third line follows from Equation 29, since zi(q + δqi, ρ) differs from zi(q, ρ) only in the region of ρ of size δqi around qi(zi = 0) = 1 − qi(zi = 1), where zi(q + δqi, ρ) = 1 ≠ zi(q, ρ) = 0. Regardless of the choice of ρ, zj(q + δqi, ρ) = zj(q, ρ).
The third line fixes ρi to the transition between zi = 0 and zi = 1 at qi(zi = 0). Since zi = 0 implies ζi = 0,17 and ζ is a continuous function of ρ, the third line implies that ζi = 0. At the same time, since qi is only a function of ρk<i from earlier in the hierarchy, the term ∂qi/∂φ is not affected by the choice of ρi.18 As noted above, due to the chain rule, the perturbation δqi has no effect on the other qj by definition; the gradient is evaluated with those values held constant. On the other hand, ∂qi/∂φ is generally nonzero for all parameters governing hierarchical levels k < i.

17 We chose the conditional distribution r(ζi | zi = 0) to be a delta spike at zero.
18 In contrast, zi is a function of ρi.
Since ρi is fixed such that ζi = 0, all units further down the hierarchy must be sampled consistent with this restriction. A sample from ρ has ζi = 0 if zi = 0, which occurs with probability qi(zi = 0).19 We can compute the gradient with a stochastic approximation by multiplying each sample by 1 − zi, so that terms with ζi ≠ 0 are ignored,20 and scaling up the gradient by 1/qi(zi = 0) when zi = 0:

∂/∂φ E[Wij zi zj] = Eρ[Wij · zj · ((1 − zi)/(1 − qi(zi = 1))) · ∂qi(zi = 1)/∂φ].   (31)

The term (1 − zi)/(1 − qi(zi = 1)) is not necessary if j comes before i in the hierarchy.

19 It might also be the case that ζi = 0 when zi = 1, but with our choice of r(ζ | z), this has vanishingly small probability.
20 This takes advantage of the fact that zi ∈ {0, 1}.
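The sketch below (not the paper's code) forms the Monte Carlo estimate of Equation 31 for a single coupling, assuming for simplicity a factorial Bernoulli posterior with qi = σ(φi), so that ∂qi(zi = 1)/∂φi = qi(1 − qi) is available in closed form; the estimate matches the exact gradient Wij qj qi(1 − qi).

```python
# Importance-weighted estimator of Equation 31 for a single coupling W_ij,
# assuming a factorial Bernoulli posterior q_i = sigmoid(phi_i).
import numpy as np

rng = np.random.default_rng(1)
W, phi = 1.5, np.array([0.3, -0.8])
q = 1.0 / (1.0 + np.exp(-phi))
dq_dphi = q * (1.0 - q)

n = 100000
rho = rng.random((n, 2))
z = (rho >= 1.0 - q).astype(float)            # Equation 29

# Per-sample coefficient multiplying dq_i(z_i=1)/dphi in Equation 31,
# for the gradient with respect to phi_0 (i = 0, j = 1):
coeff = W * z[:, 1] * (1.0 - z[:, 0]) / (1.0 - q[0])
grad_est = coeff.mean() * dq_dphi[0]

true_grad = W * q[1] * dq_dphi[0]
print(grad_est, true_grad)                    # close agreement
```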
While Equation 31 appears similar to REINFORCE, it is better understood as an importance-weighted estimate of an efficient gradient calculation. Just as a ReLU only has a nonzero gradient in the linear regime, ∂zi/∂φ effectively only has a nonzero gradient when zi = 0, in which case ∂zi/∂φ ∼ ∂qi(zi = 1)/∂φ. Unlike in REINFORCE, we do effectively differentiate the reward, Wij zi zj. Moreover, the number of terms contributing to each gradient ∂qi(zi = 1)/∂φ grows only linearly with the number of units in an RBM, whereas it grows quadratically in the method of Section F.3.
# G MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR HIERARCHIES IN THE SAME ORDER

Intuition regarding the difficulty of approximating the posterior distribution over the latent variables given the data can be developed by considering sparse coding, an approach that uses a basis set of spatially localized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are generally many basis elements similar to any selected basis element. However, the sparsity prior pushes the posterior distribution to use only one amongst each set of similar basis elements. As a result, there is a large set of sparse representations of roughly equivalent quality for any single input. Each basis element individually can be replaced with a similar basis element. However, having changed one basis element, the optimal choice for the adjacent elements also changes so the filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated, since even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements.
These equivalent representations can easily be disambiguated by the successive layers of the representation. In the simplest case, the previous layer could directly specify which correlated set of basis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by inferring the approximating posterior over the top-most latent layer first. Only then do we compute the conditional approximating posteriors of lower layers given a sample from the approximating posterior of the higher layers, breaking the symmetry between representations of similar quality.

# H ARCHITECTURE

The stochastic approximation to the ELBO is computed via one pass down the approximating posterior (Figure 4a), sampling from each continuous latent layer ζi and zm>1 in turn; and another pass down the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass down the prior, signals do not flow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation 14. The gradient is computed by backpropagating the reconstruction log-likelihood, and the KL divergence between the approximating posterior and true prior at each layer, through this differentiable structure.
All hyperparameters were tuned via manual experimentation. Except in Figure 6, RBMs have 128 units (64 units per side, with full bipartite connections between the two sides), with 4 layers of hierarchy in the approximating posterior. We use 100 iterations of block Gibbs sampling, with 20 persistent chains per element of the minibatch, to sample from the prior in the stochastic approximation to Equation 11.
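A minimal sketch of the persistent block-Gibbs chains used to sample the RBM prior (the sizes follow the text above; the random couplings and the NumPy implementation are assumptions for illustration, not the paper's code):

```python
# Persistent block-Gibbs sampling from a bipartite Boltzmann machine (RBM):
# 20 persistent chains, 100 block-Gibbs iterations per parameter update.
import numpy as np

rng = np.random.default_rng(0)
n_side, n_chains, n_gibbs = 64, 20, 100

W = 0.01 * rng.standard_normal((n_side, n_side))   # bipartite couplings
b_left, b_right = np.zeros(n_side), np.zeros(n_side)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Persistent state of the negative-phase chains (kept across parameter updates).
left = (rng.random((n_chains, n_side)) < 0.5).astype(float)
right = (rng.random((n_chains, n_side)) < 0.5).astype(float)

def block_gibbs_step(left, right):
    # All units on one side are conditionally independent given the other side.
    right = (rng.random(right.shape) < sigmoid(left @ W + b_right)).astype(float)
    left = (rng.random(left.shape) < sigmoid(right @ W.T + b_left)).astype(float)
    return left, right

for _ in range(n_gibbs):
    left, right = block_gibbs_step(left, right)

# left/right now approximate samples from the RBM prior; e.g. the negative-phase
# statistic for the couplings is left.T @ right / n_chains.
print((left.T @ right / n_chains).shape)
```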
When using the hierarchy of continuous latent variables described in Section 4, discrete VAEs overfit if any component of the prior is overparameterized, as shown in Figure 9a. In contrast, a larger and more powerful approximating posterior generally did not reduce performance within the range examined, as in Figure 9b. In response, we manually tuned the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. We list the selected values in Table 2. All neural networks implementing components of the approximating posterior contain two hidden layers of 2000 units.

Figure 9: Log likelihood on statically binarized MNIST versus the number of hidden units per neural network layer, in the prior (a) and approximating posterior (b). The number of deterministic hidden layers in the networks parameterizing the prior/approximating posterior is 1 (blue), 2 (red), or 3 (green) in (a/b), respectively. The number of deterministic hidden layers in the final network parameterizing p(x | z) is 0 (solid) or 1 (dashed). All models use only 10 layers of continuous latent variables, with no parameter sharing.
| Dataset | Num layers | Vars per layer | Hids per prior layer | Param sharing |
|---|---|---|---|---|
| MNIST (dyn bin) | 18 | 64 | 1000 | none |
| MNIST (static bin) | 20 | 256 | 2000 | 2 groups |
| Omniglot | 16 | 256 | 800 | 2 groups |
| Caltech-101 Sil | 12 | 80 | 100 | complete |

Table 2: Architectural hyperparameters used for each dataset. Successive columns list the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior.

Smaller datasets require more regularization, and achieve optimal performance with a smaller prior. On statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using recurrent parameter sharing. In the simplest case, each p(zm | zl<m, θ) and p(x | z, θ) is a function of Σl<m zl, rather than a function of the concatenation [z0, z1, ..., zm−1]. Moreover, all p(zm>1 | zl<m, θ) share parameters. The RBM layer z0 is rendered compatible with this parameterization by using a trainable linear transformation of ζ, M · ζ, where the number of rows in M is
equal to the number of variables in each zm>0. We refer to this architecture as complete recurrent parameter sharing. On datasets of intermediate size, a degree of recurrent parameter sharing somewhere between full independence and complete sharing is beneficial. We define the n-group architecture by dividing the continuous latent layers zm≥1 into n equally sized groups of consecutive layers. Each such group is independently subject to recurrent parameter sharing analogous to the complete sharing architecture, and the RBM layer z0 is independently parameterized.

We use the spike-and-exponential transformation described in Section 2.1. The exponent is a trainable parameter, but it is bounded above by a value that increases linearly with the number of training epochs. We use warm-up with strength 20 for 5 epochs, and additional warm-up of strength 2 on the RBM alone for 20 epochs (Raiko et al., 2007; Bowman et al., 2016; Sønderby et al., 2016).
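The following sketch illustrates complete recurrent parameter sharing (an assumed toy implementation: Gaussian conditionals, a single shared two-layer network, and arbitrarily chosen sizes; none of these details are taken from the paper's code):

```python
# Complete recurrent parameter sharing in the prior: every hierarchical layer
# m >= 1 is parameterized by the same network, applied to the running sum of
# the latent variables from all earlier layers.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_vars, n_hid = 18, 64, 1000

# One shared set of prior parameters for all layers m >= 1.
W1 = 0.01 * rng.standard_normal((n_vars, n_hid))
W2 = 0.01 * rng.standard_normal((n_hid, 2 * n_vars))   # mean and log-variance

def shared_prior_layer(summed_z):
    h = np.tanh(summed_z @ W1)
    mean, log_var = np.split(h @ W2, 2, axis=-1)
    return mean, log_var

zeta = rng.standard_normal(n_vars)        # stand-in for M @ zeta from the RBM layer
running_sum = zeta.copy()
samples = []
for m in range(1, n_layers):
    mean, log_var = shared_prior_layer(running_sum)
    z_m = mean + np.exp(0.5 * log_var) * rng.standard_normal(n_vars)
    samples.append(z_m)
    running_sum += z_m                     # next layer sees the sum of all z_{l<m}

print(len(samples), samples[-1].shape)
```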
When p(x | z) is linear, all nonlinear transformations are part of the prior over the latent variables. In contrast, it is also possible to define the prior distribution over the continuous latent variables to be a simple factorial distribution, and push the nonlinearity into the final decoder p(x | z), as in traditional VAEs. The former case can be reduced to something analogous to the latter case using the reparameterization trick.

However, a VAE with a completely independent prior does not regularize the nonlinearity of the prior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on the true posterior) be well-represented by the approximating posterior. Viewed another way, a completely independent prior requires the model to consist of many independent sources of variance, so the data manifold must be fully unfolded into an isotropic ball. A hierarchical prior allows the data manifold to remain curled within a higher-dimensional ambient space, with the approximating posterior merely tracking its contortions. A higher-dimensional ambient space makes sense when modeling multiple classes of objects. For instance, the parameters characterizing limb positions and orientations for people have no analog for houses.
# H.1 ESTIMATING THE LOG PARTITION FUNCTION

We estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM (log Zp from Equation 6) from an importance-weighted computation analogous to that of Burda et al. (2016). For this purpose, we estimate the log partition function using bridge sampling, a variant of Bennett's acceptance ratio method (Bennett, 1976; Shirts & Chodera, 2008), which produces unbiased estimates of the partition function. Interpolating distributions were of the form p(x)^β, and sampled with a parallel tempering routine (Swendsen & Wang, 1986). The set of smoothing parameters β in [0, 1] was chosen to approximately equalize replica exchange rates at 0.5. This standard criterion simultaneously keeps mixing times small, and allows for robust inference. We make a conservative estimate for burn-in (0.5 of total run time), and choose the total length of run, and number of repeated experiments, to achieve sufficient statistical accuracy in the log partition function.

In Figure 10, we plot the distribution of independent estimations of the log-partition function for a single model of each dataset. These estimates differ by no more than about 0.1, indicating that the estimate of the log-likelihood should be accurate to within about 0.05 nats.
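The replica-exchange step of the parallel tempering routine can be summarized in a few lines; the sketch below (an illustration, not the estimation code used here) writes the swap acceptance probability for two adjacent inverse temperatures in terms of the energy of the interpolating distributions p(x)^β.

```python
# Replica-exchange (parallel tempering) swap criterion for interpolating
# distributions proportional to exp(-beta * E(z)).
import numpy as np

rng = np.random.default_rng(0)

def swap_probability(beta_a, beta_b, energy_a, energy_b):
    """Metropolis acceptance probability for exchanging two replicas."""
    return min(1.0, np.exp((beta_a - beta_b) * (energy_a - energy_b)))

def try_swap(betas, energies, i):
    """Attempt to exchange replicas i and i+1; return True if accepted."""
    p = swap_probability(betas[i], betas[i + 1], energies[i], energies[i + 1])
    return rng.random() < p

# In practice the inverse temperatures are tuned so that acceptance rates sit
# near 0.5 for every adjacent pair, which keeps mixing fast across the ladder.
betas = np.linspace(0.0, 1.0, 10)
energies = rng.standard_normal(10) * 5.0
print([try_swap(betas, energies, i) for i in range(9)])
```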
# H.2 CONSTRAINED LAPLACIAN BATCH NORMALIZATION

Rather than traditional batch normalization (Ioffe & Szegedy, 2015), we base our batch normalization on the L1 norm. Specifically, we use:

y = x − x̄
x_n = y / (⟨|y|⟩ + ε) ⊙ s + o,

where x is a minibatch of scalar values, x̄ denotes the mean of x, ⟨|y|⟩ denotes the mean of the absolute value of y over the minibatch, ⊙ indicates element-wise multiplication, ε is a small positive constant, s is a learned scale, and o is a learned offset. For the approximating posterior over the RBM units, we bound 2 ≤ s ≤ 3, and −s ≤ o ≤ s. This helps ensure that all units are both active and inactive in each minibatch, and thus that all units are used.
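A sketch of this L1-based normalization (illustrative only; the clamping of s and o shown here is one way to realize the stated bounds, not necessarily the implementation used):

```python
# L1-based ("Laplacian") batch normalization with bounded scale and offset.
import numpy as np

def l1_batch_norm(x, s, o, eps=1e-4, bound_scale=True):
    """x: minibatch of pre-activations, shape (batch, units)."""
    y = x - x.mean(axis=0, keepdims=True)
    x_n = y / (np.abs(y).mean(axis=0, keepdims=True) + eps)
    if bound_scale:
        s = np.clip(s, 2.0, 3.0)          # 2 <= s <= 3
        o = np.clip(o, -s, s)             # -s <= o <= s
    return x_n * s + o

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 64)) * 4.0 + 1.0
out = l1_batch_norm(x, s=np.full(64, 2.5), o=np.zeros(64))
# Per-unit mean ~ o, per-unit mean absolute deviation ~ s.
print(out.mean(axis=0)[:3], np.abs(out - out.mean(axis=0)).mean(axis=0)[:3])
```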
Figure 10: Distribution of estimates of the log-partition function, using Bennett's acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST (a), statically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d).

# I COMPARISON MODELS

In Table 1, we compare the performance of the discrete variational autoencoder to a selection of recent, competitive models. For dynamically binarized MNIST, we compare to deep belief networks (DBN; Hinton et al., 2006), reporting the results of Murray & Salakhutdinov (2009); importance-weighted autoencoders (IWAE; Burda et al., 2016); and ladder variational autoencoders (Ladder VAE; Sønderby et al., 2016).
For the static MNIST binarization of Salakhutdinov & Murray (2008), we compare to Hamiltonian variational inference (HVI; Salimans et al., 2015); the deep recurrent attentive writer (DRAW; Gregor et al., 2015); the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015); deep latent Gaussian models with normalizing flows (Normalizing flows; Rezende & Mohamed, 2015); and the variational Gaussian process (Tran et al., 2016).

On Omniglot, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016); the ladder variational autoencoder (Ladder VAE; Sønderby et al., 2016); and the restricted Boltzmann machine (RBM; Smolensky, 1986) and deep belief network (DBN; Hinton et al., 2006), reporting the results of Burda et al. (2015).
Finally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016), reporting the results of Li & Turner (2016); reweighted wake-sleep with a deep sigmoid belief network (RWS SBN; Bornschein & Bengio, 2015); the restricted Boltzmann machine (RBM; Smolensky, 1986), reporting the results of Cho et al. (2013); and the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015).
Figure 11: Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the digit ID remains constant demonstrate that the RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained in a wholly unsupervised manner.

# J SUPPLEMENTARY RESULTS

To highlight the contribution of the various components of our generative model, we investigate performance on a selection of simplified models.21 First, we remove the continuous latent layers. The resulting prior, depicted in Figure 1b, consists of the bipartite Boltzmann machine (RBM), the smoothing variables ζ, and a factorial Bernoulli distribution over the observed variables x defined via a deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihood of −85.2 with 200 RBM units.