Dataset fields per record: id, title, content, prechunk_id, postchunk_id, arxiv_id, references.
1611.01144#41
Categorical Reparameterization with Gumbel-Softmax
style variable z and categorical class variable y. (b) Inference model qφ(y, z|x) samples latent state y, z given x. Gaussian z can be differentiated with respect to its parameters because it is reparameterizable. In previous work, when y is not observed, training the VAE objective requires marginalizing over all values of y. (c) Gumbel-Softmax reparameterizes y so that backpropagation is also possible through y without encountering stochastic nodes.

# B DERIVING THE DENSITY OF THE GUMBEL-SOFTMAX DISTRIBUTION

Here we derive the probability density function of the Gumbel-Softmax distribution with probabilities π_1, ..., π_k and temperature τ. We first define the logits x_i = log π_i, and Gumbel samples
1611.01144#40
1611.01144#42
1611.01144
[ "1602.06725" ]
1611.01144#42
Categorical Reparameterization with Gumbel-Softmax
Figure 7: Network architecture for (a) classification qφ(y|x), (b) inference qφ(z|x, y), and (c) generative pθ(x|y, z) models. Networks (a) and (b) are stacks of 5x5 stride-2 convolutions (N = 32, 64, 128 filters) with ReLU activations followed by a fully connected layer; network (c) is a fully connected layer followed by 3x3 stride-2 transposed convolutions (N = 128, 64, 32, 32 filters). The outputs of these networks parameterize the Categorical, Gaussian, and Bernoulli distributions which we sample from.

g_1, ..., g_k, where g_i ∼ Gumbel(0, 1). A sample from the Gumbel-Softmax can then be computed as:

y_i = \frac{\exp((x_i + g_i)/τ)}{\sum_{j=1}^{k} \exp((x_j + g_j)/τ)} \quad \text{for } i = 1, ..., k \qquad (12)

B.1 CENTERED GUMBEL DENSITY

The mapping from the Gumbel samples g to the Gumbel-Softmax sample y is not invertible, as the normalization of the softmax operation removes one degree of freedom.
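As a small illustration of the sampling routine in Eq. 12, here is a hedged NumPy sketch; the function name and the max-shift for numerical stability are our own choices, not part of the paper.

```python
import numpy as np

def sample_gumbel_softmax(pi, tau, rng=np.random.default_rng()):
    """Draw one Gumbel-Softmax sample y on the simplex (Eq. 12).

    pi  : array of k class probabilities
    tau : temperature > 0
    """
    x = np.log(pi)                                      # logits x_i = log pi_i
    g = rng.gumbel(loc=0.0, scale=1.0, size=len(pi))    # g_i ~ Gumbel(0, 1)
    z = (x + g) / tau
    z = z - z.max()                                     # shift for numerical stability
    y = np.exp(z)
    return y / y.sum()

# y approaches a one-hot vector as tau -> 0
y = sample_gumbel_softmax(np.array([0.5, 0.3, 0.2]), tau=0.5)
```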
1611.01144#41
1611.01144#43
1611.01144
[ "1602.06725" ]
1611.01144#43
Categorical Reparameterization with Gumbel-Softmax
To compensate for this, we define an equivalent sampling process that subtracts off the last element, (x_k + g_k)/τ, before the softmax:

y_i = \frac{\exp((x_i + g_i - (x_k + g_k))/τ)}{\sum_{j=1}^{k} \exp((x_j + g_j - (x_k + g_k))/τ)} \quad \text{for } i = 1, ..., k \qquad (13)

To derive the density of this equivalent sampling process, we first derive the density for the "centered" multivariate Gumbel density corresponding to:

u_i = x_i + g_i - (x_k + g_k) \quad \text{for } i = 1, ..., k - 1 \qquad (14)
1611.01144#42
1611.01144#44
1611.01144
[ "1602.06725" ]
1611.01144#44
Categorical Reparameterization with Gumbel-Softmax
where g_i ∼ Gumbel(0, 1). Note the probability density of a Gumbel distribution with scale parameter β = 1 and mean µ at z is: f(z, µ) = e^{µ - z - e^{µ - z}}. We can now compute the density of this distribution by marginalizing out the last Gumbel sample, g_k:

p(u_1, ..., u_{k-1}) = \int_{-\infty}^{\infty} dg_k \, p(u_1, ..., u_{k-1} | g_k) \, p(g_k)
= \int_{-\infty}^{\infty} dg_k \, p(g_k) \prod_{i=1}^{k-1} p(u_i | g_k)
= \int_{-\infty}^{\infty} dg_k \, f(g_k, 0) \prod_{i=1}^{k-1} f(x_k + g_k, x_i - u_i)
= \int_{-\infty}^{\infty} dg_k \, e^{-g_k - e^{-g_k}} \prod_{i=1}^{k-1} e^{x_i - u_i - x_k - g_k - e^{x_i - u_i - x_k - g_k}}
1611.01144#43
1611.01144#45
1611.01144
[ "1602.06725" ]
1611.01144#45
Categorical Reparameterization with Gumbel-Softmax
We perform a change of variables with v = e^{-g_k}, so dv = -e^{-g_k} dg_k and dg_k = -e^{g_k} dv = -dv/v, and define u_k = 0 to simplify notation:

p(u_1, ..., u_{k-1}) = \delta(u_k = 0) \int_{0}^{\infty} \frac{dv}{v} \, v e^{-v} \prod_{i=1}^{k-1} v \, e^{x_i - u_i - x_k} e^{-v e^{x_i - u_i - x_k}} \qquad (15)

= \exp\left(\sum_{i=1}^{k-1} (x_i - u_i - x_k)\right) \int_{0}^{\infty} dv \, v^{k-1} \exp\left(-v \left(1 + \sum_{i=1}^{k-1} e^{x_i - u_i - x_k}\right)\right) \qquad (16)

= \Gamma(k) \exp\left(\sum_{i=1}^{k-1} (x_i - u_i - x_k)\right) \left(1 + \sum_{i=1}^{k-1} e^{x_i - u_i - x_k}\right)^{-k} \qquad (17)

= \Gamma(k) \left(\prod_{i=1}^{k} \exp(x_i - u_i)\right) \left(\sum_{i=1}^{k} \exp(x_i - u_i)\right)^{-k} \qquad (18)

B.2 TRANSFORMING TO A GUMBEL-SOFTMAX

Given samples u_1, ..., u_{k-1} from the centered Gumbel distribution, we can apply a deterministic transformation h to yield the first k - 1 coordinates of the sample from the Gumbel-Softmax:

y_{1:k-1} = h(u_{1:k-1}), \quad h_i(u_{1:k-1}) = \frac{\exp(u_i/τ)}{1 + \sum_{j=1}^{k-1} \exp(u_j/τ)} \qquad (19)

Note that the final coordinate y_k is fixed given the first k - 1, as \sum_{i=1}^{k} y_i = 1:

y_k = \left(1 + \sum_{j=1}^{k-1} \exp(u_j/τ)\right)^{-1} = 1 - \sum_{j=1}^{k-1} y_j \qquad (20)

We can thus compute the probability of a sample from the Gumbel-Softmax using the change of variables formula on only the first k - 1 variables:

p(y_{1:k}) = p\left(h^{-1}(y_{1:k-1})\right) \left| \det \frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}} \right| \qquad (21)

Thus we need to compute two more pieces: the inverse of h and its Jacobian determinant. The inverse of h is:

h^{-1}_i(y_{1:k-1}) = τ \times \left( \log y_i - \log\left(1 - \sum_{j=1}^{k-1} y_j\right) \right) = τ \times (\log y_i - \log y_k) \qquad (22)

with Jacobian
1611.01144#44
1611.01144#46
1611.01144
[ "1602.06725" ]
1611.01144#46
Categorical Reparameterization with Gumbel-Softmax
\frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}} = τ \times \left( \operatorname{diag}\left(\frac{1}{y_{1:k-1}}\right) + \frac{1}{y_k} e e^T \right) = τ \times \begin{pmatrix} \frac{1}{y_1} + \frac{1}{y_k} & \frac{1}{y_k} & \cdots & \frac{1}{y_k} \\ \frac{1}{y_k} & \frac{1}{y_2} + \frac{1}{y_k} & \cdots & \frac{1}{y_k} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{y_k} & \frac{1}{y_k} & \cdots & \frac{1}{y_{k-1}} + \frac{1}{y_k} \end{pmatrix} \qquad (23)

Next, we compute the determinant of the Jacobian:

\det\left(\frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}}\right) = τ^{k-1} \det\left( \operatorname{diag}(y_{1:k-1})^{-1} \left( I + \frac{1}{y_k} \operatorname{diag}(y_{1:k-1}) \, e e^T \right) \right) \qquad (24)

= τ^{k-1} \left( 1 + \frac{\sum_{j=1}^{k-1} y_j}{y_k} \right) \prod_{i=1}^{k-1} y_i^{-1} \qquad (25)

= τ^{k-1} \prod_{i=1}^{k} y_i^{-1} \qquad (26)
1611.01144#45
1611.01144#47
1611.01144
[ "1602.06725" ]
1611.01144#47
Categorical Reparameterization with Gumbel-Softmax
where e is a k - 1 dimensional vector of ones, and we've used the identities det(AB) = det(A)det(B), det(diag(x)) = \prod_i x_i, and det(I + u v^T) = 1 + u^T v. We can then plug into the change of variables formula (Eq. 21) using the density of the centered Gumbel (Eqs. 15-18), the inverse of h (Eq. 22) and its Jacobian determinant (Eq. 26):

p(y_1, ..., y_k) = \Gamma(k) \left( \prod_{i=1}^{k} \exp(x_i) \, y_i^{-τ} \right) \left( \sum_{i=1}^{k} \exp(x_i) \, y_i^{-τ} \right)^{-k} τ^{k-1} \prod_{i=1}^{k} y_i^{-1} \qquad (27)

= \Gamma(k) \, τ^{k-1} \left( \sum_{i=1}^{k} \exp(x_i) / y_i^{τ} \right)^{-k} \prod_{i=1}^{k} \left( \exp(x_i) / y_i^{τ + 1} \right) \qquad (28)
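A minimal NumPy sketch of Eq. 28, evaluating the Gumbel-Softmax log-density in log space to avoid overflow; the helper name is ours, and the only assumption is that y lies strictly inside the simplex.

```python
import numpy as np

def gumbel_softmax_log_density(y, pi, tau):
    """log p(y_1, ..., y_k) from Eq. 28, with logits x_i = log pi_i."""
    k = len(pi)
    x = np.log(pi)
    log_gamma_k = np.sum(np.log(np.arange(1, k)))     # log Gamma(k) = log (k-1)!
    log_terms = x - tau * np.log(y)                   # log(exp(x_i) / y_i^tau)
    m = log_terms.max()
    log_sum = m + np.log(np.exp(log_terms - m).sum()) # log sum_i exp(x_i) / y_i^tau
    return (log_gamma_k + (k - 1) * np.log(tau)
            - k * log_sum
            + np.sum(x - (tau + 1.0) * np.log(y)))
```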
1611.01144#46
1611.01144#48
1611.01144
[ "1602.06725" ]
1611.00712#0
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Published as a conference paper at ICLR 2017

THE CONCRETE DISTRIBUTION: A CONTINUOUS RELAXATION OF DISCRETE RANDOM VARIABLES

Chris J. Maddison1,2, Andriy Mnih1, & Yee Whye Teh1 1DeepMind, London, United Kingdom 2University of Oxford, Oxford, United Kingdom [email protected]

# ABSTRACT

The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce CONCRETE random variables (CONtinuous relaxations of disCRETE random variables). The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
1611.00712#1
1611.00712
[ "1610.05683" ]
1611.00712#1
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
# INTRODUCTION

Software libraries for automatic differentiation (AD) (Abadi et al., 2015; Theano Development Team, 2016) are enjoying broad use, spurred on by the success of neural networks on some of the most challenging problems of machine learning. The dominant mode of development in these libraries is to define a forward parametric computation, in the form of a directed acyclic graph, that computes the desired objective. If the components of the graph are differentiable, then a backwards computation for the gradient of the objective can be derived automatically with the chain rule. The ease of use and unreasonable effectiveness of gradient descent has led to an explosion in the diversity of architectures and objective functions. Thus, expanding the range of useful continuous operations can have an outsized impact on the development of new models. For example, a topic of recent attention has been the optimization of stochastic computation graphs from samples of their states. Here, the observation that AD "just works" when stochastic nodes^1 can be reparameterized into deterministic functions of their parameters and a fixed noise distribution (Kingma & Welling, 2013; Rezende et al., 2014), has liberated researchers in the development of large complex stochastic architectures (e.g. Gregor et al., 2015).

Computing with discrete stochastic nodes still poses a significant challenge for AD libraries. Deterministic discreteness can be relaxed and approximated reasonably well with sigmoidal functions or the softmax (see e.g., Grefenstette et al., 2015; Graves et al., 2016), but, if a distribution over discrete states is needed, there is no clear solution. There are well known unbiased estimators for the gradients

^1 For our purposes a stochastic node of a computation graph is just a random variable whose distribution depends in some deterministic way on the values of the parent nodes.
1611.00712#0
1611.00712#2
1611.00712
[ "1610.05683" ]
1611.00712#2
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
of the parameters of a discrete stochastic node from samples. While these can be made to work with AD, they involve special casing and defining surrogate objectives (Schulman et al., 2015), and even then they can have high variance. Still, reasoning about discrete computation comes naturally to humans, and so, despite the difficulty associated, many modern architectures incorporate discrete stochasticity (Mnih et al., 2014; Xu et al., 2015; Kočiský et al., 2016). This work is inspired by the observation that many architectures treat discrete nodes continuously, and gradients rich with counterfactual information are available for each of their possible states. We introduce a CONtinuous relaxation of disCRETE random variables, CONCRETE for short, which allow gradients to
1611.00712#1
1611.00712#3
1611.00712
[ "1610.05683" ]
1611.00712#3
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
flow through their states. The Concrete distribution is a new parametric family of continuous distributions on the simplex with closed form densities. Sampling from the Concrete distribution is as simple as taking the softmax of logits perturbed by fixed additive noise. This reparameterization means that Concrete stochastic nodes are quick to implement in a way that "just works" with AD. Crucially, every discrete random variable corresponds to the zero temperature limit of a Concrete one. In this view optimizing an objective over an architecture with discrete stochastic nodes can be accomplished by gradient descent on the samples of the corresponding Concrete relaxation. When the objective depends, as in variational inference, on the log-probability of discrete nodes, the Concrete density is used during training in place of the discrete mass. At test time, the graph with discrete nodes is evaluated.
1611.00712#2
1611.00712#4
1611.00712
[ "1610.05683" ]
1611.00712#4
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
The paper is organized as follows. We provide a background on stochastic computation graphs and their optimization in Section 2. Section 3 reviews a reparameterization for discrete random variables, introduces the Concrete distribution, and discusses its application as a relaxation. Section 4 reviews related work. In Section 5 we present results on a density estimation task and a structured prediction task on the MNIST and Omniglot datasets. In Appendices C and F we provide details on the practical implementation and use of Concrete random variables. When comparing the effectiveness of gradients obtained via Concrete relaxations to a state-of-the-art method (VIMCO, Mnih & Rezende, 2016), we find that they are competitive, occasionally outperforming and occasionally underperforming, all the while being implemented in an AD library without special casing.

2 BACKGROUND

2.1 OPTIMIZING STOCHASTIC COMPUTATION GRAPHS

Stochastic computation graphs (SCGs) provide a formalism for specifying input-output mappings, potentially stochastic, with learnable parameters using directed acyclic graphs (see Schulman et al. (2015) for a review). The state of each non-input node in such a graph is obtained from the states of its parent nodes by either evaluating a deterministic function or sampling from a conditional distribution. Many training objectives in supervised, unsupervised, and reinforcement learning can be expressed in terms of SCGs. To optimize an objective represented as a SCG, we need estimates of its parameter gradients. We will concentrate on graphs with some stochastic nodes (backpropagation covers the rest). For simplicity, we restrict our attention to graphs with a single stochastic node X. We can interpret the forward pass in the graph as first sampling X from the conditional distribution p_φ(x) of the stochastic node given its parents, then evaluating a deterministic function f_θ(x) at X. We can think of f_θ(X) as a noisy objective, and we are interested in optimizing its expected value L(θ, φ) = E_{X ∼ p_φ(x)}[f_θ(X)] w.r.t. parameters θ, φ.

In general, both the objective and its gradients are intractable. We will side-step this issue by estimating them with samples from p_φ(x).
1611.00712#3
1611.00712#5
1611.00712
[ "1610.05683" ]
1611.00712#5
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
The gradient w.r.t. the parameters θ has the form

\nabla_θ L(θ, φ) = \nabla_θ \, E_{X ∼ p_φ(x)}[f_θ(X)] = E_{X ∼ p_φ(x)}[\nabla_θ f_θ(X)] \qquad (1)

and can be easily estimated using Monte Carlo sampling:

\nabla_θ L(θ, φ) \approx \frac{1}{s} \sum_{i=1}^{s} \nabla_θ f_θ(X^i), \qquad (2)

where X^i ∼ p_φ(x) i.i.d. The more challenging task is to compute the gradient w.r.t. the parameters φ of p_φ(x). The expression obtained by differentiating the expected objective,

\nabla_φ L(θ, φ) = \nabla_φ \int p_φ(x) f_θ(x) \, dx = \int f_θ(x) \nabla_φ p_φ(x) \, dx, \qquad (3)
1611.00712#4
1611.00712#6
1611.00712
[ "1610.05683" ]
1611.00712#6
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
does not have the form of an expectation w.r.t. x and thus does not directly lead to a Monte Carlo gradient estimator. However, there are two ways of getting around this difficulty which lead to the two classes of estimators we will now discuss.

2.2 SCORE FUNCTION ESTIMATORS

The score function estimator (SFE, Fu, 2006), also known as the REINFORCE (Williams, 1992) or likelihood-ratio estimator (Glynn, 1990), is based on the identity \nabla_φ p_φ(x) = p_φ(x) \nabla_φ
1611.00712#5
1611.00712#7
1611.00712
[ "1610.05683" ]
1611.00712#7
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
\log p_φ(x), which allows the gradient in Eq. 3 to be written as an expectation:

\nabla_φ L(θ, φ) = E_{X ∼ p_φ(x)}[f_θ(X) \nabla_φ \log p_φ(X)] . \qquad (4)

Estimating this expectation using naive Monte Carlo gives the estimator

\nabla_φ L(θ, φ) \approx \frac{1}{s} \sum_{i=1}^{s} f_θ(X^i) \nabla_φ \log p_φ(X^i), \qquad (5)

where X^i ∼ p_φ(x) i.i.d. This is a very general estimator that is applicable whenever \log p_φ(x) is differentiable w.r.t. φ. As it does not require f_θ(x) to be differentiable or even continuous as a function of x, the SFE can be used with both discrete and continuous random variables. Though the basic version of the estimator can suffer from high variance, various variance reduction techniques can be used to make the estimator much more effective (Greensmith et al., 2004). Baselines are the most important and widely used of these techniques (Williams, 1992). A number of score function estimators have been developed in machine learning (Paisley et al., 2012; Gregor et al., 2013; Ranganath et al., 2014; Mnih & Gregor, 2014; Titsias & Lázaro-Gredilla, 2015; Gu et al., 2016), which differ primarily in the variance reduction techniques used.

2.3 REPARAMETERIZATION TRICK

In many cases we can sample from p_φ(x) by first sampling Z from some fixed distribution q(z) and then transforming the sample using some function g_φ(z). For example, a sample from Normal(µ, σ^2) can be obtained by sampling Z from the standard form of the distribution Normal(0, 1) and then transforming it using g_{µ,σ}(Z) = µ + σZ. This two-stage reformulation of the sampling process, called the reparameterization trick, allows us to transfer the dependence on φ from p into f by writing f_θ(x) = f_θ(g_φ(z)) for x = g_φ(z), making it possible to reduce the problem of estimating the gradient w.r.t. parameters of a distribution to the simpler problem of estimating the gradient w.r.t. parameters of a deterministic function.
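To make the two estimators concrete, here is a small NumPy sketch with an entirely illustrative setup of our own (a Normal(µ, σ^2) stochastic node and f(x) = x^2), comparing the score function estimator of Eq. 5 with the pathwise (reparameterized) gradient described above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, s = 0.5, 2.0, 100_000
f = lambda x: x ** 2                       # an arbitrary objective f_theta(x)

# Score function estimator (Eq. 5): average of f(X) * d/dmu log p(X; mu, sigma)
x = rng.normal(mu, sigma, size=s)
score_mu = (x - mu) / sigma ** 2           # d/dmu log Normal(x; mu, sigma^2)
sfe_grad_mu = np.mean(f(x) * score_mu)

# Reparameterization trick: X = mu + sigma * Z with Z ~ Normal(0, 1)
z = rng.standard_normal(s)
x = mu + sigma * z
reparam_grad_mu = np.mean(2.0 * x)         # f'(x) * dx/dmu, with dx/dmu = 1

# Both estimate d/dmu E[X^2] = 2 * mu; the pathwise estimate has much lower variance.
print(sfe_grad_mu, reparam_grad_mu)
```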
1611.00712#6
1611.00712#8
1611.00712
[ "1610.05683" ]
1611.00712#8
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Having reparameterized p_φ(x), we can now express the objective as an expectation w.r.t. q(z):

L(θ, φ) = E_{X ∼ p_φ(x)}[f_θ(X)] = E_{Z ∼ q(z)}[f_θ(g_φ(Z))] \qquad (6)

As q(z) does not depend on φ, we can estimate the gradient w.r.t. φ in exactly the same way we estimated the gradient w.r.t. θ in Eq. 1. Assuming differentiability of f_θ(x) w.r.t. x and of g_φ(z) w.r.t. φ and using the chain rule gives

\nabla_φ L(θ, φ) = E_{Z ∼ q(z)}[\nabla_φ f_θ(g_φ(Z))] = E_{Z ∼ q(z)}[f_θ'(g_φ(Z)) \nabla_φ g_φ(Z)] . \qquad (7)
1611.00712#7
1611.00712#9
1611.00712
[ "1610.05683" ]
1611.00712#9
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
The reparameterization trick, introduced in the context of variational inference independently by Kingma & Welling (2014), Rezende et al. (2014), and Titsias & Lázaro-Gredilla (2014), is usually the estimator of choice when it is applicable. For continuous latent variables which are not directly reparameterizable, new hybrid estimators have also been developed, by combining partial reparameterizations with score function estimators (Ruiz et al., 2016; Naesseth et al., 2016).
1611.00712#8
1611.00712#10
1611.00712
[ "1610.05683" ]
1611.00712#10
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
2.4 APPLICATION: VARIATIONAL TRAINING OF LATENT VARIABLE MODELS

We will now see how the task of training latent variable models can be formulated in the SCG framework. Such models assume that each observation x is obtained by first sampling a vector of latent variables Z from the prior p_θ(z) before sampling the observation itself from p_θ(x | z). Thus the probability of observation x is p_θ(x) = \sum_z p_θ(z) p_θ(x | z). Maximum likelihood training of such models is infeasible, because the log-likelihood (LL) objective L(θ) = \log p_θ(x) =
1611.00712#9
1611.00712#11
1611.00712
[ "1610.05683" ]
1611.00712#11
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
\log E_{Z ∼ p_θ(z)}[p_θ(x | Z)], is intractable because of the expectation being inside the log. The multi-sample variational objective (Burda et al., 2016),

L_m(θ, φ) = E_{Z^i ∼ q_φ(z|x)}\left[ \log\left( \frac{1}{m} \sum_{i=1}^{m} \frac{p_θ(Z^i, x)}{q_φ(Z^i | x)} \right) \right], \qquad (8)

provides a convenient alternative which has precisely the form we considered in Section 2.1. This approach relies on introducing an auxiliary distribution q_φ(z | x) with its own parameters, which serves as approximation to the intractable posterior p_θ(z | x). The model is trained by jointly maximizing the objective w.r.t. the parameters of p and q. The number of samples used inside the objective m allows trading off the computational cost against the tightness of the bound. For m = 1, L_m(θ, φ) is the widely used evidence lower bound (ELBO, Hoffman et al., 2013) on \log p_θ(x), while for m > 1, it is known as the importance weighted bound (Burda et al., 2016).

Figure 1: Visualization of sampling graphs for 3-ary discrete D ∼ Discrete(α) (a) and 3-ary Concrete X ∼ Concrete(α, λ) (b). White operations are deterministic, blue are stochastic, rounded are continuous, square discrete. The top node is an example state; brightness indicates a value in [0, 1].
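A hedged sketch of the multi-sample objective in Eq. 8 for one data point; log_p_joint and log_q are placeholder callables that a concrete model would supply, not functions defined in the paper.

```python
import numpy as np

def multi_sample_bound(log_p_joint, log_q, z_samples):
    """L_m(theta, phi) = E[ log (1/m) sum_i p(Z^i, x) / q(Z^i | x) ]   (Eq. 8)

    log_p_joint(z): log p_theta(z, x) for the fixed data point x
    log_q(z):       log q_phi(z | x)
    z_samples:      list of m samples Z^i drawn from q_phi(z | x)
    """
    log_w = np.array([log_p_joint(z) - log_q(z) for z in z_samples])
    m = len(z_samples)
    a = log_w.max()
    # log-mean-exp of the importance weights
    return a + np.log(np.exp(log_w - a).sum()) - np.log(m)
```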
1611.00712#10
1611.00712#12
1611.00712
[ "1610.05683" ]
1611.00712#12
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
3 THE CONCRETE DISTRIBUTION

3.1 DISCRETE RANDOM VARIABLES AND THE GUMBEL-MAX TRICK

To motivate the construction of Concrete random variables, we review a method for sampling from discrete distributions called the Gumbel-Max trick (Luce, 1959; Yellott, 1977; Papandreou & Yuille, 2011; Hazan & Jaakkola, 2012; Maddison et al., 2014). We restrict ourselves to a representation of discrete states as vectors d ∈ {0, 1}^n of bits that are one-hot, or \sum_{k=1}^{n} d_k = 1.
1611.00712#11
1611.00712#13
1611.00712
[ "1610.05683" ]
1611.00712#13
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
This is a flexible representation in a computation graph; to achieve an integral representation take the inner product of d with (1, . . . , n), and to achieve a point mass representation in R^m take W d where W ∈ R^{m×n}. Consider an unnormalized parameterization (α_1, . . . , α_n), where α_k ∈ (0, ∞), of a discrete distribution D ∼ Discrete(α). The Gumbel-Max trick proceeds as follows: sample U_k ∼ Uniform(0, 1) i.i.d. for each k, find the k that maximizes \log α_k - \log(-\log U_k), set D_k = 1 and the remaining D_i = 0 for i ≠ k. Then

P(D_k = 1) = \frac{α_k}{\sum_{i=1}^{n} α_i} \qquad (9)
1611.00712#12
1611.00712#14
1611.00712
[ "1610.05683" ]
1611.00712#14
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
In other words, the sampling of a discrete random variable can be refactored into a deterministic function (componentwise addition followed by argmax) of the parameters \log α_k and fixed additive noise -\log(-\log U_k). The apparently arbitrary choice of noise gives the trick its name, as -\log(-\log U) has a Gumbel distribution. This distribution features in extreme value theory (Gumbel, 1954) where it plays a central role similar to the Normal distribution: the Gumbel distribution is stable under max operations, and for some distributions, the order statistics (suitably normalized) of i.i.d. draws approach the Gumbel in distribution. The Gumbel can also be recognized as a log-transformed exponential random variable. So, the correctness of (9) also reduces to a well known result regarding the argmin of exponential random variables. See (Hazan et al., 2016) for a collection of related work, and particularly the chapter (Maddison, 2016) for a proof and generalization of this trick.
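As a short illustration of the trick just described, here is a NumPy sketch that draws a one-hot sample from unnormalized probabilities alpha using uniform noise; the helper name is our own.

```python
import numpy as np

def gumbel_max_sample(alpha, rng=np.random.default_rng()):
    """One-hot sample D ~ Discrete(alpha) via the Gumbel-Max trick (Eq. 9)."""
    u = rng.uniform(size=len(alpha))        # U_k ~ Uniform(0, 1)
    g = -np.log(-np.log(u))                 # G_k ~ Gumbel(0, 1)
    k = np.argmax(np.log(alpha) + g)        # argmax of perturbed logits
    d = np.zeros(len(alpha))
    d[k] = 1.0
    return d
```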
1611.00712#13
1611.00712#15
1611.00712
[ "1610.05683" ]
1611.00712#15
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
(a) λ = 0 (b) λ = 1/2 (c) λ = 1 (d) λ = 2

Figure 2: A discrete distribution with unnormalized probabilities (α_1, α_2, α_3) = (2, 0.5, 1) and three corresponding Concrete densities at increasing temperatures λ. Each triangle represents the set of points (y_1, y_2, y_3) in the simplex Δ^2 = {(y_1, y_2, y_3) | y_k ∈ [0, 1], y_1 + y_2 + y_3 = 1}. For λ = 0 the size of white circles represents the mass assigned to each vertex of the simplex under the discrete distribution. For λ ∈ {0.5, 1, 2} the intensity of the shading represents the value of p_{α,λ}(y).
1611.00712#14
1611.00712#16
1611.00712
[ "1610.05683" ]
1611.00712#16
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
3.2 CONCRETE RANDOM VARIABLES

The derivative of the argmax is 0 everywhere except at the boundary of state changes, where it is undefined. For this reason the Gumbel-Max trick is not a suitable reparameterization for use in SCGs with AD. Here we introduce the Concrete distribution, motivated by considering a graph which is the same as Figure 1a up to a continuous relaxation of the argmax computation, see Figure 1b. This will ultimately allow the optimization of the parameters α via gradients.

The argmax computation returns states on the vertices of the simplex Δ^{n-1} = {x ∈ R^n | x_k ∈ [0, 1], \sum_{k=1}^{n} x_k = 1}. The idea behind Concrete random variables is to relax the state of a discrete variable from the vertices into the interior, where it is a random probability vector, that is, a vector of numbers between 0 and 1 that sum to 1. To sample a Concrete random variable X ∈ Δ^{n-1} at temperature λ ∈ (0, ∞) with parameters α_k ∈ (0, ∞), sample G_k ∼ Gumbel i.i.d. and set

X_k = \frac{\exp((\log α_k + G_k)/λ)}{\sum_{i=1}^{n} \exp((\log α_i + G_i)/λ)} \qquad (10)

The softmax computation of (10) smoothly approaches the discrete argmax computation as λ → 0 while preserving the relative order of the Gumbels \log α_k + G_k. So, imagine making a series of forward passes on the graphs of Figure 1. Both graphs return a stochastic value for each forward pass, but for smaller temperatures the outputs of Figure 1b become more discrete and eventually indistinguishable from a typical forward pass of Figure 1a.

The distribution of X sampled via (10) has a closed form density on the simplex. Because there may be other ways to sample a Concrete random variable, we take the density to be its definition.
1611.00712#15
1611.00712#17
1611.00712
[ "1610.05683" ]
1611.00712#17
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Definition 1 (Concrete Random Variables). Let α ∈ (0, ∞)^n and λ ∈ (0, ∞). X ∈ Δ^{n-1} has a Concrete distribution X ∼ Concrete(α, λ) with location α and temperature λ, if its density is:

p_{α,λ}(x) = (n-1)! \, λ^{n-1} \prod_{k=1}^{n} \left( \frac{α_k x_k^{-λ-1}}{\sum_{i=1}^{n} α_i x_i^{-λ}} \right) \qquad (11)

Proposition 1 lists a few properties of the Concrete distribution. (a) is confirmation that our definition corresponds to the sampling routine (10). (b) confirms that rounding a Concrete random variable results in the discrete random variable whose distribution is described by the logits \log α_k, (c) confirms that taking the zero temperature limit of a Concrete random variable is the same as rounding. Finally, (d) is a convexity result on the density. We prove these results in Appendix A.

Proposition 1 (Some Properties of Concrete Random Variables). Let X ∼ Concrete(α, λ) with location parameters α ∈ (0, ∞)^n and temperature λ ∈ (0, ∞), then

(a) (Reparameterization) If G_k ∼ Gumbel i.i.d., then X_k \overset{d}{=} \frac{\exp((\log α_k + G_k)/λ)}{\sum_{i=1}^{n} \exp((\log α_i + G_i)/λ)},
(b) (Rounding) P(X_k > X_i for i ≠ k) = α_k / (\sum_{i=1}^{n} α_i),
(c) (Zero temperature) P(\lim_{λ → 0} X_k = 1) = α_k / (\sum_{i=1}^{n} α_i),
(d) (Convex eventually) If λ ≤ (n-1)^{-1}, then p_{α,λ}(x) is log-convex in x.
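The following NumPy sketch pairs the sampling routine of Eq. 10 with the density of Definition 1 (Eq. 11), evaluated in log space; the helper names are our own choices.

```python
import numpy as np

def sample_concrete(alpha, lam, rng=np.random.default_rng()):
    """X ~ Concrete(alpha, lam) via the reparameterization of Eq. 10."""
    g = rng.gumbel(size=len(alpha))
    z = (np.log(alpha) + g) / lam
    z = z - z.max()                                   # shift for numerical stability
    x = np.exp(z)
    return x / x.sum()

def concrete_log_density(x, alpha, lam):
    """log p_{alpha,lam}(x) from Eq. 11."""
    n = len(alpha)
    log_factorial = np.sum(np.log(np.arange(1, n)))   # log (n-1)!
    log_terms = np.log(alpha) - lam * np.log(x)       # log(alpha_k x_k^{-lam})
    a = log_terms.max()
    log_denom = a + np.log(np.exp(log_terms - a).sum())
    return (log_factorial + (n - 1) * np.log(lam)
            + np.sum(log_terms - np.log(x)) - n * log_denom)
```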
1611.00712#16
1611.00712#18
1611.00712
[ "1610.05683" ]
1611.00712#18
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
(a) λ = 0 (b) λ = 1/2 (c) λ = 1 (d) λ = 2

Figure 3: A visualization of the binary special case. (a) shows the discrete trick, which works by passing a noisy logit through the unit step function. (b), (c), (d) show Concrete relaxations; the horizontal blue densities show the density of the input distribution and the vertical densities show the corresponding Binary Concrete density on (0, 1) for varying λ.
1611.00712#17
1611.00712#19
1611.00712
[ "1610.05683" ]
1611.00712#19
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
The binary case of the Gumbel-Max trick simplifies to passing additive noise through a step function. The corresponding Concrete relaxation is implemented by passing additive noise through a sigmoid, see Figure 3. We cover this more thoroughly in Appendix B, along with a cheat sheet (Appendix F) on the density and implementation of all the random variables discussed in this work.

3.3 CONCRETE RELAXATIONS

Concrete random variables may have some intrinsic value, but we investigate them simply as surrogates for optimizing a SCG with discrete nodes. When it is computationally feasible to integrate over the discreteness, that will always be a better choice. Thus, we consider the use case of optimizing a large graph with discrete stochastic nodes from samples.

First, we outline our proposal for how to use Concrete relaxations by considering a variational autoencoder with a single discrete latent variable. Let P_a(d) be the mass function of some n-dimensional one-hot discrete random variable with unnormalized probabilities a ∈
1611.00712#18
1611.00712#20
1611.00712
[ "1610.05683" ]
1611.00712#20
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
(0, ∞)^n and p_θ(x | d) some distribution over a data point x given d ∈ {0, 1}^n one-hot. The generative model is then p_{θ,a}(x, d) = p_θ(x | d) P_a(d). Let Q_α(d | x) be an approximating posterior over d ∈ {0, 1}^n one-hot whose unnormalized probabilities α(x) ∈ (0, ∞)^n depend on x. All together the variational lower bound we care about stochastically optimizing is

L_1(θ, a, α) = E_{D ∼ Q_α(d|x)}\left[ \log \frac{p_θ(x | D) P_a(D)}{Q_α(D | x)} \right], \qquad (12)

with respect to θ, a, and any parameters of α. First, we relax the stochastic computation D ∼ Discrete(α(x)) by replacing D with a Concrete random variable Z ∼ Concrete(α(x), λ_1) with density q_{α,λ_1}(z | x). Simply replacing every instance of D with Z in Eq. 12 will result in a non-interpretable objective, which does not necessarily lower bound \log p(x), because E_{Z ∼ q_{α,λ_1}(z|x)}[\log q_{α,λ_1}(Z | x)/P_a(Z)] is not a KL divergence. Thus we propose "relaxing" the terms P_a(d) and Q_α(d | x) to reflect the true sampling distribution. Thus, the relaxed objective is:

L_1(θ, a, α) = E_{Z ∼ q_{α,λ_1}(z|x)}\left[ \log \frac{p_θ(x | Z) \, p_{a,λ_2}(Z)}{q_{α,λ_1}(Z | x)} \right], \qquad (13)

where p_{a,λ_2}(z) is a Concrete density with location a and temperature λ_2. At test time we evaluate the discrete lower bound L_1(θ, a, α). Naively implementing Eq. 13 will result in numerical issues. We discuss this and other details in Appendix C.

Thus, the basic paradigm we propose is the following: during training replace every discrete node with a Concrete node at some fixed temperature (or with an annealing schedule). The graphs are identical up to the softmax / argmax computations, so the parameters of the relaxed graph and discrete graph are the same. When an objective depends on the log-probability of discrete variables in the SCG, as the variational lower bound does, we propose that the log-probability terms are also "relaxed" to represent the true distribution of the relaxed node. At test time the original discrete loss is evaluated.
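A hedged sketch of a single-sample estimate of the relaxed objective in Eq. 13, reusing the sample_concrete and concrete_log_density helpers from the sketch after Definition 1; log_p_x_given_z is a placeholder for the decoder log-likelihood, and, as the paper warns, a careful implementation should do all of this in log space (Appendix C).

```python
import numpy as np

# assumes sample_concrete and concrete_log_density from the earlier sketch

def relaxed_objective(posterior_alpha, prior_a, lam1, lam2, log_p_x_given_z,
                      rng=np.random.default_rng()):
    """One-sample estimate of Eq. 13 for a single data point x.

    posterior_alpha : unnormalized posterior probabilities alpha(x)
    prior_a         : unnormalized prior probabilities a
    lam1, lam2      : posterior / prior temperatures
    log_p_x_given_z : callable, log p_theta(x | z) for the relaxed code z
    """
    z = sample_concrete(posterior_alpha, lam1, rng)      # Z ~ q_{alpha,lam1}(z | x)
    log_q = concrete_log_density(z, posterior_alpha, lam1)
    log_prior = concrete_log_density(z, prior_a, lam2)   # relaxed prior p_{a,lam2}(z)
    return log_p_x_given_z(z) + log_prior - log_q
```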
1611.00712#19
1611.00712#21
1611.00712
[ "1610.05683" ]
1611.00712#21
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
This is possible, because the discretization of any Concrete distribution has a closed form mass function, and the relaxation of any discrete distribution into a Concrete distribution has a closed form density. This is not always possible. For example, the multinomial probit model (the Gumbel-Max trick with Gaussians replacing Gumbels) does not have a closed form mass.

The success of Concrete relaxations will depend on the choice of temperature during training. It is important that the relaxed nodes are not able to represent a precise real valued mode in the interior
1611.00712#20
1611.00712#22
1611.00712
[ "1610.05683" ]
1611.00712#22
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
of the simplex as in Figure 2d. If this is the case, it is possible for the relaxed random variable to communicate much more than \log_2(n) bits of information about its α parameters. This might lead the relaxation to prefer the interior of the simplex to the vertices, and as a result there will be a large integrality gap in the overall performance of the discrete graph. Therefore Proposition 1 (d) is a conservative guideline for generic n-ary Concrete relaxations; at temperatures lower than (n-1)^{-1} we are guaranteed not to have any modes in the interior for any α ∈ (0, ∞)^n. We discuss the subtleties of choosing the temperatures in more detail in Appendix C. Ultimately the best choice of λ and the performance of the relaxation for any specific n will be an empirical question.

# 4 RELATED WORK

Perhaps the most common distribution over the simplex is the Dirichlet with density p_a(x) ∝ \prod_{k=1}^{n} x_k^{a_k - 1} on x ∈ Δ^{n-1}.
1611.00712#21
1611.00712#23
1611.00712
[ "1610.05683" ]
1611.00712#23
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
The Dirichlet can be characterized by strong independence properties, and a great deal of work has been done to generalize it (Aitchison, 1985; Rayens & Srinivasan, 1994; Favaro et al., 2011). Of note is the Logistic Normal distribution (Atchison & Shen, 1980), which can be simulated by taking the softmax of n - 1 normal random variables and an nth logit that is deterministically zero. The Logistic Normal is an important distribution, because it can effectively model correlations within the simplex (Blei & Lafferty, 2006). To our knowledge the Concrete distribution does not fall completely into any family of distributions previously described. For λ < 1 the Concrete is in a class of normalized infinitely divisible distributions (S. Favaro, personal communication), and the results of Favaro et al. (2011) apply.

The idea of using a softmax of Gumbels as a relaxation for a discrete random variable was concurrently considered by (Jang et al., 2016), where it was called the Gumbel-Softmax. They do not use the density in the relaxed objective, opting instead to compute all aspects of the graph, including discrete log-probability computations, with the relaxed stochastic state of the graph. In the case of variational inference, this relaxed objective is not a lower bound on the marginal likelihood of the observations, and care needs to be taken when optimizing it. The idea of using sigmoidal functions with additive input noise to approximate discreteness is also not a new idea. (Frey, 1997) introduced nonlinear Gaussian units which computed their activation by passing Gaussian noise, with the mean and variance specified by the input to the unit, through a nonlinearity such as the logistic function. Salakhutdinov & Hinton (2009) binarized real-valued codes of an autoencoder by adding (Gaussian) noise to the logits before passing them through the logistic function.
1611.00712#22
1611.00712#24
1611.00712
[ "1610.05683" ]
1611.00712#24
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Most recently, to avoid the difficulty associated with likelihood-ratio methods, Kočiský et al. (2016) relaxed the discrete sampling operation by sampling a vector of Gaussians instead and passing those through a softmax.

There is another family of gradient estimators that have been studied in the context of training neural networks with discrete units. These are usually collected under the umbrella of straight-through estimators (Bengio et al., 2013; Raiko et al., 2014). The basic idea they use is passing forward discrete values, but taking gradients through the expected value. They have good empirical performance, but have not been shown to be the estimators of any loss function. This is in contrast to gradients from Concrete relaxations, which are biased with respect to the discrete graph, but unbiased with respect to the continuous one.
1611.00712#23
1611.00712#25
1611.00712
[ "1610.05683" ]
1611.00712#25
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
# 5 EXPERIMENTS

5.1 PROTOCOL

The aim of our experiments was to evaluate the effectiveness of the gradients of Concrete relaxations for optimizing SCGs with discrete nodes. We considered the tasks in (Mnih & Rezende, 2016): structured output prediction and density estimation. Both tasks are difficult optimization problems involving fitting probability distributions with hundreds of latent discrete nodes. We compared the performance of Concrete reparameterizations to two state-of-the-art score function estimators: VIMCO (Mnih & Rezende, 2016) for optimizing the multisample variational objective (m > 1) and NVIL (Mnih & Gregor, 2014) for optimizing the single-sample one (m = 1). We performed the experiments using the MNIST and Omniglot datasets. These are datasets of 28 x 28 images of handwritten digits (MNIST) or letters (Omniglot). For MNIST we used the fixed binarization of Salakhutdinov & Murray (2008) and the standard 50,000/10,000/10,000 split into training/validation/testing sets.
1611.00712#24
1611.00712#26
1611.00712
[ "1610.05683" ]
1611.00712#26
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
binary model            m    MNIST NLL                        Omniglot NLL
                             Test              Train          Test              Train
                             Concrete  VIMCO   Concrete VIMCO Concrete  VIMCO   Concrete VIMCO
(200H - 784V)           1    107.3     104.4   107.5    104.2 118.7     115.7   117.0    112.2
                        5    104.9     101.9   104.9    101.5 118.0     113.5   115.8    110.8
                        50   104.3     98.8    104.2    98.3  118.9     113.0   115.8    110.0
(200H - 200H - 784V)    1    102.1     92.9    102.3    91.7  116.3     109.2   114.4    104.8
                        5    99.9      91.7    100.0    90.8  116.0     107.5   113.5    103.6
                        50   99.5      90.7    99.4     89.7  117.0     108.1   113.9    103.6
(200H ~ 784V)           1    92.1      93.8    91.2     91.5  108.4     116.4   103.6    110.3
                        5    89.5      91.4    88.1     88.6  107.5     118.2   101.4    102.3
                        50   88.5      89.3    86.4     86.5  108.1     116.0   100.5    100.8
(200H ~ 200H ~ 784V)    1    87.9      88.4    86.5     85.8  105.9     111.7   100.2    105.7
                        5    86.3      86.4    84.1     82.5  105.8     108.2   98.6     101.1
                        50   85.7      85.5    83.1     81.8  106.8     113.2   97.5     95.2

Table 1: Density estimation with binary latent variables. When m = 1, VIMCO stands for NVIL.

For Omniglot we sampled a fixed binarization and used the standard 24,345/8,070 split into training/testing sets. We report the negative log-likelihood (NLL) of the discrete graph on the test data as the performance metric. All of our models were neural networks with layers of n-ary discrete stochastic nodes with values on the corners of the hypercube {-1, 1}^{log2(n)}. The distributions were parameterized by n real values log α_k ∈ R, the logits of a discrete random variable D ∼ Discrete(α) with n states. Model descriptions are of the form "(200V-200H-784V)"
1611.00712#25
1611.00712#27
1611.00712
[ "1610.05683" ]
1611.00712#27
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
, read from left to right. This describes the order of conditional sampling, again from left to right, with each integer representing the number of stochastic units in a layer. The letters V and H represent observed and latent variables, respectively. If the leftmost layer is H, then it was sampled unconditionally from some parameters. Conditioning functions are described by "-" or "~", where "-" means a linear function of the previous layer and "~" means a non-linear function. A "layer" of these units is simply the concatenation of some number of independent nodes whose parameters are determined as a function of the previous layer. For example a 240 binary layer is a factored distribution over the {-1, 1}^240 hypercube. Whereas a 240 8-ary layer can be seen as a distribution over the same hypercube where each of the 80 triples of units are sampled independently from an 8-way discrete distribution over {-1, 1}^3. All models were initialized with the heuristic of Glorot & Bengio (2010) and optimized using Adam (Kingma & Ba, 2014). All temperatures were fixed throughout training. See Appendix D for hyperparameter details.

5.2 DENSITY ESTIMATION

Density estimation, or generative modelling, is the problem of fitting the distribution of data. We took the latent variable approach described in Section 2.4 and trained the models by optimizing the variational objective L_m(θ, φ) given by Eq. 8 averaged uniformly over minibatches of data points x. Both our generative models p_θ(z, x) and variational distributions q_φ(z | x) were parameterized with neural networks as described above. We trained models with m ∈ {1, 5, 50} and approximated the NLL with
1611.00712#26
1611.00712#28
1611.00712
[ "1610.05683" ]
1611.00712#28
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
L_{50,000}(θ, φ) averaged uniformly over the whole dataset. The results are shown in Table 1. In general, VIMCO outperformed Concrete relaxations for linear models and Concrete relaxations outperformed VIMCO for non-linear models. We also tested the effectiveness of Concrete relaxations on generative models with n-ary layers on the L_5(θ, φ) objective. The best 4-ary model achieved test/train NLL 86.7/83.3, the best 8-ary achieved 87.4/84.6 with Concrete relaxations; more complete results are in Appendix E. The relatively poor performance of the 8-ary model may be because moving from 4 to 8 results in a more difficult objective without much added capacity. As a control we trained n-ary models using logistic normals as relaxations of discrete distributions (with retuned temperature hyperparameters). Because the discrete zero temperature limit of logistic Normals is a multinomial probit whose mass function is not known, we evaluated the discrete model by sampling from the discrete distribution parameterized by the logits
1611.00712#27
1611.00712#29
1611.00712
[ "1610.05683" ]
1611.00712#29
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
learned during training. The best 4-ary model achieved test/train NLL of 88.7/85.0, the best 8-ary model achieved 89.1/85.1.

binary model                      m    Test NLL           Train NLL
                                       Concrete  VIMCO    Concrete  VIMCO
(392V-240H-240H-392V)             1    58.5      61.4     54.2      59.3
                                  5    54.3      54.5     49.2      52.7
                                  50   53.4      51.8     48.2      49.6
(392V-240H-240H-240H-392V)        1    56.3      59.7     51.6      58.4
                                  5    52.7      53.5     46.9      51.6
                                  50   52.0      50.2     45.9      47.9

Figure 4: Results for structured prediction on MNIST comparing Concrete relaxations to VIMCO. When m = 1 VIMCO stands for NVIL. The plot on the right shows the objective (lower is better) for the continuous and discrete graph trained at temperatures λ. In the shaded region, units prefer to communicate real values in the interior of (-1, 1).

5.3 STRUCTURED OUTPUT PREDICTION

Structured output prediction is concerned with modelling the high-dimensional distribution of the observation given a context and can be seen as conditional density estimation. We considered the task of predicting the bottom half x_1 of an image of an MNIST digit given its top half x_2, as introduced by Raiko et al. (2014). We followed Raiko et al. (2014) in using a model with layers of discrete stochastic units between the context and the observation. Conditioned on the top half x_2 the network samples from a distribution p_φ(z | x_2) over layers of stochastic units z, then predicts x_1 by sampling from a distribution p_θ(x_1 | z). The objective we optimized is

L^{SP}_m(θ, φ) = E_{Z^i ∼ p_φ(z | x_2)}\left[ \log\left( \frac{1}{m} \sum_{i=1}^{m} p_θ(x_1 | Z^i) \right) \right].
1611.00712#28
1611.00712#30
1611.00712
[ "1610.05683" ]
1611.00712#30
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
This objective is a special case of L_m(θ, φ) (Eq. 8), where we use the prior p_φ(z | x_2) as the variational distribution. Thus, the objective is a lower bound on \log p_{θ,φ}(x_1 | x_2).

We trained the models by optimizing L^{SP}_m(θ, φ) for m ∈ {1, 5, 50} averaged uniformly over minibatches and evaluated them by computing L^{SP}_{100}(θ, φ) averaged uniformly over the entire dataset. The results are shown in Figure 4. Concrete relaxations more uniformly outperformed VIMCO in this instance. We also trained n-ary (392V-240H-240H-240H-392V) models on the L^{SP}(θ, φ) objective using the best temperature hyperparameters from density estimation. 4-ary achieved a test/train NLL of 55.4/46.0 and 8-ary achieved 54.7/44.8. As opposed to density estimation, increasing arity uniformly improved the models. We also investigated the hypothesis that for higher temperatures Concrete relaxations might prefer the interior of the interval (-1, 1) to the boundary points {-1, 1}. Figure 4 was generated with a binary (392V-240H-240H-
1611.00712#29
1611.00712#31
1611.00712
[ "1610.05683" ]
1611.00712#31
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
392V) model trained on L^{SP}(θ, φ).

# 6 CONCLUSION

We introduced the Concrete distribution, a continuous relaxation of discrete random variables. The Concrete distribution is a new distribution on the simplex with a closed form density parameterized by a vector of positive location parameters and a positive temperature. Crucially, the zero temperature limit of every Concrete distribution corresponds to a discrete distribution, and any discrete distribution can be seen as the discretization of a Concrete one. The application we considered was training stochastic computation graphs with discrete stochastic nodes. The gradients of Concrete relaxations are biased with respect to the original discrete objective, but they are low variance unbiased estimators of a continuous surrogate objective. We showed in a series of experiments that stochastic nodes with Concrete distributions can be used effectively to optimize the parameters of a stochastic computation graph with discrete stochastic nodes.
1611.00712#30
1611.00712#32
1611.00712
[ "1610.05683" ]
1611.00712#32
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
We did not find that annealing or automatically tuning the temperature was important for these experiments, but it remains interesting and possibly valuable future work.

ACKNOWLEDGMENTS

We thank Jimmy Ba for the excitement and ideas in the early days, Stefano Favarro for some analysis of the distribution. We also thank Gabriel Barth-Maron and Roger Grosse.

REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng.
1611.00712#31
1611.00712#33
1611.00712
[ "1610.05683" ]
1611.00712#33
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

J Aitchison. A general class of distributions on the simplex. Journal of the Royal Statistical Society. Series B (Methodological), pp. 136-146, 1985.

J Atchison and Sheng M Shen. Logistic-normal distributions: Some properties and uses. Biometrika, 67(2):261-272, 1980.
1611.00712#32
1611.00712#34
1611.00712
[ "1610.05683" ]
1611.00712#34
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

David Blei and John Lafferty. Correlated topic models. 2006.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2016.

Robert J Connor and James E Mosimann.
1611.00712#33
1611.00712#35
1611.00712
[ "1610.05683" ]
1611.00712#35
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Concepts of independence for proportions with a generalization of the Dirichlet distribution. Journal of the American Statistical Association, 64(325):194-206, 1969.

Stefano Favaro, Georgia Hadjicharalambous, and Igor Prünster. On a class of distributions on the simplex. Journal of Statistical Planning and Inference, 141(9):2987-3004, 2011.

Brendan Frey.
1611.00712#34
1611.00712#36
1611.00712
[ "1610.05683" ]
1611.00712#36
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Continuous sigmoidal belief networks trained using slice sampling. In NIPS, 1997.

Michael C Fu. Gradient estimation. Handbooks in operations research and management science, 13:575-616, 2006.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249-256, 2010.

Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990.
1611.00712#35
1611.00712#37
1611.00712
[ "1610.05683" ]
1611.00712#37
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.

Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter.
1611.00712#36
1611.00712#38
1611.00712
[ "1610.05683" ]
1611.00712#38
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Variance reduction techniques for gradient estimates in reinforcement learning. JMLR, 5, 2004.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828-1836, 2015.

Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. ICLR, 2016.
1611.00712#37
1611.00712#39
1611.00712
[ "1610.05683" ]
1611.00712#39
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Emil Julius Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Office, 1954.

Tamir Hazan and Tommi Jaakkola. On the partition function and random maximum a-posteriori perturbations. In ICML, 2012.

Tamir Hazan, George Papandreou, and Daniel Tarlow. Perturbation, Optimization, and Statistics. MIT Press, 2016.

Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. JMLR, 14(1):1303-1347, 2013.

E. Jang, S. Gu, and B. Poole.
1611.00712#38
1611.00712#40
1611.00712
[ "1610.05683" ]
1611.00712#40
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Categorical Reparameterization with Gumbel-Softmax. ArXiv e-prints, November 2016. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. ICLR, 2014.
1611.00712#39
1611.00712#41
1611.00712
[ "1610.05683" ]
1611.00712#41
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Tomáš Kočiský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. In EMNLP, 2016.

R. Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. New York: Wiley, 1959.

Chris J Maddison. A Poisson process model for Monte Carlo. In Tamir Hazan, George Papandreou, and Daniel Tarlow (eds.), Perturbation, Optimization, and Statistics, chapter 7. MIT Press, 2016.

Chris J Maddison, Daniel Tarlow, and Tom Minka.
1611.00712#40
1611.00712#42
1611.00712
[ "1610.05683" ]
1611.00712#42
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
A* Sampling. In NIPS, 2014.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.

Andriy Mnih and Danilo Jimenez Rezende. Variational inference for Monte Carlo objectives. In ICML, 2016.

Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent Models of Visual Attention. In NIPS, 2014.

Christian A Naesseth, Francisco JR Ruiz, Scott W Linderman, and David M Blei. Rejection sampling variational inference. arXiv preprint arXiv:1610.05683, 2016.

John William Paisley, David M. Blei, and Michael I. Jordan.
1611.00712#41
1611.00712#43
1611.00712
[ "1610.05683" ]
1611.00712#43
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Variational Bayesian inference with stochastic search. In ICML, 2012.

George Papandreou and Alan L Yuille. Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In ICCV, 2011.

Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.

Rajesh Ranganath, Sean Gerrish, and David M.
1611.00712#42
1611.00712#44
1611.00712
[ "1610.05683" ]
1611.00712#44
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Blei. Black box variational inference. In AISTATS, 2014.

William S Rayens and Cidambi Srinivasan. Dependence properties of generalized Liouville distributions on the simplex. Journal of the American Statistical Association, 89(428):1465-1470, 1994.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

Francisco JR Ruiz, Michalis K Titsias, and David M Blei. The generalized reparameterization gradient. arXiv preprint arXiv:1610.02287, 2016.

Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969-978, 2009.
1611.00712#43
1611.00712#45
1611.00712
[ "1610.05683" ]
1611.00712#45
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In ICML, 2008.

John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Tony Jebara and Eric P. Xing (eds.), ICML, 2014.
1611.00712#44
1611.00712#46
1611.00712
[ "1610.05683" ]
1611.00712#46
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Michalis Titsias and Miguel Lázaro-Gredilla. Local expectation gradients for black box variational inference. In NIPS, 2015.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio.
1611.00712#45
1611.00712#47
1611.00712
[ "1610.05683" ]
1611.00712#47
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.

John I Yellott. The relationship between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15(2):109-144, 1977.

# A PROOF OF PROPOSITION 1

Let X ∼ Concrete(α, λ) with location parameters α ∈ (0, ∞)^n and temperature λ ∈ (0, ∞).

1. Let G_k ∼
1611.00712#46
1611.00712#48
1611.00712
[ "1610.05683" ]
1611.00712#48
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Gumbel i.i.d., and consider

Y_k = \frac{\exp((\log α_k + G_k)/λ)}{\sum_{i=1}^{n} \exp((\log α_i + G_i)/λ)}.

Let Z_k = \log α_k + G_k, which has density α_k \exp(-z_k) \exp(-α_k \exp(-z_k)). We will consider the invertible transformation F(z_1, . . . , z_n) = (y_1, . . . , y_{n-1}, c) where

y_k = \exp(z_k/λ) \, c^{-1}, \qquad c = \sum_{i=1}^{n} \exp(z_i/λ),

then F^{-1}(y_1, . . . , y_{n-1}, c) = (λ(\log y_1 + \log c), . . . , λ(\log y_{n-1} + \log c), λ(\log y_n + \log c)), where y_n = 1 - \sum_{i=1}^{n-1} y_i. This has Jacobian

\begin{pmatrix}
λ y_1^{-1} & 0 & \cdots & 0 & λ c^{-1} \\
0 & λ y_2^{-1} & \cdots & 0 & λ c^{-1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & λ y_{n-1}^{-1} & λ c^{-1} \\
-λ y_n^{-1} & -λ y_n^{-1} & \cdots & -λ y_n^{-1} & λ c^{-1}
\end{pmatrix}
1611.00712#47
1611.00712#49
1611.00712
[ "1610.05683" ]
1611.00712#49
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
By adding y_i/y_n times each of the top n - 1 rows to the bottom row, we see that this Jacobian has the same determinant as

\begin{pmatrix}
λ y_1^{-1} & 0 & \cdots & 0 & λ c^{-1} \\
0 & λ y_2^{-1} & \cdots & 0 & λ c^{-1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & λ y_{n-1}^{-1} & λ c^{-1} \\
0 & 0 & \cdots & 0 & λ (c y_n)^{-1}
\end{pmatrix}

and thus the determinant is equal to λ^n c^{-1} \prod_{i=1}^{n} y_i^{-1}.
1611.00712#48
1611.00712#50
1611.00712
[ "1610.05683" ]
1611.00712#50
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
12
Published as a conference paper at ICLR 2017

All together we have the density

( λ^n / ( c ∏_{i=1}^n y_i ) ) ∏_{k=1}^n α_k exp(−λ log y_k − λ log c) exp(−α_k exp(−λ log y_k − λ log c)).

With the change of variables r = log c we have density

( λ^n / ∏_{i=1}^n y_i ) ∏_{k=1}^n α_k exp(−λ log y_k − λ r) exp(−α_k exp(−λ log y_k − λ r))
  = ( λ^n ∏_{k=1}^n α_k y_k^{−λ−1} ) exp(−nλr) exp( − Σ_{i=1}^n α_i exp(−λ log y_i − λ r) ).

Letting γ = log( Σ_{k=1}^n α_k y_k^{−λ} ), this equals

( λ^n ∏_{k=1}^n α_k y_k^{−λ−1} ) exp(−γ) exp(−nλr + γ) exp(−exp(−λr + γ)),

and integrating out r gives

λ^n ∏_{k=1}^n α_k y_k^{−λ−1} exp(−γ) · exp(−γ(n−1)) Γ(n) / λ
  = (n−1)! λ^{n−1} ∏_{k=1}^n ( α_k y_k^{−λ−1} / Σ_{i=1}^n α_i y_i^{−λ} ).

Thus Y =_d X.

2. Follows directly from (a) and the Gumbel-Max trick (Maddison, 2016).

3. Follows directly from (a) and the Gumbel-Max trick (Maddison, 2016).

4. Let λ ≤ (n − 1)^{−1}. The density of X can be rewritten as

p_{α,λ}(x) ∝ ∏_{k=1}^n ( α_k x_k^{λ(n−1)−1} / Σ_{i=1}^n α_i ∏_{j≠i} x_j^λ ).
1611.00712#49
1611.00712#51
1611.00712
[ "1610.05683" ]
1611.00712#51
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Thus, up to an additive constant C, the log density is

log p_{α,λ}(x) = Σ_{k=1}^n (λ(n−1) − 1) log x_k − n log( Σ_{k=1}^n α_k ∏_{j≠k} x_j^λ ) + C.

If λ ≤ (n−1)^{−1}, then λ(n−1) − 1 ≤ 0 and each term of the first sum is convex, because −log is convex. For the last term, Σ_{k=1}^n α_k ∏_{j≠k} x_j^λ is concave when λ(n−1) ≤ 1, and −n log(·) is convex and non-increasing. Thus, their composition is convex. The sum of convex terms is convex, finishing the proof.

# B THE BINARY SPECIAL CASE

Bernoulli random variables are an important special case of discrete distributions, taking states in {0, 1}. Here we consider the binary special case of the Gumbel-Max trick from Figure 1a along with the corresponding Concrete relaxation. Let D ∈ {0, 1}^2 be a two-state discrete random variable such that D_1 + D_2 = 1, with D ∼ Discrete(α) for α ∈ (0, ∞)^2, parameterized as in Figure 1a by α_1, α_2 > 0:

P(D_1 = 1) = α_1 / (α_1 + α_2)   (14)

13
1611.00712#50
1611.00712#52
1611.00712
[ "1610.05683" ]
1611.00712#52
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Published as a conference paper at ICLR 2017

The distribution is degenerate, because D_1 = 1 − D_2. Therefore we consider just D_1. Under the Gumbel-Max reparameterization, the event that D_1 = 1 is the event that {G_1 + log α_1 > G_2 + log α_2}, where G_1, G_2 ∼ Gumbel i.i.d. The difference of two Gumbels is a Logistic distribution, which can be sampled in the following way: G_1 − G_2 =_d log U − log(1 − U) where U ∼ Uniform(0, 1). So, if α = α_1/α_2, then we have

P(D_1 = 1) = P(G_1 + log α_1 > G_2 + log α_2) = P(log U − log(1 − U) + log α > 0).   (15)

Thus, D_1 =_d H(log α + log U − log(1 − U)), where H is the unit step function.
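As an illustrative sketch (ours, not the paper's code), the binary Gumbel-Max sampler above reduces to thresholding a shifted logistic sample:

import numpy as np

def sample_bernoulli_logistic(alpha, rng=None):
    # alpha = alpha1 / alpha2; returns D1 = H(log(alpha) + logit(U)) as in Eq. 15
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform()
    logistic = np.log(u) - np.log1p(-u)    # distributed as the difference of two Gumbels
    return float(np.log(alpha) + logistic > 0)

Averaging many such draws recovers P(D1 = 1) = alpha / (1 + alpha).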
1611.00712#51
1611.00712#53
1611.00712
[ "1610.05683" ]
1611.00712#53
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Correspondingly, we can consider the Binary Concrete relaxation that results from this process. As in the n-ary case, we consider the sampling routine for a Binary Concrete random variable X ∈ (0, 1):

X = 1 / (1 + exp(−(log α + L)/λ))   (16)

where L ∼ Logistic. We define the Binary Concrete random variable X by its density on the unit interval.

Definition 2 (Binary Concrete Random Variables). Let α ∈ (0, ∞) be a location and λ ∈ (0, ∞) a temperature. X has a Binary Concrete distribution X ∼ BinConcrete(α, λ) if its density on (0, 1) is:

p_{α,λ}(x) = λ α x^{−λ−1} (1 − x)^{−λ−1} / ( α x^{−λ} + (1 − x)^{−λ} )^2.   (17)
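A minimal sketch (ours; names are illustrative) of sampling X ∼ BinConcrete(α, λ) via Eq. 16 and evaluating the density of Eq. 17:

import numpy as np

def sample_bin_concrete(alpha, lam, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform()
    logistic = np.log(u) - np.log1p(-u)                               # L ~ Logistic
    return 1.0 / (1.0 + np.exp(-(np.log(alpha) + logistic) / lam))    # Eq. 16

def bin_concrete_density(x, alpha, lam):
    num = lam * alpha * x ** (-lam - 1) * (1 - x) ** (-lam - 1)
    den = (alpha * x ** (-lam) + (1 - x) ** (-lam)) ** 2
    return num / den                                                  # Eq. 17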
1611.00712#52
1611.00712#54
1611.00712
[ "1610.05683" ]
1611.00712#54
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
We state without proof the special case of Proposition 1 for Binary Concrete distributions.

Proposition 2 (Some Properties of Binary Concrete Random Variables). Let X ∼ BinConcrete(α, λ) with location parameter α ∈ (0, ∞) and temperature λ ∈ (0, ∞), then

(a) (Reparameterization) If L ∼ Logistic, then X =_d 1 / (1 + exp(−(log α + L)/λ)),
(b) (Rounding) P(X > 0.5) = α/(1 + α),
(c) (Zero temperature) P(lim_{λ→0} X = 1) = α/(1 + α),
(d) (Convex eventually) If λ ≤ 1, then p_{α,λ}(x) is log-convex in x.
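Property (b) is easy to check empirically; the following short sketch (ours) estimates P(X > 0.5) by Monte Carlo and compares it with α/(1 + α):

import numpy as np

rng = np.random.default_rng(0)
alpha, lam, n = 2.0, 0.5, 100000
u = rng.uniform(size=n)
x = 1.0 / (1.0 + np.exp(-(np.log(alpha) + np.log(u) - np.log1p(-u)) / lam))
print(np.mean(x > 0.5), alpha / (1 + alpha))   # the two numbers should agree closely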
1611.00712#53
1611.00712#55
1611.00712
[ "1610.05683" ]
1611.00712#55
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
We can generalize the binary circuit beyond Logistic random variables. Consider an arbitrary random variable X with infinite support on R. If Φ : R → (0, 1) is its CDF, then

P(H(X) = 1) = 1 − Φ(0).

If we want this to have a Bernoulli distribution with probability α/(1 + α), then we should solve the equation

1 − Φ(0) = α / (1 + α).

This gives Φ(0) = 1/(1 + α), which can be accomplished by relocating the random variable Y with CDF Φ to be X = Y − Φ^{−1}(1/(1 + α)).

# C USING CONCRETE RELAXATIONS

In this section we include some tips for implementing and using the Concrete distribution as a relaxation. We use the following notation:

σ(x) = 1 / (1 + exp(−x)),
LΣE_{k=1}^n { x_k } = log Σ_{k=1}^n exp(x_k).

Both sigmoid and log-sum-exp are common operations in libraries like TensorFlow or Theano.
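For reference, numerically stable versions of these two operations in plain NumPy (a sketch of ours, not tied to any particular framework) might look like:

import numpy as np

def sigma(x):
    # sigmoid via log(1 + exp(-x)), stable for large |x|
    return np.exp(-np.logaddexp(0.0, -x))

def log_sum_exp(x):
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))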
1611.00712#54
1611.00712#56
1611.00712
[ "1610.05683" ]
1611.00712#56
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
14
Published as a conference paper at ICLR 2017

# C.1 THE BASIC PROBLEM

For the sake of exposition, we consider a simple variational autoencoder with a single discrete random variable and objective L1(θ, a, α) given by Eq. 8 for a single data point x. This scenario will allow us to discuss all of the decisions one might make when using Concrete relaxations. In particular, let D ∼ Discrete(a) with a ∈ (0, ∞)^n be a one-hot discrete random variable in {0, 1}^n, let p_θ(x | d) be a likelihood (possibly a neural network), which is a continuous function of d and parameters θ, and let Q_α(d | x) be an approximate posterior over D whose unnormalized probabilities α(x) are a function (possibly a neural net with its own parameters) of x. Then, we care about optimizing

L1(θ, a, α) = E_{D∼Q_α(d|x)} [ log ( p_θ(x | D) P_a(D) / Q_α(D | x) ) ]   (18)

with respect to θ, a, and any parameters in α from samples of the SCG required to simulate an estimator of L1(θ, a, α).
1611.00712#55
1611.00712#57
1611.00712
[ "1610.05683" ]
1611.00712#57
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
# C.2 WHAT YOU MIGHT RELAX AND WHY

The first consideration when relaxing an estimator of Eq. 18 is how to relax the stochastic computation. The only sampling required to simulate L1(θ, a, α) is D ∼ Discrete(α(x)). The corresponding Concrete relaxation is to sample Z ∼ Concrete(α(x), λ1), whose temperature is λ1 and whose location parameters are the unnormalized probabilities α(x) of D. Let q_{α,λ1}(z | x) be the density of Z. We get a relaxed objective of the form:

E_{D∼Q_α(d|x)} [ · ]  →  E_{Z∼q_{α,λ1}(z|x)} [ · ]   (19)
1611.00712#56
1611.00712#58
1611.00712
[ "1610.05683" ]
1611.00712#58
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
This choice allows us to take derivatives through the stochastic computations of the graph. The second consideration is which objective to put in place of [ · ] in Eq. 19. We will consider the ideal scenario irrespective of numerical issues. In Subsection C.3 we address those numerical issues. The central question is how to treat the expectation of the ratio P_a(D)/Q_α(D | x) (which is the KL component of the loss) when Z replaces D. There are at least three options for how to modify the objective. They are, (20) replace the discrete mass with Concrete densities, (21) relax the computation of the discrete log mass, (22) replace it with the analytic discrete KL:

E_{Z∼q_{α,λ1}(z|x)} [ log p_θ(x | Z) + log ( p_{a,λ2}(Z) / q_{α,λ1}(Z | x) ) ]   (20)
1611.00712#57
1611.00712#59
1611.00712
[ "1610.05683" ]
1611.00712#59
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
E_{Z∼q_{α,λ1}(z|x)} [ log p_θ(x | Z) + Σ_{i=1}^n Z_i log ( P_a(d^{(i)}) / Q_α(d^{(i)} | x) ) ]   (21)

E_{Z∼q_{α,λ1}(z|x)} [ log p_θ(x | Z) ] + Σ_{i=1}^n Q_α(d^{(i)} | x) log ( P_a(d^{(i)}) / Q_α(d^{(i)} | x) )   (22)

where d^{(i)} is a one-hot binary vector with d^{(i)}_i = 1 and p_{a,λ2}(z) is the density of some Concrete random variable with temperature λ2 and location parameters a. Although (22) or (21) is tempting, we emphasize that these are NOT necessarily lower bounds on log p(x) in the relaxed model. (20) is the only objective guaranteed to be a lower bound:

E_{Z∼q_{α,λ1}(z|x)} [ log p_θ(x | Z) + log ( p_{a,λ2}(Z) / q_{α,λ1}(Z | x) ) ] ≤ log ∫ p_θ(x | z) p_{a,λ2}(z) dz.   (23)

For this reason we consider objectives of the form (20). Choosing (22) or (21) is possible, but the value of these objectives is not interpretable and one should early stop, otherwise it will overfit
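In code, a single-sample estimator of objective (20) can be sketched as follows (our own illustration; sample_z, log_p_x_given_z, log_p_prior and log_q_posterior are assumed to be user-provided callables for the relaxed model and the two Concrete log-densities):

def relaxed_elbo_sample(x, sample_z, log_p_x_given_z, log_p_prior, log_q_posterior):
    # Objective (20): E_q[ log p(x|Z) + log p(Z) - log q(Z|x) ] with Concrete densities
    z = sample_z(x)   # Z ~ q_{alpha,lambda1}(z|x), drawn with the reparameterization trick
    return log_p_x_given_z(x, z) + log_p_prior(z) - log_q_posterior(z, x)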
1611.00712#58
1611.00712#60
1611.00712
[ "1610.05683" ]
1611.00712#60
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
to the spurious "KL" component of the loss. We now consider practical issues with (20) and how to address them. All together we can interpret q_{α,λ1}(z | x) as the Concrete relaxation of the variational posterior and p_{a,λ2}(z) as the relaxation of the prior.

15
Published as a conference paper at ICLR 2017

C.3 WHICH RANDOM VARIABLE TO TREAT AS THE STOCHASTIC NODE

When implementing a SCG like the variational autoencoder example, we need to compute log-probabilities of Concrete random variables. This computation can suffer from underflow, so where possible it's better to take a different node on the relaxed graph as the stochastic node on which log-likelihood terms are computed. For example, it's tempting in the case of Concrete random variables to treat the Gumbels as the stochastic node on which the log-likelihood terms are evaluated and the softmax as downstream computation. This will be a looser bound in the context of variational inference than the corresponding bound when treating the Concrete relaxed states as the node. The solution we found to work well was to work with Concrete random variables in log-space. Consider the following vector in R^n for location parameters α ∈ (0, ∞)^n and G_k ∼
1611.00712#59
1611.00712#61
1611.00712
[ "1610.05683" ]
1611.00712#61
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Gumbel i.i.d.:

Y_k = (log α_k + G_k)/λ − LΣE_{i=1}^n { (log α_i + G_i)/λ },

therefore we call Y an ExpConcrete random variable, Y ∼ ExpConcrete(α, λ). The advantage of this reparameterization is that the KL terms of a variational loss are invariant under invertible transformation. exp is invertible, so the KL between two ExpConcrete random variables is the same as the KL between two Concrete random variables. The log-density log κ_{α,λ}(y) of an ExpConcrete(α, λ) is also simple to compute:

log κ_{α,λ}(y) = log((n−1)!) + (n−1) log λ + Σ_{k=1}^n ( log α_k − λ y_k ) − n LΣE_{k=1}^n { log α_k − λ y_k }

for y ∈ R^n such that LΣE_{k=1}^n { y_k } = 0. Note that the sample space of the ExpConcrete distribution is still interpretable in the zero temperature limit. In the limit of λ → 0, ExpConcrete random variables become discrete random variables over the one-hot vectors of d ∈ {−∞, 0}^n.
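A sketch (ours, with illustrative names) of sampling an ExpConcrete variable in log-space and evaluating log κ exactly as written above:

import numpy as np
from scipy.special import gammaln, logsumexp

def sample_exp_concrete(alpha, lam, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    g = -np.log(-np.log(rng.uniform(size=alpha.shape)))   # Gumbel(0, 1) noise
    z = (np.log(alpha) + g) / lam
    return z - logsumexp(z)          # Y_k; exp(Y) is a Concrete sample

def exp_concrete_log_density(y, alpha, lam):
    n = y.shape[0]
    t = np.log(alpha) - lam * y
    return gammaln(n) + (n - 1) * np.log(lam) + np.sum(t) - n * logsumexp(t)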
1611.00712#60
1611.00712#62
1611.00712
[ "1610.05683" ]
1611.00712#62
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
For such d, LΣE_{k=1}^n { d_k } = 0, and exp(Y) in this case results in the one-hot vectors in {0, 1}^n.

C.3.1 n-ARY CONCRETE

Returning to our initial task of relaxing L1(θ, a, α), let Y ∼ ExpConcrete(α(x), λ1) with density κ_{α,λ1}(y | x) be the ExpConcrete latent variable corresponding to the Concrete relaxation q_{α,λ1}(z | x) of the variational posterior Q_α(d | x). Let ρ_{a,λ2}(y) be the density of an ExpConcrete random variable corresponding to the Concrete relaxation p_{a,λ2}(z) of P_a(d). All together we can see that
1611.00712#61
1611.00712#63
1611.00712
[ "1610.05683" ]
1611.00712#63
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
Published as a conference paper at ICLR 2017 All together the relaxation in the binary special case would be £:(6,a,a)" EB [logpo(x|a(Â¥)) + 10g 242) ; 27 Â¥~ga,a, (y|®) Ja, (Â¥|2) e where fa,λ2(y) is the density of a Logistic random variable sampled via Eq. 26 with location a and temperature λ2. This section had a dense array of densities, so we summarize the relevant ones, along with how to sample from them, in Appendix F. C.4 CHOOSING THE TEMPERATURE The success of Concrete relaxations will depend heavily on the choice of temperature during train- ing. It is important that the relaxed nodes are not able to represent a precise real valued mode in the interior of the simplex as in Figure For example, choosing additive Gaussian noise e ~ Normal(0, 1) with the logistic function o(x) to get relaxed Bernoullis of the form o(â ¬ + 1) will result in a large mode in the centre of the interval. This is because the tails of the Gaussian distribution drop off much faster than the rate at which o squashes. Even including a temperature parameter does not completely solve this problem; the density of o((â ¬ + 4)/A) at any temperature still goes to 0 as its approaches the boundaries 0 and 1 of the unit interval. Therefore |(D]of Proposi- tion|I]is a conservative guideline for generic n-ary Concrete relaxations; at temperatures lower than (n â 1)~! we are guaranteed not to have any modes in the interior for any a â ¬ (0, 00)â .
1611.00712#62
1611.00712#64
1611.00712
[ "1610.05683" ]
1611.00712#64
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
In the case of the Binary Concrete distribution, the tails of the Logistic additive noise are balanced with the logistic squashing function and for temperatures \ < 1 the density of the Binary Concrete distribu- tion is log-convex for all parameters a, see Figure[3b] Still, practice will often disagree with theory here. The peakiness of the Concrete distribution increases with n, so much higher temperatures are tolerated (usually necessary). For n = 1 temperatures A < (n â 1)~1 is a good guideline. For n > 1 taking A < (n â 1)~1 is not necessarily a good guideline, although it will depend on n and the specific application. As n â > oo the Concrete distribution becomes peakier, because the random normalizing constant ee exp((log ax + Gx)/A) grows. This means that practically speaking the optimization can tolerate much higher temperatures than (n â 1)~!. We found in the cases n = 4 that \ = 1 was the best temperature and in n = 8, A = 2/3 was the best. Yet A = 2/3 was the best single perform- ing temperature across the n â ¬ {2,4,8} cases that we considered. We recommend starting in that ball-park and exploring for any specific application. When the loss depends on a KL divergence between two Concrete nodes, itâ s possible to give the nodes distinct temperatures.
1611.00712#63
1611.00712#65
1611.00712
[ "1610.05683" ]
1611.00712#65
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
We found this to improve results quite dramatically. In the context of our original problem and itâ s relaxation: Y) L£1(0,a, a) = E log po(2| exp(Y)) + lo Por) > 1(0,, «) vn e te) ¢ pe(z| exp(Y)) 8 aa, Ve) |? (28) Both λ1 for the posterior temperature and λ2 for the prior temperature are tunable hyperparameters. # D EXPERIMENTAL DETAILS The basic model architectures we considered are exactly analogous to those in Burda et al. (2016) with Concrete/discrete random variables replacing Gaussians. # D.1 â VS â ¼
1611.00712#64
1611.00712#66
1611.00712
[ "1610.05683" ]
1611.00712#66
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
The conditioning functions we used were either linear or non-linear. Non-linear consisted of two tanh layers of the same size as the preceding stochastic layer in the computation graph. # D.2 n-ARY LAYERS All our models are neural networks with layers of n-ary discrete stochastic nodes with log2(n)- log2(n). For a generic n-ary node dimensional states on the corners of the hypercube 1, 1 } {â 17 Published as a conference paper at ICLR 2017 Discrete(α) for sampling proceeds as follows. Sample a n-ary discrete random variable D log2(n) α } {â as columns, then we took Y = CD as downstream computation on D. The corresponding Con- crete relaxation is to take X ) and set (0, Ë Y = CX. For the binary case, this amounts to simply sampling U Uniform(0, 1) and taking â ¼ 1. The corresponding Binary Concrete relaxation is Y = 2H(log U U ) + log α) â Ë Y = 2Ï ((log U 1. U ) + log α)/λ) â â â â â
1611.00712#65
1611.00712#67
1611.00712
[ "1610.05683" ]
1611.00712#67
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
# D.3 BIAS INITIALIZATION All biases were initialized to 0 with the exception of the biases in the prior decoder distribution over the 784 or 392 observed units. These were initialized to the logit of the base rate averaged over the respective dataset (MNIST or Omniglot). # D.4 CENTERING We also found it beneï¬ cial to center the layers of the inference network during training. The activity 1, 1)d of each stochastic layer was centered during training by maintaining a exponentially in ( decaying average with rate 0.9 over minibatches. This running average was subtracted from the activity of the layer before it was updated. Gradients did not ï¬ ow throw this computation, so it simply amounted to a dynamic offset. The averages were not updated during the evaluation. D.5 HYPERPARAMETER SELECTION All models were initialized with the heuristic of Glorot & Bengio (2010) and optimized using Adam (Kingma & Ba, 2014) with parameters β1 = 0.9, β2 = 0.999 for 107 steps on minibatches of size 64. Hyperparameters were selected on the MNIST dataset by grid search taking the values that performed best on the validation set. Learning rates were chosen from and weight decay from . Two sets of hyperparameters were selected, one for linear models and one for non-linear models. The linear modelsâ hyperparameters were selected with L5(θ, Ï ) objective. The non-linear modelsâ hyperpa- the 200Hâ 200Hâ 784V density model on the rameters were selected with the 200H L5(θ, Ï ) objective. For 784V density model on the 200H â ¼ density estimation, the Concrete relaxation hyperparameters were (weight decay = 0, learning rate 10â 4) for linear and (weight decay = 0, learning rate = 10â 4) for non-linear. For structured = 3 prediction Concrete relaxations used (weight decay = 10â 3, learning rate = 3 In addition to tuning learning rate and weight decay, we tuned temperatures for the Concrete relax- ations on the density estimation task. We found it valuable to have different values for the prior and posterior distributions, see Eq. 28.
1611.00712#66
1611.00712#68
1611.00712
[ "1610.05683" ]
1611.00712#68
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
In particular, for binary we found that (prior λ2 = 1/2, posterior λ1 = 2/3) was best, for 4-ary we found (prior λ2 = 2/3, posterior λ1 = 1) was best, and (prior λ2 = 2/5, posterior λ1 = 2/3) for 8-ary. No temperature annealing was used. For structured prediction we used just the corresponding posterior λ1 as the temperature for the whole graph, as there was no variational posterior. We performed early stopping when training with the score function estimators (VIMCO/NVIL) as they were much more prone to overï¬
1611.00712#67
1611.00712#69
1611.00712
[ "1610.05683" ]
1611.00712#69
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
tting. 18 Published as a conference paper at ICLR 2017 # E EXTRA RESULTS binary (240H â ¼784V) 4-ary (240H â ¼784V) 8-ary (240H â ¼784V) binary (240Hâ ¼240H â ¼784V) 4-ary (240Hâ ¼240H â ¼784V) 8-ary (240Hâ ¼240H â ¼784V) m Test 91.9 1 89.0 5 88.4 50 1 5 50 91.4 89.4 89.7 1 5 50 92.5 90.5 90.5 1 5 50 87.9 86.6 86.0 1 5 50 87.4 86.7 86.7 1 5 50 88.2 87.4 87.2 Train 90.7 87.1 85.7 89.7 87.0 86.5 89.9 87.0 86.7 86.0 83.7 82.7 85.0 83.3 83.0 85.9 84.6 84.0 Test 108.0 107.7 109.0 110.7 110.5 113.0 119.61 120.7 121.7 106.6 106.9 108.7 106.6 108.3 109.4 111.3 110.5 111.1 Train 102.2 100.0 99.1 1002.7 100.2 100.0 105.3 102.7 101.0 99.0 97.1 95.9 97.8 97.3 96.8 102.5 100.5 99.5 Table 2: Density estimation using Concrete relaxations with distinct arity of layers.
1611.00712#68
1611.00712#70
1611.00712
[ "1610.05683" ]
1611.00712#70
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
19 Published as a conference paper at ICLR 2017 # F CHEAT SHEET 1 = 1+ exp(â 2) LEE {xx} = log (> a) k=1 log Anâ ! = {© â ¬ R" | xz â ¬ (â c, 0), LEE{ex} = = of Distribution and Domains Reparameterization/How To Sample # Mass/Density G G Gumbel R â ¼ â # G d= â 10g(~log(U)) # log( # log(U )) â â # exp( exp(â g â exp(â 9)) # g # exp( g)) â â â # L L # Logistic R ~ â ¼ â # LeR # L d= log(U ) â â # log(1 â â U ) # exp( â (1 + exp( # l) # expl-)? l))2 â # X µ λ Logistic(µ, λ) R (0, ~ â ¼ â â # neR ) â # X d= # L + µ λ # λ exp( (1 + exp( λx + µ) λx + µ))2 â â # exp(â Azx # X X α # Bernoulli(α) 0, 1 # ~ che â ¼ â { (0, â } ) â # X d= 1 {i # if L + log α otherwise â ¥ 0 α 1 + α if x = 1 # X X α λ BinConcrete(α, λ) (0, 1) ) (0, â ) (0, â ~ â ¼ â â â # X d= Ï ((L + log α)/λ) λαxâ λâ 1(1 (αxâ λ + (1 â â â x)â λâ 1 x)â λ)2 X X â ¬ Discrete(α) â ¼ n 0, 1 } â { k=1 Xk = 1 # d= # Xk # Xp= # fl 0 if log αk + Gk > log αi + Gi for i otherwise # = k # αk i=1 αi if xk = 1
1611.00712#69
1611.00712#71
1611.00712
[ "1610.05683" ]
1611.00712#71
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
# α â ¬ â (0, # )n 00)â â # X X α λ X ~ Concrete(a, \) n -y- XeAr-l x, £ _2xp((log ax + Ge)/) (nâ 1)! Il ag â ¬ (0, 00)â x SUL, exp((log ax, + Gi)/A) A~(mI) hey Diet air; * â ¬ (0, 00) # X X α λ ~ ExpConcrete(a, \) â ¬log A"! d logan + Gr n loga; + Gi (nâ 1)! Qn exp( = â ¬ (0, 00)â Xn = r ~ TEE r A~(mI) rl Foie Gi eXP(â Azi) â ¬ (0, 00) Table 3: Cheat sheet for the random variables we use in this work. Note that some of these are atypical parameterizations, particularly the Bernoulli and Logistic random variables. The table only Uniform(0, 1). From there on it may assumes that you can sample uniform random numbers U Logistic is deï¬ ned in the deï¬ ne random variables and reuse them later on. For example, L second row, and after that point L represents a Logistic random variable that can be replaced by U ). Whenever random variables are indexed, e.g. Gk, they represent separate log U independent calls to a random number generator.
1611.00712#70
1611.00712#72
1611.00712
[ "1610.05683" ]
1611.00712#72
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
20 # λxi)
1611.00712#71
1611.00712
[ "1610.05683" ]
1611.00625#0
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
# TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games

Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier

[email protected], [email protected]
1611.00625#1
1611.00625
[ "1606.01540" ]
1611.00625#1
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
March 2, 2022 # Abstract We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch [9]. This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft. # Introduction Deep Learning techniques [13] have recently enabled researchers to successfully tackle low-level perception problems in a supervised learning fashion.
1611.00625#0
1611.00625#2
1611.00625
[ "1606.01540" ]
1611.00625#2
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
In the ï¬ eld of Reinforcement Learning this has transferred into the ability to develop agents able to learn to act in high-dimensional input spaces. In particular, deep neural networks have been used to help reinforcement learning scale to environments with visual inputs, allowing them to learn policies in testbeds that previously were completely intractable. For instance, algorithms such as Deep Q-Network (DQN) [14] have been shown to reach human-level performances on most of the classic ATARI 2600 games by learning a controller directly from raw pixels, and without any additional supervision beside the score. Most of the work spawned in this new area has however tackled environments where the state is fully observable, the reward function has no or low delay, and the action set is relatively small. To solve the great majority of real life problems agents must instead be able to handle partial observability, structured and complex dynamics, and noisy and high-dimensional control interfaces. To provide the community with useful research environments, work was done towards building platforms based on videogames such as Torcs [27], Mario AI [20], Unrealâ s BotPrize [10], the Atari Learning Environment [3], VizDoom [12], and Minecraft [11], all of which have allowed researchers to train deep learning models with imitation learning, reinforcement learning and various decision making algorithms on increasingly diï¬
1611.00625#1
1611.00625#3
1611.00625
[ "1606.01540" ]
1611.00625#3
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
cult problems. Recently there have also been eï¬ orts to unite those and many other such environments in one platform to provide a standard interface for interacting with them [4]. We propose a bridge between StarCraft: Brood War, an RTS game with an active AI research community and annual AI competitions [16, 6, 1], and Lua, with examples in Torch [9] (a machine learning library). 1 # 2 Real-Time Strategy for Games AI Real-time strategy (RTS) games have historically been a domain of interest of the planning and decision making research communities [5, 2, 6, 16, 17]. This type of games aims to simulate the control of multiple units in a military setting at diï¬ erent scales and level of complexity, usually in a ï¬ xed-size 2D map, in duel or in small teams. The goal of the player is to collect resources which can be used to expand their control on the map, create buildings and units to ï¬ ght oï¬ enemy deployments, and ultimately destroy the opponents. These games exhibit durative moves (with complex game dynamics) with simultaneous actions (all players can give commands to any of their units at any time), and very often partial observability (a â fog of warâ : opponent units not in the vicinity of a playerâ s units are not shown). RTS gameplay: Components RTS game play are economy and battles (â
1611.00625#2
1611.00625#4
1611.00625
[ "1606.01540" ]
1611.00625#4
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
macroâ and â microâ respectively): players need to gather resources to build military units and defeat their opponents. To that end, they often have worker units (or extraction structures) that can gather resources needed to build workers, buildings, military units and research upgrades. Workers are often also builders (as in StarCraft), and are weak in ï¬ ghts compared to military units. Resources may be of varying degrees of abundance and importance. For instance, in StarCraft minerals are used for everything, whereas gas is only required for advanced buildings or military units, and technology upgrades. Buildings and research deï¬ ne technology trees (directed acyclic graphs) and each state of a â tech treeâ allow for the production of diï¬ erent unit types and the training of new unit abilities. Each unit and building has a range of sight that provides the player with a view of the map. Parts of the map not in the sight range of the playerâ s units are under fog of war and the player cannot observe what happens there. A considerable part of the strategy and the tactics lies in which armies to deploy and where. Military units in RTS games have multiple properties which diï¬ er between unit types, such as: attack range (including melee), damage types, armor, speed, area of eï¬ ects, invisibility, ï¬ ight, and special abilities. Units can have attacks and defenses that counter each others in a rock-paper-scissors fashion, making planning armies a extremely challenging and strategically rich process.
1611.00625#3
1611.00625#5
1611.00625
[ "1606.01540" ]
1611.00625#5
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
An â openingâ denotes the same thing as in Chess: an early game plan for which the player has to make choices. That is the case in Chess because one can move only one piece at a time (each turn), and in RTS games because, during the development phase, one is economically limited and has to choose which tech paths to pursue. Available resources constrain the technology advancement and the number of units one can produce. As producing buildings and units also take time, the arbitrage between investing in the economy, in technological advancement, and in units production is the crux of the strategy during the whole game. Related work: Classical AI approaches normally involving planning and search [2, 15, 24, 7] are extremely challenged by the combinatorial action space and the complex dynamics of RTS games, making simulation (and thus Monte Carlo tree search) diï¬ cult [8, 22]. Other characteristics such as partial observability, the non-obvious quantiï¬ cation of the value of the state, and the problem of featurizing a dynamic and structured state contribute to making them an interesting problem, which altogether ultimately also make them an excellent benchmark for AI. As the scope of this paper is not to give a review of RTS AI research, we refer the reader to these surveys about existing research on RTS and StarCraft AI [16, 17]. It is currently tedious to do machine learning research in this domain. Most previous reinforcement learning research involve simple models or limited experimental settings [26, 23]. Other models are trained on oï¬ ine datasets of highly skilled players [25, 18, 19, 21]. Contrary to most Atari games [3], RTS games have much higher action spaces and much more structured states. Thus, we advocate here to have not only the pixels as input and keyboard/mouse for commands, as in [3, 4, 12], but also a structured representation of the game state, as in
1611.00625#4
1611.00625#6
1611.00625
[ "1606.01540" ]
1611.00625#6
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
2 -- main game engine loop: while true do game.receive_player_actions() game.compute_dynamics() -- our injected code: torchcraft.send_state() torchcraft.receive_actions() featurize, model = init() tc = require â torchcraftâ tc:connect(port) while not tc.state.game_ended do tc:receive() features = featurize(tc.state) actions = model:forward(features) tc:send(tc:tocommand(actions)) # end # end Figure 1: Simpliï¬ ed client/server code that runs in the game engine (server, on the left) and the library for the machine learning library or framework (client, on the right). [11]. This makes it easier to try a broad variety of models, and may be useful in shaping loss functions for pixel-based models. Finally, StarCraft: Brood War is a highly popular game (more than 9.5 million copies sold) with professional players, which provides interesting datasets, human feedback, and a good benchmark of what is possible to achieve within the game. There also exists an active academic community that organizes AI competitions.
1611.00625#5
1611.00625#7
1611.00625
[ "1606.01540" ]
1611.00625#7
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
# 3 Design The simplistic design of TorchCraft is applicable to any video game and any machine learning library or framework. Our current implementation connects Torch to a low level interface [1] to StarCraft: Brood War. TorchCraftâ s approach is to dynamically inject a piece of code in the game engine that will be a server. This server sends the state of the game to a client (our machine learning code), and receives commands to send to the game. This is illustrated in Figure 1. The two modules are entirely synchronous, but the we provide two modalities of execution based on how we interact with the game: Game-controlled - we inject a DLL that provides the game interface to the bots, and one that includes all the instructions to communicate with the machine learning client, interpreted by the game as a player (or bot AI). In this mode, the server starts at the beginning of the match and shuts down when that ends. In-between matches it is therefore necessary to re-establish the connection with the client, however this allows for the setting of multiple learning instances extremely easily. Game-attached - we inject a DLL that provides the game interface to the bots, and we interact with it by attaching to the game process and communicating via pipes. In this mode there is no need to re-establish the connection with the game every time, and the control of the game is completely automatized out of the box, however itâ s currently impossible to create multiple learning instances on the same guest OS. Whatever mode one chooses to use, TorchCraft is seen by the AI programmer as a library that provides: connect(), receive() (to get the state), send(commands), and some helper functions about speciï¬ cs of StarCraftâ s rules and state representation. TorchCraft also provides an eï¬ cient way to store game frames data from past (played or observed) games so that existing state (â replaysâ , â tracesâ ) can be re-examined. 3 # 4 Conclusion
1611.00625#6
1611.00625#8
1611.00625
[ "1606.01540" ]
1611.00625#8
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
We presented several work that established RTS games as a source of interesting and relevant problems for the AI research community to work on. We believe that an eï¬ cient bridge between low level existing APIs and machine learning frameworks/libraries would enable and foster research on such games. We presented TorchCraft: a library that enables state-of-the-art machine learning research on real game data by interfacing Torch with StarCraft: BroodWar. TorchCraft has already been used in reinforcement learning experiments on StarCraft, which led to the results in [23] (soon to be open-sourced too and included within TorchCraft).
1611.00625#7
1611.00625#9
1611.00625
[ "1606.01540" ]
1611.00625#9
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
# 5 Acknowledgements We would like to thank Yann LeCun, Léon Bottou, Pushmeet Kohli, Subramanian Ramamoorthy, and Phil Torr for the continuous feedback and help with various aspects of this work. Many thanks to David Churchill for proofreading early versions of this paper. # References [1] BWAPI: Brood war api, an api for interacting with starcraft: Broodwar (1.16.1). https://bwapi. github.io/, 2009â 2015. [2] Aha, D. W., Molineaux, M., and Ponsen, M.
1611.00625#8
1611.00625#10
1611.00625
[ "1606.01540" ]
1611.00625#10
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
Learning to win: Case-based plan selection in a real-time strategy game. In International Conference on Case-Based Reasoning (2005), Springer, pp. 5â 20. [3] Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation platform for general agents. Journal of Artiï¬ cial Intelligence Research (2012). [4] Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. Openai gym. arXiv preprint arXiv:1606.01540 (2016). [5] Buro, M., and Furtak, T. Rts games and real-time ai research. In Proceedings of the Behavior Representation in Modeling and Simulation Conference (BRIMS) (2004), vol. 6370. # [6] Churchill, D. [6] Churchill, D. Starcraft ai competition. http://www.cs.mun.ca/~dchurchill/ starcraftaicomp/, 2011â 2016. [7] Churchill, D.
1611.00625#9
1611.00625#11
1611.00625
[ "1606.01540" ]
1611.00625#11
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
Heuristic Search Techniques for Real-Time Strategy Games. PhD thesis, University of Alberta, 2016. [8] Churchill, D., Saffidine, A., and Buro, M. Fast heuristic search for rts game combat scenarios. In AIIDE (2012). [9] Collobert, R., Kavukcuoglu, K., and Farabet, C. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop (2011), no. EPFL-CONF-192376. [10] Hingston, P.
1611.00625#10
1611.00625#12
1611.00625
[ "1606.01540" ]
1611.00625#12
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
A turing test for computer game bots. IEEE Transactions on Computational Intelligence and AI in Games 1, 3 (2009), 169â 186. [11] Johnson, M., Hofmann, K., Hutton, T., and Bignell, D. The malmo platform for artiï¬ cial intelligence experimentation. In International joint conference on artiï¬ cial intelligence (IJCAI) (2016). [12] Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and JaÅ kowski, W. Vizdoom: A doom- based ai research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097 (2016). [13] LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature 521, 7553 (2015), 436â 444. [14] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al.
1611.00625#11
1611.00625#13
1611.00625
[ "1606.01540" ]
1611.00625#13
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529â 533. 4 [15] Ontañón, S., Mishra, K., Sugandh, N., and Ram, A. Case-based planning and execution for real-time strategy games. In International Conference on Case-Based Reasoning (2007), Springer Berlin Heidelberg, pp. 164â 178. [16] Ontanón, S., Synnaeve, G., Uriarte, A., Richoux, F., Churchill, D., and Preuss, M.
1611.00625#12
1611.00625#14
1611.00625
[ "1606.01540" ]
1611.00625#14
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
A survey of real-time strategy game ai research and competition in starcraft. Computational Intelligence and AI in Games, IEEE Transactions on 5, 4 (2013), 293â 311. [17] Robertson, G., and Watson, I. A review of real-time strategy game ai. AI Magazine 35, 4 (2014), 75â 104. [18] Synnaeve, G. Bayesian programming and learning for multi-player video games: application to RTS AI. PhD thesis, PhD thesis, Institut National Polytechnique de Grenobleâ INPG, 2012. [19] Synnaeve, G., and Bessiere, P.
1611.00625#13
1611.00625#15
1611.00625
[ "1606.01540" ]
1611.00625#15
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
A dataset for starcraft ai & an example of armies clustering. arXiv preprint arXiv:1211.4552 (2012). [20] Togelius, J., Karakovskiy, S., and Baumgarten, R. The 2009 mario ai competition. In IEEE Congress on Evolutionary Computation (2010), IEEE, pp. 1â 8. [21] Uriarte, A. Starcraft brood war data mining. http://nova.wolfwork.com/dataMining.html, 2015. [22] Uriarte, A., and Ontañón, S.
1611.00625#14
1611.00625#16
1611.00625
[ "1606.01540" ]
1611.00625#16
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
Game-tree search over high-level game states in rts games. In Tenth Artiï¬ cial Intelligence and Interactive Digital Entertainment Conference (2014). [23] Usunier, N., Synnaeve, G., Lin, Z., and Chintala, S. Episodic exploration for deep deterministic policies: An application to starcraft micromanagement tasks. arXiv preprint arXiv:1609.02993 (2016). [24] Weber, B. Reactive planning for micromanagement in rts games. Department of Computer Science, University of California, Santa Cruz (2014). [25] Weber, B. G., and Mateas, M.
1611.00625#15
1611.00625#17
1611.00625
[ "1606.01540" ]
1611.00625#17
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
A data mining approach to strategy prediction. In 2009 IEEE Symposium on Computational Intelligence and Games (2009), IEEE, pp. 140â 147. [26] Wender, S., and Watson, I. Applying reinforcement learning to small scale combat in the real-time strategy game starcraft: broodwar. In Computational Intelligence and Games (CIG), 2012 IEEE Conference on (2012), IEEE, pp. 402â 408. [27] Wymann, B., Espié, E., Guionneau, C., Dimitrakakis, C., Coulom, R., and Sumner, A. Torcs, the open racing car simulator. Software available at http://torcs. sourceforge. net (2000).
1611.00625#16
1611.00625#18
1611.00625
[ "1606.01540" ]
1611.00625#18
TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
5

# A Frame data

In addition to the visual data, the TorchCraft server extracts certain information for the game state and sends it over to the connected clients in a structured "frame". The frame is formatted in a table in roughly the following structure:

Received update: {
    // Number of
    // NB: a "game"
1611.00625#17
1611.00625#19
1611.00625
[ "1606.01540" ]