of the simplex as in Figure 2d. If this is the case, it is possible for the relaxed random variable to communicate much more than log2(n) bits of information about its α parameters. This might lead the relaxation to prefer the interior of the simplex to the vertices, and as a result there will be a large integrality gap in the overall performance of the discrete graph. Therefore Proposition 1 (d) is a conservative guideline for generic n-ary Concrete relaxations; at temperatures lower than (n − 1)^{-1} the relaxed density is guaranteed to have no modes in the interior of the simplex. We discuss the subtleties of choosing the temperatures in more detail in Appendix C. Ultimately the best choice of λ and the performance of the relaxation for any specific n will be an empirical question.
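As a rough illustration of this guideline, the sketch below (assuming NumPy; `sample_concrete` is a hypothetical helper, not code from the paper) draws Concrete samples via the softmax-of-Gumbels reparameterization and compares how close samples sit to a vertex at a temperature below versus well above (n − 1)^{-1}.

```python
import numpy as np

def sample_concrete(alpha, lam, size, rng):
    """Draw relaxed one-hot samples from Concrete(alpha, lam) as the softmax of
    temperature-scaled, Gumbel-perturbed log-locations."""
    g = rng.gumbel(size=(size, len(alpha)))        # i.i.d. Gumbel(0, 1) noise
    logits = (np.log(alpha) + g) / lam
    logits -= logits.max(axis=1, keepdims=True)    # numerically stable softmax
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
alpha = np.array([1.0, 2.0, 3.0, 4.0])             # n = 4 locations
for lam in (1.0 / (len(alpha) - 1), 2.0):          # below vs. well above (n - 1)^-1
    x = sample_concrete(alpha, lam, size=10_000, rng=rng)
    print(f"lambda = {lam:.3f}, mean max coordinate = {x.max(axis=1).mean():.3f}")
```

A mean maximum coordinate near 1 indicates samples concentrating near the vertices; at higher temperatures the samples drift toward the interior of the simplex.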
# 4 RELATED WORK
Perhaps the most common distribution over the simplex is the Dirichlet, with density p_α(x) ∝ ∏_{k=1}^n x_k^{α_k − 1} on x ∈ Δ^{n−1}. The Dirichlet can be characterized by strong independence properties, and a great deal of work has been done to generalize it (Aitchison, 1985; Rayens & Srinivasan, 1994; Favaro et al., 2011). Of note is the Logistic Normal distribution (Atchison & Shen, 1980), which can be simulated by taking the softmax of n − 1 normal random variables and an nth logit that is deterministically zero. The Logistic Normal is an important distribution, because it can effectively model correlations within the simplex (Blei & Lafferty, 2006). To our knowledge the Concrete distribution does not fall completely into any family of distributions previously described. For λ < 1 the Concrete is in a class of normalized infinitely divisible distributions (S. Favaro, personal communication), and the results of Favaro et al. (2011) apply.
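The Logistic Normal simulation described above is simple enough to sketch directly. The snippet below (NumPy, with illustrative parameter values) softmaxes n − 1 Gaussian logits together with an nth logit fixed at zero.

```python
import numpy as np

def sample_logistic_normal(mu, sigma, rng):
    """One draw from a Logistic Normal on the simplex: softmax of n-1 Gaussian
    logits together with an nth logit that is deterministically zero."""
    logits = np.append(rng.normal(mu, sigma), 0.0)   # nth logit is fixed at 0
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
# n = 4 here: three Gaussian logits plus the zero logit give 4 coordinates summing to 1.
print(sample_logistic_normal(mu=np.zeros(3), sigma=np.ones(3), rng=rng))
```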
The idea of using a softmax of Gumbels as a relaxation for a discrete random variable was concurrently considered by Jang et al. (2016), where it was called the Gumbel-Softmax. They do not use the density in the relaxed objective, opting instead to compute all aspects of the graph, including discrete log-probability computations, with the relaxed stochastic state of the graph. In the case of variational inference, this relaxed objective is not a lower bound on the marginal likelihood of the observations, and care needs to be taken when optimizing it. The idea of using sigmoidal functions with additive input noise to approximate discreteness is also not new. Frey (1997) introduced nonlinear Gaussian units which computed their activation by passing Gaussian noise, with the mean and variance specified by the input to the unit, through a nonlinearity such as the logistic function. Salakhutdinov & Hinton (2009) binarized real-valued codes of an autoencoder by adding (Gaussian) noise to the logits before passing them through the logistic function. Most recently, to avoid the difficulty associated with likelihood-ratio methods, Kočiský et al. (2016) relaxed the discrete sampling operation by sampling a vector of Gaussians instead and passing those through a softmax.
There is another family of gradient estimators that have been studied in the context of training neural networks with discrete units. These are usually collected under the umbrella of straight-through estimators (Bengio et al., 2013; Raiko et al., 2014). The basic idea they use is passing forward discrete values, but taking gradients through the expected value. They have good empirical performance, but have not been shown to be the estimators of any loss function. This is in contrast to gradients from Concrete relaxations, which are biased with respect to the discrete graph, but unbiased with respect to the continuous one.
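For a single Bernoulli unit, a minimal sketch of the straight-through idea might look as follows (NumPy; the toy loss f and the helper name are assumptions for illustration, not the estimators used in the paper): the forward pass uses the hard 0/1 sample, while the gradient is taken as if the sample were the differentiable mean σ(θ).

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def straight_through_grad(theta, f, df, rng):
    """One-sample straight-through estimate of d/dtheta E[f(B)] for
    B ~ Bernoulli(sigmoid(theta)): the forward pass uses the hard sample,
    the backward pass differentiates as if B were the mean sigmoid(theta)."""
    p = sigmoid(theta)
    b = float(rng.random() < p)        # hard 0/1 value passed forward
    loss = f(b)                        # discrete forward pass
    grad = df(b) * p * (1.0 - p)       # df/db at the hard value times dp/dtheta
    return loss, grad

rng = np.random.default_rng(0)
f = lambda b: (b - 0.3) ** 2           # toy loss on the discrete value
df = lambda b: 2.0 * (b - 0.3)
print(straight_through_grad(theta=0.5, f=f, df=df, rng=rng))
```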
# 5 EXPERIMENTS
5.1 PROTOCOL
The aim of our experiments was to evaluate the effectiveness of the gradients of Concrete relaxations for optimizing SCGs with discrete nodes. We considered the tasks in (Mnih & Rezende, 2016): structured output prediction and density estimation. Both tasks are difficult optimization problems involving fitting probability distributions with hundreds of latent discrete nodes. We compared the performance of Concrete reparameterizations to two state-of-the-art score function estimators: VIMCO (Mnih & Rezende, 2016) for optimizing the multisample variational objective (m > 1) and NVIL (Mnih & Gregor, 2014) for optimizing the single-sample one (m = 1). We performed the experiments using the MNIST and Omniglot datasets. These are datasets of 28 × 28 images of handwritten digits (MNIST) or letters (Omniglot). For MNIST we used the fixed binarization of Salakhutdinov & Murray (2008) and the standard 50,000/10,000/10,000 split into training, validation, and test sets.
Table 1: Density estimation results. Negative log-likelihood (NLL) on MNIST and Omniglot for binary latent models trained with Concrete relaxations or VIMCO.

| binary model | m | MNIST Test NLL (Concrete / VIMCO) | MNIST Train NLL (Concrete / VIMCO) | Omniglot Test NLL (Concrete / VIMCO) | Omniglot Train NLL (Concrete / VIMCO) |
|---|---|---|---|---|---|
| (200H − 784V) | 1 | 107.3 / 104.4 | 107.5 / 104.2 | 118.7 / 115.7 | 117.0 / 112.2 |
| | 5 | 104.9 / 101.9 | 104.9 / 101.5 | 118.0 / 113.5 | 115.8 / 110.8 |
| | 50 | 104.3 / 98.8 | 104.2 / 98.3 | 118.9 / 113.0 | 115.8 / 110.0 |
| (200H − 200H − 784V) | 1 | 102.1 / 92.9 | 102.3 / 91.7 | 116.3 / 109.2 | 114.4 / 104.8 |
| | 5 | 99.9 / 91.7 | 100.0 / 90.8 | 116.0 / 107.5 | 113.5 / 103.6 |
| | 50 | 99.5 / 90.7 | 99.4 / 89.7 | 117.0 / 108.1 | 113.9 / 103.6 |
| (200H ∼ 784V) | 1 | 92.1 / 93.8 | 91.2 / 91.5 | 108.4 / 116.4 | 103.6 / 110.3 |
| | 5 | 89.5 / 91.4 | 88.1 / 88.6 | 107.5 / 118.2 | 101.4 / 102.3 |
| | 50 | 88.5 / 89.3 | 86.4 / 86.5 | 108.1 / 116.0 | 100.5 / 100.8 |
| (200H ∼ 200H ∼ 784V) | | | | | |
All of our models were neural networks with layers of n-ary discrete stochastic nodes with values on the corners of the hypercube {−1, 1}^{log2(n)}. The distributions were parameterized by n real values log α_k, corresponding to a discrete random variable D ∼ Discrete(α) with n states. Model descriptions are of the form "(200V−200H−784V)", read from left to right. This describes the order of conditional sampling, again from left to right, with each integer representing the number of stochastic units in a layer. The letters V and H represent observed and latent variables, respectively. If the leftmost layer is H, then it was sampled unconditionally from some parameters. Conditioning functions are described by "−" and "∼", where "−" means a linear function of the previous layer and "∼" means a non-linear function. A "layer" of these units is simply the concatenation of some number of independent nodes whose parameters are determined as a function of the previous layer.
For example, a 240 binary layer is a factored distribution over the {−1, 1}^{240} hypercube, whereas a 240 8-ary layer can be seen as a distribution over the same hypercube where each of the 80 triples of units is sampled independently from an 8-way discrete distribution over {−1, 1}^3. All models were initialized with the heuristic of Glorot & Bengio (2010) and optimized using Adam (Kingma & Ba, 2014). All temperatures were fixed throughout training. See Appendix D for hyperparameter details.
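A minimal sketch of such an n-ary layer, assuming each node's state is embedded on a hypercube corner via its binary digits (one natural choice, used here only for illustration), could look like this in NumPy:

```python
import numpy as np

def sample_nary_layer(log_alpha, rng):
    """Sample a layer of independent n-ary discrete nodes and place each state on a
    corner of {-1, 1}^(log2 n) via its binary digits (an assumed, illustrative mapping)."""
    nodes, n = log_alpha.shape
    bits = int(np.log2(n))
    g = rng.gumbel(size=log_alpha.shape)
    states = np.argmax(log_alpha + g, axis=1)              # Gumbel-max draw per node
    digits = (states[:, None] >> np.arange(bits)) & 1      # binary digits of each state
    return (2 * digits - 1).reshape(nodes * bits)          # concatenated hypercube corners

rng = np.random.default_rng(0)
log_alpha = np.zeros((80, 8))                              # 80 independent 8-ary nodes
print(sample_nary_layer(log_alpha, rng).shape)             # -> (240,), entries in {-1, 1}
```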
5.2 DENSITY ESTIMATION
Density estimation, or generative modelling, is the problem of fitting the distribution of data. We took the latent variable approach described in Section 2.4 and trained the models by optimizing the variational objective L_m(θ, φ) given by Eq. 8, averaged uniformly over minibatches of data points x. Both our generative models p_θ(z, x) and variational distributions q_φ(z | x) were parameterized with neural networks as described above. We trained models with m ∈ {1, 5, 50} and approximated the NLL with L_50,000(θ, φ) averaged uniformly over the whole dataset.
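For concreteness, here is a toy NumPy/SciPy sketch of the m-sample bound (Eq. 8) with a single binary latent and made-up probabilities; the model and recognition distributions below are placeholders, not the networks used in the experiments.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Toy model with one binary latent z: p(z=1) = 0.5, p(x=1|z=1) = 0.9, p(x=1|z=0) = 0.2,
# and a recognition distribution q(z=1|x=1) = 0.7.  All numbers are illustrative only.
def log_joint(x, z):
    px1 = 0.9 if z == 1 else 0.2
    return np.log(0.5) + np.log(px1 if x == 1 else 1.0 - px1)

def log_q(z, x):
    return np.log(0.7 if z == 1 else 0.3)

def L_m(x, m):
    zs = (rng.random(m) < 0.7).astype(int)                           # z_i ~ q(z|x)
    log_w = np.array([log_joint(x, z) - log_q(z, x) for z in zs])    # log importance weights
    return logsumexp(log_w) - np.log(m)                              # log (1/m) sum_i w_i

print(np.mean([L_m(x=1, m=5) for _ in range(2000)]))   # approaches log p(x=1) ~ -0.598 as m grows
```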
The results are shown in Table 1. In general, VIMCO outperformed Concrete relaxations for linear models and Concrete relaxations outperformed VIMCO for non-linear models. We also tested the effectiveness of Concrete relaxations on generative models with n-ary layers on the L_5(θ, φ) objective. The best 4-ary model achieved a test/train NLL of 86.7/83.3 and the best 8-ary achieved 87.4/84.6 with Concrete relaxations; more complete results are in Appendix E. The relatively poor performance of the 8-ary model may be because moving from 4 to 8 results in a more difficult objective without much added capacity. As a control we trained n-ary models using logistic normals as relaxations of discrete distributions (with retuned temperature hyperparameters). Because the discrete zero-temperature limit of logistic normals is a multinomial probit whose mass function is not known, we evaluated the discrete model by sampling from the discrete distribution parameterized by the logits learned during training. The best 4-ary model achieved a test/train NLL of 88.7/85.0, and the best 8-ary model achieved 89.1/85.1.
| binary model | m | Test NLL (Concrete / VIMCO) | Train NLL (Concrete / VIMCO) |
|---|---|---|---|
| (392V−240H−240H−392V) | 1 | 58.5 / 61.4 | 54.2 / 59.3 |
| | 5 | 54.3 / 54.5 | 49.2 / 52.7 |
| | 50 | 53.4 / 51.8 | 48.2 / 49.6 |
| (392V−240H−240H−240H−392V) | 1 | 56.3 / 59.7 | 51.6 / 58.4 |
| | 5 | 52.7 / 53.5 | 46.9 / 51.6 |
| | 50 | 52.0 / 50.2 | 45.9 / 47.9 |

Figure 4: Results for structured prediction on MNIST comparing Concrete relaxations to VIMCO. When m = 1, VIMCO stands for NVIL. The plot on the right (omitted here) shows the objective (lower is better) for the continuous and discrete graph trained at temperatures λ. In the shaded region, units prefer to communicate real values in the interior of (−1, 1).
5.3 STRUCTURED OUTPUT PREDICTION
Structured output prediction is concerned with modelling the high-dimensional distribution of the observation given a context and can be seen as conditional density estimation. We considered the task of predicting the bottom half x1 of an image of an MNIST digit given its top half x2, as introduced by Raiko et al. (2014). We followed Raiko et al. (2014) in using a model with layers of discrete stochastic units between the context and the observation. Conditioned on the top half x2, the network samples from a distribution p_φ(z | x2) over layers of stochastic units z, then predicts x1 by sampling from a distribution p_θ(x1 | z). We trained the models by optimizing

L^SP_m(θ, φ) = E[ log( (1/m) Σ_{i=1}^m p_θ(x1 | Z_i) ) ],   Z_i ∼ p_φ(z | x2) i.i.d.

This objective is a special case of L_m(θ, φ) (Eq. 8), where we use the prior p_φ(z | x2) as the variational distribution; thus it is a lower bound on log p_{θ,φ}(x1 | x2).
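A toy sketch of this objective (NumPy/SciPy; `sample_prior` and `log_lik` are hypothetical stand-ins for the model networks, with made-up probabilities):

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the networks: a single binary latent whose prior depends
# on the context x2, and a Bernoulli likelihood over the target x1.
def sample_prior(x2):                       # z ~ p_phi(z | x2)
    return int(rng.random() < 0.6)

def log_lik(x1, z):                         # log p_theta(x1 | z)
    p = 0.8 if z == 1 else 0.3
    return np.log(p if x1 == 1 else 1.0 - p)

def L_sp(x1, x2, m):
    log_p = np.array([log_lik(x1, sample_prior(x2)) for _ in range(m)])
    return logsumexp(log_p) - np.log(m)     # log of the average likelihood over m prior samples

print(L_sp(x1=1, x2=None, m=5))             # a single Monte Carlo estimate of the bound
```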
We trained the models by optimizing L^SP_m(θ, φ) for m ∈ {1, 5, 50}, averaged uniformly over minibatches, and evaluated them by computing L^SP_100(θ, φ) averaged uniformly over the entire dataset. The results are shown in Figure 4. Concrete relaxations more uniformly outperformed VIMCO in this instance. We also trained n-ary (392V−240H−240H−240H−392V) models on the L^SP(θ, φ) objective using the best temperature hyperparameters from density estimation. 4-ary achieved a test/train NLL of 55.4/46.0 and 8-ary achieved 54.7/44.8. As opposed to density estimation, increasing arity uniformly improved the models. We also investigated the hypothesis that for higher temperatures Concrete relaxations might prefer the interior of the interval (−1, 1) to the boundary points {−1, 1}. Figure 4 was generated with a binary (392V−240H−240H−240H−392V) model trained on L^SP(θ, φ).
# 6 CONCLUSION
We introduced the Concrete distribution, a continuous relaxation of discrete random variables. The Concrete distribution is a new distribution on the simplex with a closed form density parameterized by a vector of positive location parameters and a positive temperature. Crucially, the zero-temperature limit of every Concrete distribution corresponds to a discrete distribution, and any discrete distribution can be seen as the discretization of a Concrete one. The application we considered was training stochastic computation graphs with discrete stochastic nodes. The gradients of Concrete relaxations are biased with respect to the original discrete objective, but they are low variance unbiased estimators of a continuous surrogate objective. We showed in a series of experiments that stochastic nodes with Concrete distributions can be used effectively to optimize the parameters of a stochastic computation graph with discrete stochastic nodes. We did not find that annealing or automatically tuning the temperature was important for these experiments, but it remains interesting and possibly valuable future work.
ACKNOWLEDGMENTS
We thank Jimmy Ba for the excitement and ideas in the early days, Stefano Favaro for some analysis of the distribution. We also thank Gabriel Barth-Maron and Roger Grosse.

REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

J Aitchison. A general class of distributions on the simplex. Journal of the Royal Statistical Society. Series B (Methodological), pp. 136–146, 1985.
J Atchison and Sheng M Shen. Logistic-normal distributions: Some properties and uses. Biometrika, 67(2):261–272, 1980.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

David Blei and John Lafferty. Correlated topic models. 2006.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2016.

Robert J Connor and James E Mosimann. Concepts of independence for proportions with a generalization of the dirichlet distribution. Journal of the American Statistical Association, 64(325):194–206, 1969.

Stefano Favaro, Georgia Hadjicharalambous, and Igor Prünster. On a class of distributions on the simplex. Journal of Statistical Planning and Inference, 141(9):2987–3004, 2011.
Brendan Frey. Continuous sigmoidal belief networks trained using slice sampling. In NIPS, 1997.

Michael C Fu. Gradient estimation. Handbooks in operations research and management science, 13:575–616, 2006.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249–256, 2010.

Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.

Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. JMLR, 5, 2004.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828–1836, 2015.
Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. ICLR, 2016.

Emil Julius Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Office, 1954.

Tamir Hazan and Tommi Jaakkola. On the partition function and random maximum a-posteriori perturbations. In ICML, 2012.

Tamir Hazan, George Papandreou, and Daniel Tarlow. Perturbation, Optimization, and Statistics. MIT Press, 2016.
Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. JMLR, 14(1):1303–1347, 2013.

E. Jang, S. Gu, and B. Poole. Categorical Reparameterization with Gumbel-Softmax. ArXiv e-prints, November 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. ICLR, 2014.

Tomáš Kočiský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. In EMNLP, 2016.
R. Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. New York: Wiley, 1959.

Chris J Maddison. A Poisson process model for Monte Carlo. In Tamir Hazan, George Papandreou, and Daniel Tarlow (eds.), Perturbation, Optimization, and Statistics, chapter 7. MIT Press, 2016.

Chris J Maddison, Daniel Tarlow, and Tom Minka. A* Sampling. In NIPS, 2014.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.

Andriy Mnih and Danilo Jimenez Rezende. Variational inference for monte carlo objectives. In ICML, 2016.

Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent Models of Visual Attention. In NIPS, 2014.

Christian A Naesseth, Francisco JR Ruiz, Scott W Linderman, and David M Blei. Rejection sampling variational inference. arXiv preprint arXiv:1610.05683, 2016.

John William Paisley, David M. Blei, and Michael I. Jordan. Variational bayesian inference with stochastic search. In ICML, 2012.
George Papandreou and Alan L Yuille. Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In ICCV, 2011.

Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.

Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference. In AISTATS, 2014.

William S Rayens and Cidambi Srinivasan. Dependence properties of generalized liouville distributions on the simplex. Journal of the American Statistical Association, 89(428):1465–1470, 1994.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

Francisco JR Ruiz, Michalis K Titsias, and David M Blei. The generalized reparameterization gradient. arXiv preprint arXiv:1610.02287, 2016.
Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.

Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In ICML, 2008.

John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational bayes for non-conjugate inference. In Tony Jebara and Eric P. Xing (eds.), ICML, 2014.

Michalis Titsias and Miguel Lázaro-Gredilla. Local expectation gradients for black box variational inference. In NIPS, 2015.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
John I Yellott. The relationship between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15(2):109–144, 1977.
# A PROOF OF PROPOSITION 1
Let $X \sim \mathrm{Concrete}(\alpha, \lambda)$ with location parameters $\alpha \in (0, \infty)^n$ and temperature $\lambda \in (0, \infty)$.

1. Let $G_k \sim \mathrm{Gumbel}$ i.i.d., and consider

$$Y_k = \frac{\exp((\log \alpha_k + G_k)/\lambda)}{\sum_{i=1}^{n} \exp((\log \alpha_i + G_i)/\lambda)}$$
Let $Z_k = \log \alpha_k + G_k$, which has density

$$\alpha_k \exp(-z_k) \exp(-\alpha_k \exp(-z_k)).$$

We will consider the invertible transformation
$$F(z_1, \ldots, z_n) = (y_1, \ldots, y_{n-1}, c)$$

where

$$y_k = \frac{\exp(z_k/\lambda)}{c}, \qquad c = \sum_{i=1}^{n} \exp(z_i/\lambda)$$

then $F^{-1}(y_1, \ldots, y_{n-1}, c) = (\lambda(\log y_1 + \log c), \ldots, \lambda(\log y_{n-1} + \log c), \lambda(\log y_n + \log c))$, where $y_n = 1 - \sum_{i=1}^{n-1} y_i$. This has Jacobian

$$\begin{pmatrix}
\lambda y_1^{-1} & 0 & \cdots & 0 & \lambda c^{-1} \\
0 & \lambda y_2^{-1} & \cdots & 0 & \lambda c^{-1} \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & \lambda y_{n-1}^{-1} & \lambda c^{-1} \\
-\lambda y_n^{-1} & -\lambda y_n^{-1} & \cdots & -\lambda y_n^{-1} & \lambda c^{-1}
\end{pmatrix}$$

By adding $y_i/y_n$ times each of the top $n-1$ rows to the bottom row we see that this Jacobian has the same determinant as
$$\begin{pmatrix}
\lambda y_1^{-1} & 0 & \cdots & 0 & \lambda c^{-1} \\
0 & \lambda y_2^{-1} & \cdots & 0 & \lambda c^{-1} \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & \lambda y_{n-1}^{-1} & \lambda c^{-1} \\
0 & 0 & \cdots & 0 & \lambda (c y_n)^{-1}
\end{pmatrix}$$

and thus the determinant is equal to

$$\frac{\lambda^n}{c \prod_{i=1}^{n} y_i}.$$

All together we have the density

$$\frac{\lambda^n}{c \prod_{i=1}^{n} y_i} \prod_{k=1}^{n} \alpha_k \exp(-\lambda \log y_k - \lambda \log c) \exp\big(-\alpha_k \exp(-\lambda \log y_k - \lambda \log c)\big).$$

With the change of variables $r = \log c$ we have density

$$\lambda^n \prod_{k=1}^{n} \alpha_k y_k^{-\lambda-1} \exp(-n\lambda r) \exp\Big(-\exp(-\lambda r) \sum_{i=1}^{n} \alpha_i y_i^{-\lambda}\Big).$$

Letting $\gamma = \log\big(\sum_{k=1}^{n} \alpha_k y_k^{-\lambda}\big)$, this is
$$\lambda^n \prod_{k=1}^{n} \big(\alpha_k y_k^{-\lambda-1}\big) \exp(-n\gamma)\, \exp\big(-n(\lambda r - \gamma)\big) \exp\big(-\exp(-(\lambda r - \gamma))\big).$$

Integrating out $r$ gives

$$\lambda^n \prod_{k=1}^{n} \big(\alpha_k y_k^{-\lambda-1}\big) \exp(-n\gamma)\, \frac{\Gamma(n)}{\lambda} = (n-1)!\, \lambda^{n-1} \frac{\prod_{k=1}^{n} \alpha_k y_k^{-\lambda-1}}{\big(\sum_{k=1}^{n} \alpha_k y_k^{-\lambda}\big)^{n}}.$$

Thus $Y \overset{d}{=} X$.

2. Follows directly from (a) and the Gumbel-Max trick (Maddison, 2016).

3. Follows directly from (a) and the Gumbel-Max trick (Maddison, 2016).

4. Let $\lambda \le (n-1)^{-1}$. The density of $X$ can be rewritten as

$$p_{\alpha,\lambda}(x) \propto \frac{\prod_{k=1}^{n} \alpha_k x_k^{\lambda(n-1)-1}}{\big(\sum_{k=1}^{n} \alpha_k \prod_{j \ne k} x_j^{\lambda}\big)^{n}}.$$

Thus, the log density is, up to an additive constant $C$,

$$\log p_{\alpha,\lambda}(x) = \sum_{k=1}^{n} (\lambda(n-1) - 1) \log x_k - n \log\Big(\sum_{k=1}^{n} \alpha_k \prod_{j \ne k} x_j^{\lambda}\Big) + C.$$
If $\lambda \le (n-1)^{-1}$, then $\lambda(n-1) - 1 \le 0$ and the first $n$ terms are convex, because $-\log$ is convex. For the last term, each $\prod_{j \ne k} x_j^{\lambda}$ is concave when $\lambda(n-1) \le 1$, so $\sum_{k=1}^{n} \alpha_k \prod_{j \ne k} x_j^{\lambda}$ is concave, and $-n\log(\cdot)$ is convex and non-increasing. Thus, their composition is convex. The sum of convex terms is convex, finishing the proof.
# B THE BINARY SPECIAL CASE
Bernoulli random variables are an important special case of discrete distributions, taking states in $\{0, 1\}$. Here we consider the binary special case of the Gumbel-Max trick from Figure 1a along with the corresponding Binary Concrete relaxation.
Let $D \sim \mathrm{Discrete}(\alpha)$ for $\alpha \in (0, \infty)^2$ be a two state discrete random variable on $\{0, 1\}^2$ such that $D_1 + D_2 = 1$, parameterized as in Figure 1a by $\alpha_1, \alpha_2 > 0$:

$$P(D_1 = 1) = \frac{\alpha_1}{\alpha_1 + \alpha_2} \qquad (14)$$
The distribution is degenerate, because $D_1 = 1 - D_2$. Therefore we consider just $D_1$. Under the Gumbel-Max reparameterization, the event that $D_1 = 1$ is the event $\{G_1 + \log \alpha_1 > G_2 + \log \alpha_2\}$ where $G_k \sim \mathrm{Gumbel}$ i.i.d. The difference of two Gumbels is a Logistic distribution, $G_1 - G_2 \overset{d}{=} \log U - \log(1 - U)$ where $U \sim \mathrm{Uniform}(0, 1)$. So, if $\alpha = \alpha_1 / \alpha_2$, then we have

$$P(D_1 = 1) = P(G_1 + \log \alpha_1 > G_2 + \log \alpha_2) = P(\log U - \log(1 - U) + \log \alpha > 0) \qquad (15)$$

Thus, $D_1 \overset{d}{=} H(\log \alpha + \log U - \log(1 - U))$, where $H$ is the unit step function.

Correspondingly, we can consider the Binary Concrete relaxation that results from this process. As in the n-ary case, we consider the sampling routine for a Binary Concrete random variable $X \in (0, 1)$. We sample $L \sim \mathrm{Logistic}$ and set
$$X = \frac{1}{1 + \exp(-(\log \alpha + L)/\lambda)} \qquad (16)$$

We define the Binary Concrete random variable $X$ by its density on the unit interval.

Definition 2 (Binary Concrete Random Variables). Let $\alpha \in (0, \infty)$ and $\lambda \in (0, \infty)$. $X \in (0, 1)$ has a Binary Concrete distribution $X \sim \mathrm{BinConcrete}(\alpha, \lambda)$ with location $\alpha$ and temperature $\lambda$, if its density is:

$$p_{\alpha,\lambda}(x) = \frac{\lambda \alpha x^{-\lambda-1} (1-x)^{-\lambda-1}}{(\alpha x^{-\lambda} + (1-x)^{-\lambda})^2}. \qquad (17)$$
We state without proof the special case of Proposition 1 for Binary Concrete distributions.

Proposition 2 (Some Properties of Binary Concrete Random Variables). Let $X \sim \mathrm{BinConcrete}(\alpha, \lambda)$ with location parameter $\alpha \in (0, \infty)$ and temperature $\lambda \in (0, \infty)$, then

(a) (Reparameterization) If $L \sim \mathrm{Logistic}$, then $X \overset{d}{=} \frac{1}{1 + \exp(-(\log \alpha + L)/\lambda)}$,

(b) (Rounding) $P(X > 0.5) = \alpha/(1 + \alpha)$,

(c) (Zero temperature) $P(\lim_{\lambda \to 0} X = 1) = \alpha/(1 + \alpha)$,

(d) (Convex eventually) If $\lambda \le 1$, then $p_{\alpha,\lambda}(x)$ is log-convex in $x$.
We can generalize the binary circuit beyond Logistic random variables. Consider an arbitrary random variable $X$ with infinite support on $\mathbb{R}$. If $\Phi : \mathbb{R} \to [0, 1]$ is the CDF of $X$, then

$$P(H(X) = 1) = 1 - \Phi(0)$$
If we want this to have a Bernoulli distribution with probability $\alpha/(1 + \alpha)$, then we should solve the equation

$$1 - \Phi(0) = \frac{\alpha}{1 + \alpha}.$$

This gives $\Phi(0) = 1/(1 + \alpha)$, which can be accomplished by relocating the random variable $Y$ with CDF $\Phi$ to be $X = Y - \Phi^{-1}(1/(1 + \alpha))$.
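As an illustration of this relocation with standard Gaussian noise (an assumption made for this sketch, not a choice made in the paper), one can use `scipy.stats.norm` for $\Phi$ and $\Phi^{-1}$:

```python
import numpy as np
from scipy.stats import norm

def gaussian_binary_gate(alpha, size, rng):
    # X = Y - Phi^{-1}(1 / (1 + alpha)) with Y ~ Normal(0, 1),
    # so P(H(X) = 1) = 1 - Phi(Phi^{-1}(1 / (1 + alpha))) = alpha / (1 + alpha).
    y = rng.standard_normal(size)
    x = y - norm.ppf(1.0 / (1.0 + alpha))
    return (x > 0).astype(float)

rng = np.random.default_rng(0)
print(gaussian_binary_gate(alpha=3.0, size=100_000, rng=rng).mean())  # roughly 0.75
```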
# C USING CONCRETE RELAXATIONS
In this section we include some tips for implementing and using the Concrete distribution as a relaxation. We use the following notation
$$\sigma(x) = \frac{1}{1 + \exp(-x)}, \qquad \mathrm{L\Sigma E}_{k=1}^{n}\{x_k\} = \log \sum_{k=1}^{n} \exp(x_k)$$
Both sigmoid and log-sum-exp are common operations in libraries like TensorFlow or theano.
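For reference, here is a numerically stable NumPy version of both operations (a sketch; most frameworks ship equivalents):

```python
import numpy as np

def sigmoid(x):
    # sigma(x) = 1 / (1 + exp(-x)), computed as exp(-log(1 + exp(-x))) for stability.
    return np.exp(-np.logaddexp(0.0, -np.asarray(x, dtype=float)))

def log_sum_exp(x, axis=-1):
    # LSE_k{x_k} = log sum_k exp(x_k), shifted by the max for stability.
    x = np.asarray(x, dtype=float)
    m = np.max(x, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(x - m), axis=axis))
```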
# C.1 THE BASIC PROBLEM
For the sake of exposition, we consider a simple variational autoencoder with a single discrete random variable and objective L1(θ, a, α) given by Eq. 8 for a single data point x. This scenario will allow us to discuss all of the decisions one might make when using Concrete relaxations.
In particular, let $P_a(d)$ be the mass function of an n-ary one-hot discrete random variable $D \sim \mathrm{Discrete}(a)$ with $a \in (0, \infty)^n$, let $p_\theta(x \mid d)$ be a likelihood over the data point $x$ (possibly a neural network), which is a continuous function of $d$ and parameters $\theta$, and let $Q_\alpha(d \mid x)$ be the mass function of an n-ary one-hot discrete random variable in $\{0, 1\}^n$ whose unnormalized probabilities $\alpha(x) \in (0, \infty)^n$ are a function (possibly a neural net with its own parameters) of $x$; this is the variational posterior over $D$. Then, we care about optimizing
$$\mathcal{L}_1(\theta, a, \alpha) = \mathbb{E}_{D \sim Q_\alpha(d \mid x)}\left[\log \frac{p_\theta(x \mid D)\, P_a(D)}{Q_\alpha(D \mid x)}\right] \qquad (18)$$
with respect to $\theta$, $a$, and any parameters in $\alpha$ from samples of the SCG required to simulate an estimator of $\mathcal{L}_1(\theta, a, \alpha)$.
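For intuition, a single-sample Monte Carlo estimate of Eq. 18 might look as follows in NumPy; `log_p_x_given_d` stands in for the decoder log-likelihood $\log p_\theta(x \mid d)$ and, like the other names here, is an assumption of this sketch rather than part of the paper.

```python
import numpy as np

def discrete_elbo_estimate(x, log_alpha_posterior, log_alpha_prior, log_p_x_given_d, rng):
    # Normalize the unnormalized log-probabilities of Q_alpha(d | x) and P_a(d).
    q = np.exp(log_alpha_posterior - log_alpha_posterior.max()); q /= q.sum()
    p = np.exp(log_alpha_prior - log_alpha_prior.max()); p /= p.sum()
    k = rng.choice(len(q), p=q)          # sample D as the one-hot vector for state k
    d = np.eye(len(q))[k]
    return log_p_x_given_d(x, d) + np.log(p[k]) - np.log(q[k])
```

This estimator is unbiased but not differentiable in the sampling step, which is exactly the problem the Concrete relaxation addresses below.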
# C.2 WHAT YOU MIGHT RELAX AND WHY
The first consideration when relaxing an estimator of Eq. 18 is how to relax the stochastic computation. The only sampling required to simulate $\mathcal{L}_1(\theta, a, \alpha)$ is $D \sim \mathrm{Discrete}(\alpha(x))$. The corresponding Concrete relaxation is to sample $Z \sim \mathrm{Concrete}(\alpha(x), \lambda_1)$ with temperature $\lambda_1$ and location parameters the unnormalized probabilities $\alpha(x)$ of $D$. Let $q_{\alpha,\lambda_1}(z \mid x)$ be the density of $Z$. We get a relaxed objective of the form:
$$\mathbb{E}_{D \sim Q_\alpha(d \mid x)}[\,\cdot\,] \;\rightarrow\; \mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z \mid x)}[\,\cdot\,] \qquad (19)$$
This choice allows us to take derivatives through the stochastic computations of the graph.
The second consideration is which objective to put in place of $[\,\cdot\,]$ in Eq. 19. We will consider the ideal scenario irrespective of numerical issues. In Subsection C.3 we address those numerical issues. The central question is how to treat the expectation of the ratio $P_a(D)/Q_\alpha(D \mid x)$ (which is the KL component of the loss) when $Z$ replaces $D$.
There are at least three options for how to modify the objective. They are: (20) replace the discrete mass with Concrete densities, (21) relax the computation of the discrete log mass, or (22) replace it with the analytic discrete KL.
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z \mid x)}\left[\log p_\theta(x \mid Z) + \log \frac{p_{a,\lambda_2}(Z)}{q_{\alpha,\lambda_1}(Z \mid x)}\right] \qquad (20)$$

$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z \mid x)}\left[\log p_\theta(x \mid Z) + \sum_{i=1}^{n} Z_i \log \frac{P_a(d^{(i)})}{Q_\alpha(d^{(i)} \mid x)}\right] \qquad (21)$$

$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z \mid x)}\big[\log p_\theta(x \mid Z)\big] + \sum_{i=1}^{n} Q_\alpha(d^{(i)} \mid x) \log \frac{P_a(d^{(i)})}{Q_\alpha(d^{(i)} \mid x)} \qquad (22)$$
where $d^{(i)}$ is a one-hot binary vector with $d^{(i)}_i = 1$ and $p_{a,\lambda_2}(z)$ is the density of some Concrete random variable with temperature $\lambda_2$ and location parameters $a$. Although (22) or (21) is tempting, we emphasize that these are NOT necessarily lower bounds on $\log p(x)$ in the relaxed model. (20) is the only objective guaranteed to be a lower bound:
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z \mid x)}\left[\log p_\theta(x \mid Z) + \log \frac{p_{a,\lambda_2}(Z)}{q_{\alpha,\lambda_1}(Z \mid x)}\right] \le \log \int p_\theta(x \mid z)\, p_{a,\lambda_2}(z)\, \mathrm{d}z. \qquad (23)$$
For this reason we consider objectives of the form (20). Choosing (22) or (21) is possible, but the value of these objectives is not interpretable and one should early stop, otherwise it will overfit to the spurious "KL" component of the loss. We now consider practical issues with (20) and how to address them. All together we can interpret $q_{\alpha,\lambda_1}(z \mid x)$ as the Concrete relaxation of the variational posterior and $p_{a,\lambda_2}(z)$ as the relaxation of the prior.
# C.3 WHICH RANDOM VARIABLE TO TREAT AS THE STOCHASTIC NODE
When implementing a SCG like the variational autoencoder example, we need to compute log-probabilities of Concrete random variables. This computation can suffer from underflow, so where possible it's better to take a different node on the relaxed graph as the stochastic node on which log-likelihood terms are computed. For example, it's tempting in the case of Concrete random variables to treat the Gumbels as the stochastic node on which the log-likelihood terms are evaluated and the softmax as downstream computation. This will be a looser bound in the context of variational inference than the corresponding bound when treating the Concrete relaxed states as the node.
The solution we found to work well was to work with Concrete random variables in log-space. Consider the following vector in $\mathbb{R}^n$ for location parameters $\alpha \in (0, \infty)^n$, temperature $\lambda \in (0, \infty)$, and $G_k \sim \mathrm{Gumbel}$ i.i.d.:
$$Y_k = \frac{\log \alpha_k + G_k}{\lambda} - \mathrm{L\Sigma E}_{i=1}^{n}\left\{\frac{\log \alpha_i + G_i}{\lambda}\right\}$$
therefore we call $Y$ an ExpConcrete random variable, $Y \sim \mathrm{ExpConcrete}(\alpha, \lambda)$. The advantage of this reparameterization is that the KL terms of a variational loss are invariant under invertible transformation. $\exp$ is invertible, so the KL between two ExpConcrete random variables is the same as the KL between two Concrete random variables. The log-density $\log \kappa_{\alpha,\lambda}(y)$ of an $\mathrm{ExpConcrete}(\alpha, \lambda)$ is also simple to compute:
$$\log \kappa_{\alpha,\lambda}(y) = \log((n-1)!) + (n-1)\log \lambda + \Big(\sum_{k=1}^{n} \log \alpha_k - \lambda y_k\Big) - n\, \mathrm{L\Sigma E}_{k=1}^{n}\{\log \alpha_k - \lambda y_k\}$$
for $y \in \mathbb{R}^n$ such that $\mathrm{L\Sigma E}_{k=1}^{n}\{y_k\} = 0$. Note that the sample space of the ExpConcrete distribution is still interpretable in the zero temperature limit. In the limit of $\lambda \to 0$ ExpConcrete random variables become discrete random variables over the one-hot vectors of $\{-\infty, 0\}^n$ where $\mathrm{L\Sigma E}_{k=1}^{n}\{d_k\} = 0$. $\exp(Y)$ in this case results in the one-hot vectors in $\{0, 1\}^n$.
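A NumPy sketch of both operations (sampling $Y$ and evaluating $\log \kappa_{\alpha,\lambda}$); this is our illustration rather than the paper's implementation, and it assumes `scipy.special.logsumexp` for LΣE:

```python
import numpy as np
from scipy.special import logsumexp

def sample_exp_concrete(log_alpha, lam, rng):
    # Y_k = (log alpha_k + G_k)/lam - LSE_i{(log alpha_i + G_i)/lam}, G_k ~ Gumbel i.i.d.
    gumbels = -np.log(-np.log(rng.uniform(size=log_alpha.shape)))
    scaled = (log_alpha + gumbels) / lam
    return scaled - logsumexp(scaled)        # exp of the result is Concrete(alpha, lam)

def exp_concrete_log_density(y, log_alpha, lam):
    # log kappa_{alpha,lam}(y) for a single y in R^n with LSE_k{y_k} = 0.
    n = y.shape[-1]
    t = log_alpha - lam * y
    log_factorial = np.sum(np.log(np.arange(1, n)))   # log((n-1)!)
    return log_factorial + (n - 1) * np.log(lam) + np.sum(t) - n * logsumexp(t)
```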
# C.3.1 n-ARY CONCRETE
Returning to our initial task of relaxing $\mathcal{L}_1(\theta, a, \alpha)$, let $Y \sim \mathrm{ExpConcrete}(\alpha(x), \lambda_1)$ with density $\kappa_{\alpha,\lambda_1}(y \mid x)$ be the ExpConcrete latent variable corresponding to the Concrete relaxation $q_{\alpha,\lambda_1}(z \mid x)$ of the variational posterior $Q_\alpha(d \mid x)$. Let $\rho_{a,\lambda_2}(y)$ be the density of an ExpConcrete random variable corresponding to the Concrete relaxation $p_{a,\lambda_2}(z)$ of $P_a(d)$. All together we can see that
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z \mid x)}\left[\log p_\theta(x \mid Z) + \log \frac{p_{a,\lambda_2}(Z)}{q_{\alpha,\lambda_1}(Z \mid x)}\right] = \mathbb{E}_{Y \sim \kappa_{\alpha,\lambda_1}(y \mid x)}\left[\log p_\theta(x \mid \exp(Y)) + \log \frac{\rho_{a,\lambda_2}(Y)}{\kappa_{\alpha,\lambda_1}(Y \mid x)}\right] \qquad (24)$$
Therefore, we used ExpConcrete random variables as the stochastic nodes and treated exp as a downstream computation. The relaxation is then,
$$\mathcal{L}_1(\theta, a, \alpha) \overset{\text{relax}}{\approx} \mathbb{E}_{Y \sim \kappa_{\alpha,\lambda_1}(y \mid x)}\left[\log p_\theta(x \mid \exp(Y)) + \log \frac{\rho_{a,\lambda_2}(Y)}{\kappa_{\alpha,\lambda_1}(Y \mid x)}\right], \qquad (25)$$
and the objective on the RHS is fully reparameterizable and what we chose to optimize.
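A single-sample NumPy sketch of the estimator on the RHS of Eq. 25 (and of Eq. 28, since the posterior and prior temperatures may differ) follows; `log_p_x_given_d` again stands in for the decoder log-likelihood, and none of these names come from the paper:

```python
import numpy as np
from scipy.special import logsumexp

def exp_concrete_log_density(y, log_alpha, lam):
    # log-density of ExpConcrete(alpha, lam) at a single y with LSE_k{y_k} = 0.
    n = y.shape[-1]
    t = log_alpha - lam * y
    return np.sum(np.log(np.arange(1, n))) + (n - 1) * np.log(lam) + np.sum(t) - n * logsumexp(t)

def relaxed_elbo_estimate(x, log_alpha_post, log_alpha_prior, lam_post, lam_prior,
                          log_p_x_given_d, rng):
    gumbels = -np.log(-np.log(rng.uniform(size=log_alpha_post.shape)))
    y = (log_alpha_post + gumbels) / lam_post
    y = y - logsumexp(y)                                   # Y ~ ExpConcrete(alpha(x), lam1)
    log_q = exp_concrete_log_density(y, log_alpha_post, lam_post)     # log kappa
    log_p = exp_concrete_log_density(y, log_alpha_prior, lam_prior)   # log rho
    return log_p_x_given_d(x, np.exp(y)) + log_p - log_q
```

Because the sample `y` is a differentiable function of `log_alpha_post` and fixed noise, this estimate can be backpropagated through in an automatic-differentiation framework.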
# C.3.2 BINARY CONCRETE
In the binary case, the logistic function is invertible, so it makes most sense to treat the logit plus noise as the stochastic node. In particular, the binary random node was sampled from:
$$Y = \frac{\log \alpha + \log U - \log(1 - U)}{\lambda} \qquad (26)$$
where $U \sim \mathrm{Uniform}(0, 1)$, and always followed by $\sigma$ as downstream computation. $\log U - \log(1 - U)$ is a Logistic random variable, details in the cheat sheet, and so the log-density $\log g_{\alpha,\lambda}(y)$ of this node (before applying $\sigma$) is
$$\log g_{\alpha,\lambda}(y) = \log \lambda - \lambda y + \log \alpha - 2 \log(1 + \exp(-\lambda y + \log \alpha))$$
All together the relaxation in the binary special case would be
$$\mathcal{L}_1(\theta, a, \alpha) \overset{\text{relax}}{\approx} \mathbb{E}_{Y \sim g_{\alpha,\lambda_1}(y \mid x)}\left[\log p_\theta(x \mid \sigma(Y)) + \log \frac{f_{a,\lambda_2}(Y)}{g_{\alpha,\lambda_1}(Y \mid x)}\right], \qquad (27)$$
where $f_{a,\lambda_2}(y)$ is the density of a Logistic random variable sampled via Eq. 26 with location $a$ and temperature $\lambda_2$.
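In NumPy, the corresponding log-space sampler and log-density (Eq. 26 and the expression above) can be sketched as follows; again this is illustrative rather than the paper's code:

```python
import numpy as np

def sample_binary_logit(log_alpha, lam, rng):
    # Eq. 26: Y = (log alpha + log U - log(1 - U)) / lam; sigma(Y) is applied downstream.
    u = rng.uniform(size=np.shape(log_alpha))
    return (log_alpha + np.log(u) - np.log(1.0 - u)) / lam

def binary_logit_log_density(y, log_alpha, lam):
    # log g_{alpha,lam}(y) = log(lam) - lam*y + log(alpha) - 2*log(1 + exp(-lam*y + log(alpha)))
    return np.log(lam) - lam * y + log_alpha - 2.0 * np.logaddexp(0.0, -lam * y + log_alpha)
```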
1611.00712 | 67 | The success of Concrete relaxations will depend heavily on the choice of temperature during train- ing. It is important that the relaxed nodes are not able to represent a precise real valued mode in the interior of the simplex as in Figure For example, choosing additive Gaussian noise e ~ Normal(0, 1) with the logistic function o(x) to get relaxed Bernoullis of the form o(⬠+ 1) will result in a large mode in the centre of the interval. This is because the tails of the Gaussian distribution drop off much faster than the rate at which o squashes. Even including a temperature parameter does not completely solve this problem; the density of o((⬠+ 4)/A) at any temperature still goes to 0 as its approaches the boundaries 0 and 1 of the unit interval. Therefore |(D]of Proposi- tion|I]is a conservative guideline for generic n-ary Concrete relaxations; at temperatures lower than (n â1)~! we are guaranteed not to have any modes in the interior for any a ⬠(0, 00)â. In the case of the Binary Concrete distribution, the tails of the Logistic additive noise are balanced with the logistic squashing function and for temperatures | 1611.00712#67 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
1611.00712 | 68 | In the case of the Binary Concrete distribution, the tails of the Logistic additive noise are balanced with the logistic squashing function and for temperatures \ < 1 the density of the Binary Concrete distribu- tion is log-convex for all parameters a, see Figure[3b] Still, practice will often disagree with theory here. The peakiness of the Concrete distribution increases with n, so much higher temperatures are tolerated (usually necessary). | 1611.00712#68 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
For $n = 2$, temperatures $\lambda \le (n-1)^{-1} = 1$ are a good guideline. For $n > 2$, taking $\lambda \le (n-1)^{-1}$ is not necessarily a good guideline, although it will depend on $n$ and the specific application. As $n \to \infty$ the Concrete distribution becomes peakier, because the random normalizing constant $\sum_{k=1}^{n} \exp((\log \alpha_k + G_k)/\lambda)$ grows. This means that practically speaking the optimization can tolerate much higher temperatures than $(n-1)^{-1}$. We found in the case $n = 4$ that $\lambda = 1$ was the best temperature and in $n = 8$, $\lambda = 2/3$ was the best. Yet $\lambda = 2/3$ was the best single performing temperature across the $n \in \{2, 4, 8\}$ cases that we considered. We recommend starting in that ball-park and exploring for any specific application.
When the loss depends on a KL divergence between two Concrete nodes, it's possible to give the nodes distinct temperatures. We found this to improve results quite dramatically. In the context of our original problem and its relaxation:
$$\mathcal{L}_1(\theta, a, \alpha) \overset{\text{relax}}{\approx} \mathbb{E}_{Y \sim \kappa_{\alpha,\lambda_1}(y \mid x)}\left[\log p_\theta(x \mid \exp(Y)) + \log \frac{\rho_{a,\lambda_2}(Y)}{\kappa_{\alpha,\lambda_1}(Y \mid x)}\right] \qquad (28)$$
# D EXPERIMENTAL DETAILS
The basic model architectures we considered are exactly analogous to those in Burda et al. (2016) with Concrete/discrete random variables replacing Gaussians.
# D.1 â VS
â¼
The conditioning functions we used were either linear or non-linear. Non-linear consisted of two tanh layers of the same size as the preceding stochastic layer in the computation graph.
# D.2 n-ARY LAYERS
All our models are neural networks with layers of n-ary discrete stochastic nodes with log2(n)- log2(n). For a generic n-ary node dimensional states on the corners of the hypercube
1, 1 }
{â
17
Published as a conference paper at ICLR 2017 | 1611.00712#70 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
$D \sim \mathrm{Discrete}(\alpha)$, sampling proceeds as follows. Sample an n-ary discrete random variable $D \sim \mathrm{Discrete}(\alpha)$, take $C \in \{-1, 1\}^{\log_2(n) \times n}$ with the corners of the hypercube as columns, then we took $Y = CD$ as downstream computation on $D$. The corresponding Concrete relaxation is to take $X \sim \mathrm{Concrete}(\alpha, \lambda)$ and set $\tilde{Y} = CX$. For the binary case, this amounts to simply sampling $U \sim \mathrm{Uniform}(0, 1)$ and taking $Y = 2H(\log U - \log(1 - U) + \log \alpha) - 1$. The corresponding Binary Concrete relaxation is $\tilde{Y} = 2\sigma((\log U - \log(1 - U) + \log \alpha)/\lambda) - 1$.
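A sketch of such a layer in NumPy; the particular bit-to-corner assignment in `corner_matrix` is an illustrative choice of ours, since the ordering of the columns of $C$ is not pinned down above:

```python
import numpy as np

def corner_matrix(n):
    # C in {-1, 1}^{log2(n) x n}: column d is the hypercube corner assigned to state d.
    bits = int(np.log2(n))
    return np.array([[1.0 if (d >> b) & 1 else -1.0 for d in range(n)] for b in range(bits)])

def relaxed_nary_layer(log_alpha, lam, rng):
    # X ~ Concrete(alpha, lam) via a tempered softmax of Gumbel-perturbed logits, then Y~ = C X.
    gumbels = -np.log(-np.log(rng.uniform(size=log_alpha.shape)))
    logits = (log_alpha + gumbels) / lam
    x = np.exp(logits - logits.max())
    x = x / x.sum()
    return corner_matrix(log_alpha.shape[-1]) @ x
```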
# D.3 BIAS INITIALIZATION
All biases were initialized to 0 with the exception of the biases in the prior decoder distribution over the 784 or 392 observed units. These were initialized to the logit of the base rate averaged over the respective dataset (MNIST or Omniglot).
# D.4 CENTERING
We also found it beneficial to center the layers of the inference network during training. The activity in $(-1, 1)^d$ of each stochastic layer was centered during training by maintaining an exponentially decaying average with rate 0.9 over minibatches. This running average was subtracted from the activity of the layer before it was updated. Gradients did not flow through this computation, so it simply amounted to a dynamic offset. The averages were not updated during the evaluation.
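A minimal sketch of that centering scheme (our own illustration; the decay rate 0.9 is the one quoted above):

```python
import numpy as np

class LayerCenterer:
    def __init__(self, dim, rate=0.9):
        self.avg = np.zeros(dim)   # exponentially decaying average of layer activity
        self.rate = rate

    def __call__(self, activity, training=True):
        centered = activity - self.avg          # subtract before updating; treated as a constant offset
        if training:
            self.avg = self.rate * self.avg + (1.0 - self.rate) * activity.mean(axis=0)
        return centered
```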
# D.5 HYPERPARAMETER SELECTION
All models were initialized with the heuristic of Glorot & Bengio (2010) and optimized using Adam (Kingma & Ba, 2014) with parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$ for $10^7$ steps on minibatches of size 64. Hyperparameters were selected on the MNIST dataset by grid search, taking the values that performed best on the validation set. Learning rates were chosen from and weight decay from . Two sets of hyperparameters were selected, one for linear models and one for non-linear models. The linear models' hyperparameters were selected with the 200H−200H−784V density model on the $\mathcal{L}_5(\theta, \phi)$ objective. The non-linear models' hyperparameters were selected with the 200H∼784V density model on the $\mathcal{L}_5(\theta, \phi)$ objective. For density estimation, the Concrete relaxation hyperparameters were (weight decay = 0, learning rate = $3 \times 10^{-4}$) for linear and (weight decay = 0, learning rate = $10^{-4}$) for non-linear. For structured prediction, Concrete relaxations used (weight decay = $10^{-3}$, learning rate = $3 \times 10^{-4}$).
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 74 | In addition to tuning learning rate and weight decay, we tuned temperatures for the Concrete relax- ations on the density estimation task. We found it valuable to have different values for the prior and posterior distributions, see Eq. 28. In particular, for binary we found that (prior λ2 = 1/2, posterior λ1 = 2/3) was best, for 4-ary we found (prior λ2 = 2/3, posterior λ1 = 1) was best, and (prior λ2 = 2/5, posterior λ1 = 2/3) for 8-ary. No temperature annealing was used. For structured prediction we used just the corresponding posterior λ1 as the temperature for the whole graph, as there was no variational posterior.
We performed early stopping when training with the score function estimators (VIMCO/NVIL) as they were much more prone to overï¬tting.
18
Published as a conference paper at ICLR 2017
# E EXTRA RESULTS | 1611.00712#74 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 75 | binary (240H â¼784V) 4-ary (240H â¼784V) 8-ary (240H â¼784V) binary (240Hâ¼240H â¼784V) 4-ary (240Hâ¼240H â¼784V) 8-ary (240Hâ¼240H â¼784V) m Test 91.9 1 89.0 5 88.4 50 1 5 50 91.4 89.4 89.7 1 5 50 92.5 90.5 90.5 1 5 50 87.9 86.6 86.0 1 5 50 87.4 86.7 86.7 1 5 50 88.2 87.4 87.2 Train 90.7 87.1 85.7 89.7 87.0 86.5 89.9 87.0 86.7 86.0 83.7 82.7 85.0 83.3 83.0 85.9 84.6 84.0 Test 108.0 107.7 109.0 110.7 110.5 113.0 119.61 120.7 121.7 106.6 106.9 108.7 106.6 108.3 109.4 111.3 110.5 111.1 Train 102.2 100.0 99.1 1002.7 100.2 100.0 105.3 102.7 101.0 99.0 97.1 95.9 97.8 | 1611.00712#75 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 77 | Table 2: Density estimation using Concrete relaxations with distinct arity of layers.
# F CHEAT SHEET
Notation: σ(z) = 1/(1 + exp(−z)); LΣE_{k=1..n}{x_k} = log Σ_{k=1}^{n} exp(x_k); log Δ^{n−1} = {x ∈ R^n | x_k ∈ (−∞, 0), LΣE_{k=1..n}{x_k} = 0}.
Columns: Distribution and Domains; Reparameterization / How To Sample; Mass / Density.

G ∼ Gumbel, G ∈ R
  sample: G = −log(−log(U))
  density: exp(−g − exp(−g))

L ∼ Logistic, L ∈ R
  sample: L = log(U) − log(1 − U)
  density: exp(−l) / (1 + exp(−l))^2

X ∼ Logistic(µ, λ), X ∈ R, µ ∈ R, λ ∈ (0, ∞)
  sample: X = (L + µ) / λ
  density: λ exp(−λx + µ) / (1 + exp(−λx + µ))^2

X ∼ Bernoulli(α), X ∈ {0, 1}, α ∈ (0, ∞) | 1611.00712#77 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 78 | X ∼ Bernoulli(α), X ∈ {0, 1}, α ∈ (0, ∞)
  sample: X = 1 if L + log α ≥ 0, and X = 0 otherwise
  mass: P(X = 1) = α / (1 + α)

X ∼ BinConcrete(α, λ), X ∈ (0, 1), α ∈ (0, ∞), λ ∈ (0, ∞)
  sample: X = σ((L + log α) / λ)
  density: λ α x^(−λ−1) (1 − x)^(−λ−1) / (α x^(−λ) + (1 − x)^(−λ))^2

X ∼ Discrete(α), X ∈ {0, 1}^n with Σ_{k=1}^{n} X_k = 1, α ∈ (0, ∞)^n
  sample: X_k = 1 if log α_k + G_k > log α_i + G_i for all i ≠ k, and X_k = 0 otherwise
  mass: P(X_k = 1) = α_k / Σ_{i=1}^{n} α_i

X ∼ Concrete(α, λ), α ∈ (0, ∞)^n, λ ∈ (0, ∞) | 1611.00712#78 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 80 | Table 3: Cheat sheet for the random variables we use in this work. Note that some of these are atypical parameterizations, particularly the Bernoulli and Logistic random variables. The table only assumes that you can sample uniform random numbers U ∼ Uniform(0, 1). From there on it may define random variables and reuse them later on. For example, L ∼ Logistic is defined in the second row, and after that point L represents a Logistic random variable that can be replaced by log U − log(1 − U). Whenever random variables are indexed, e.g. Gk, they represent separate independent calls to a random number generator.
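As a usage aid, here is a minimal NumPy transcription of the samplers in the cheat sheet; the function names are ours, the densities are the ones given in the table, and the final n-ary Concrete sampler simply replaces the Discrete argmax with a tempered softmax.

```python
import numpy as np

def uniform(shape=()):
    return np.random.uniform(1e-8, 1.0 - 1e-8, shape)

def gumbel(shape=()):                 # G = -log(-log(U))
    return -np.log(-np.log(uniform(shape)))

def logistic(shape=()):               # L = log(U) - log(1 - U)
    u = uniform(shape)
    return np.log(u) - np.log(1.0 - u)

def bernoulli(alpha):                 # X = 1 if L + log(alpha) >= 0 else 0
    return float(logistic() + np.log(alpha) >= 0.0)

def bin_concrete(alpha, lam):         # X = sigmoid((L + log(alpha)) / lam)
    return 1.0 / (1.0 + np.exp(-(logistic() + np.log(alpha)) / lam))

def discrete(alpha):                  # one-hot at argmax_k of log(alpha_k) + G_k
    alpha = np.asarray(alpha, dtype=float)
    scores = np.log(alpha) + gumbel(len(alpha))
    return np.eye(len(alpha))[np.argmax(scores)]

def concrete(alpha, lam):             # tempered softmax of the same perturbed logits
    alpha = np.asarray(alpha, dtype=float)
    scores = (np.log(alpha) + gumbel(len(alpha))) / lam
    scores -= scores.max()
    weights = np.exp(scores)
    return weights / weights.sum()
```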
| 1611.00712#80 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00625 | 0 | arXiv:1611.00625v2 [cs.LG] 3 Nov 2016
# TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games
Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier [email protected], [email protected]
March 2, 2022
# Abstract
We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch [9]. This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft.
# Introduction | 1611.00625#0 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 1 | # Introduction
Deep Learning techniques [13] have recently enabled researchers to successfully tackle low-level perception problems in a supervised learning fashion. In the field of Reinforcement Learning this has transferred into the ability to develop agents able to learn to act in high-dimensional input spaces. In particular, deep neural networks have been used to help reinforcement learning scale to environments with visual inputs, allowing them to learn policies in testbeds that previously were completely intractable. For instance, algorithms such as Deep Q-Network (DQN) [14] have been shown to reach human-level performance on most of the classic ATARI 2600 games by learning a controller directly from raw pixels, and without any additional supervision besides the score. Most of the work spawned in this new area has, however, tackled environments where the state is fully observable, the reward function has no or low delay, and the action set is relatively small. To solve the great majority of real life problems, agents must instead be able to handle partial observability, structured and complex dynamics, and noisy and high-dimensional control interfaces. | 1611.00625#1 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 2 | To provide the community with useful research environments, work was done towards building platforms based on videogames such as Torcs [27], Mario AI [20], Unreal's BotPrize [10], the Atari Learning Environment [3], VizDoom [12], and Minecraft [11], all of which have allowed researchers to train deep learning models with imitation learning, reinforcement learning and various decision making algorithms on increasingly difficult problems. Recently there have also been efforts to unite those and many other such environments in one platform to provide a standard interface for interacting with them [4]. We propose a bridge between StarCraft: Brood War, an RTS game with an active AI research community and annual AI competitions [16, 6, 1], and Lua, with examples in Torch [9] (a machine learning library).
# 2 Real-Time Strategy for Games AI | 1611.00625#2 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 3 | # 2 Real-Time Strategy for Games AI
Real-time strategy (RTS) games have historically been a domain of interest of the planning and decision making research communities [5, 2, 6, 16, 17]. This type of games aims to simulate the control of multiple units in a military setting at diï¬erent scales and level of complexity, usually in a ï¬xed-size 2D map, in duel or in small teams. The goal of the player is to collect resources which can be used to expand their control on the map, create buildings and units to ï¬ght oï¬ enemy deployments, and ultimately destroy the opponents. These games exhibit durative moves (with complex game dynamics) with simultaneous actions (all players can give commands to any of their units at any time), and very often partial observability (a âfog of warâ: opponent units not in the vicinity of a playerâs units are not shown). | 1611.00625#3 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 4 | RTS gameplay: Components RTS game play are economy and battles (âmacroâ and âmicroâ respectively): players need to gather resources to build military units and defeat their opponents. To that end, they often have worker units (or extraction structures) that can gather resources needed to build workers, buildings, military units and research upgrades. Workers are often also builders (as in StarCraft), and are weak in ï¬ghts compared to military units. Resources may be of varying degrees of abundance and importance. For instance, in StarCraft minerals are used for everything, whereas gas is only required for advanced buildings or military units, and technology upgrades. Buildings and research deï¬ne technology trees (directed acyclic graphs) and each state of a âtech treeâ allow for the production of diï¬erent unit types and the training of new unit abilities. Each unit and building has a range of sight that provides the player with a view of the map. Parts of the map not in the sight range of the playerâs units are under fog of war and the player cannot observe what happens there. A considerable part of the strategy and the tactics lies in which armies to deploy and where. | 1611.00625#4 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 5 | Military units in RTS games have multiple properties which diï¬er between unit types, such as: attack range (including melee), damage types, armor, speed, area of eï¬ects, invisibility, ï¬ight, and special abilities. Units can have attacks and defenses that counter each others in a rock-paper-scissors fashion, making planning armies a extremely challenging and strategically rich process. An âopeningâ denotes the same thing as in Chess: an early game plan for which the player has to make choices. That is the case in Chess because one can move only one piece at a time (each turn), and in RTS games because, during the development phase, one is economically limited and has to choose which tech paths to pursue. Available resources constrain the technology advancement and the number of units one can produce. As producing buildings and units also take time, the arbitrage between investing in the economy, in technological advancement, and in units production is the crux of the strategy during the whole game. | 1611.00625#5 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 6 | Related work: Classical AI approaches normally involving planning and search [2, 15, 24, 7] are extremely challenged by the combinatorial action space and the complex dynamics of RTS games, making simulation (and thus Monte Carlo tree search) diï¬cult [8, 22]. Other characteristics such as partial observability, the non-obvious quantiï¬cation of the value of the state, and the problem of featurizing a dynamic and structured state contribute to making them an interesting problem, which altogether ultimately also make them an excellent benchmark for AI. As the scope of this paper is not to give a review of RTS AI research, we refer the reader to these surveys about existing research on RTS and StarCraft AI [16, 17].
It is currently tedious to do machine learning research in this domain. Most previous reinforcement learning research involve simple models or limited experimental settings [26, 23]. Other models are trained on oï¬ine datasets of highly skilled players [25, 18, 19, 21]. Contrary to most Atari games [3], RTS games have much higher action spaces and much more structured states. Thus, we advocate here to have not only the pixels as input and keyboard/mouse for commands, as in [3, 4, 12], but also a structured representation of the game state, as in
2 | 1611.00625#6 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 7 | -- main game engine loop:
while true do
  game.receive_player_actions()
  game.compute_dynamics()
  -- our injected code:
  torchcraft.send_state()
  torchcraft.receive_actions()
end

featurize, model = init()
tc = require 'torchcraft'
tc:connect(port)
while not tc.state.game_ended do
  tc:receive()
  features = featurize(tc.state)
  actions = model:forward(features)
  tc:send(tc:tocommand(actions))
end
Figure 1: Simplified client/server code that runs in the game engine (server, first block) and in the machine learning library or framework (client, second block).
[11]. This makes it easier to try a broad variety of models, and may be useful in shaping loss functions for pixel-based models.
Finally, StarCraft: Brood War is a highly popular game (more than 9.5 million copies sold) with professional players, which provides interesting datasets, human feedback, and a good benchmark of what is possible to achieve within the game. There also exists an active academic community that organizes AI competitions.
# 3 Design | 1611.00625#7 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 8 | # 3 Design
The simplistic design of TorchCraft is applicable to any video game and any machine learning library or framework. Our current implementation connects Torch to a low level interface [1] to StarCraft: Brood War. TorchCraft's approach is to dynamically inject a piece of code in the game engine that will be a server. This server sends the state of the game to a client (our machine learning code), and receives commands to send to the game. This is illustrated in Figure 1. The two modules are entirely synchronous, but we provide two modalities of execution based on how we interact with the game:
Game-controlled - we inject a DLL that provides the game interface to the bots, and one that includes all the instructions to communicate with the machine learning client, interpreted by the game as a player (or bot AI). In this mode, the server starts at the beginning of the match and shuts down when that ends. In-between matches it is therefore necessary to re-establish the connection with the client, however this allows for the setting of multiple learning instances extremely easily. | 1611.00625#8 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 9 | Game-attached - we inject a DLL that provides the game interface to the bots, and we interact with it by attaching to the game process and communicating via pipes. In this mode there is no need to re-establish the connection with the game every time, and the control of the game is completely automatized out of the box, however itâs currently impossible to create multiple learning instances on the same guest OS.
Whatever mode one chooses to use, TorchCraft is seen by the AI programmer as a library that provides: connect(), receive() (to get the state), send(commands), and some helper functions about specifics of StarCraft's rules and state representation. TorchCraft also provides an efficient way to store game frames data from past (played or observed) games so that existing state ("replays", "traces") can be re-examined.
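To picture the synchronous exchange, here is a schematic Python-style pseudo-client; the real client is Lua/Torch (see Figure 1), only connect()/receive()/send() mirror the functions named above, and Client, featurize, model and to_commands are hypothetical stand-ins rather than TorchCraft's actual API.

```python
# Schematic pseudo-client for the synchronous state/command protocol.
def run_episode(Client, featurize, model, to_commands, port):
    tc = Client()
    tc.connect(port)                   # attach to the server injected in the game
    state = tc.receive()               # blocking: the game sends its current state
    while not state["game_ended"]:
        features = featurize(state)    # build model inputs from the structured state
        actions = model(features)      # choose actions, e.g. with a policy network
        tc.send(to_commands(actions))  # reply with commands; the game then advances
        state = tc.receive()           # next state, keeping client and game in lock-step
    return state
```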
# 4 Conclusion | 1611.00625#9 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 10 | # 4 Conclusion
We presented several works that established RTS games as a source of interesting and relevant problems for the AI research community to work on. We believe that an efficient bridge between low level existing APIs and machine learning frameworks/libraries would enable and foster research on such games. We presented TorchCraft: a library that enables state-of-the-art machine learning research on real game data by interfacing Torch with StarCraft: BroodWar. TorchCraft has already been used in reinforcement learning experiments on StarCraft, which led to the results in [23] (soon to be open-sourced too and included within TorchCraft).
# 5 Acknowledgements
We would like to thank Yann LeCun, Léon Bottou, Pushmeet Kohli, Subramanian Ramamoorthy, and Phil Torr for the continuous feedback and help with various aspects of this work. Many thanks to David Churchill for proofreading early versions of this paper.
# References
[1] BWAPI: Brood war api, an api for interacting with starcraft: Broodwar (1.16.1). https://bwapi. github.io/, 2009â2015. | 1611.00625#10 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 11 | [2] Aha, D. W., Molineaux, M., and Ponsen, M. Learning to win: Case-based plan selection in a real-time strategy game. In International Conference on Case-Based Reasoning (2005), Springer, pp. 5â20.
[3] Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The arcade learning environment: An evaluation platform for general agents. Journal of Artiï¬cial Intelligence Research (2012).
[4] Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J.,
and Zaremba, W. Openai gym. arXiv preprint arXiv:1606.01540 (2016).
[5] Buro, M., and Furtak, T. Rts games and real-time ai research. In Proceedings of the Behavior
Representation in Modeling and Simulation Conference (BRIMS) (2004), vol. 6370.
[6] Churchill, D. Starcraft ai competition. http://www.cs.mun.ca/~dchurchill/ starcraftaicomp/, 2011â2016.
[7] Churchill, D. Heuristic Search Techniques for Real-Time Strategy Games. PhD thesis, University
of Alberta, 2016. | 1611.00625#11 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 12 | [7] Churchill, D. Heuristic Search Techniques for Real-Time Strategy Games. PhD thesis, University
of Alberta, 2016.
[8] Churchill, D., Saffidine, A., and Buro, M. Fast heuristic search for rts game combat
scenarios. In AIIDE (2012).
[9] Collobert, R., Kavukcuoglu, K., and Farabet, C. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop (2011), no. EPFL-CONF-192376.
[10] Hingston, P. A turing test for computer game bots. IEEE Transactions on Computational
Intelligence and AI in Games 1, 3 (2009), 169â186.
[11] Johnson, M., Hofmann, K., Hutton, T., and Bignell, D. The malmo platform for artiï¬cial intelligence experimentation. In International joint conference on artiï¬cial intelligence (IJCAI) (2016).
[12] Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and JaÅkowski, W. Vizdoom: A doom- based ai research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097 (2016). | 1611.00625#12 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 13 | [13] LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature 521, 7553 (2015), 436â444. [14] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529â533.
[15] Ontañón, S., Mishra, K., Sugandh, N., and Ram, A. Case-based planning and execution for real-time strategy games. In International Conference on Case-Based Reasoning (2007), Springer Berlin Heidelberg, pp. 164â178.
[16] Ontanón, S., Synnaeve, G., Uriarte, A., Richoux, F., Churchill, D., and Preuss, M. A survey of real-time strategy game ai research and competition in starcraft. Computational Intelligence and AI in Games, IEEE Transactions on 5, 4 (2013), 293â311.
[17] Robertson, G., and Watson, I. A review of real-time strategy game ai. AI Magazine 35, 4 | 1611.00625#13 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 14 | [17] Robertson, G., and Watson, I. A review of real-time strategy game ai. AI Magazine 35, 4
(2014), 75â104.
[18] Synnaeve, G. Bayesian programming and learning for multi-player video games: application to RTS AI. PhD thesis, PhD thesis, Institut National Polytechnique de GrenobleâINPG, 2012. [19] Synnaeve, G., and Bessiere, P. A dataset for starcraft ai & an example of armies clustering.
arXiv preprint arXiv:1211.4552 (2012).
[20] Togelius, J., Karakovskiy, S., and Baumgarten, R. The 2009 mario ai competition. In
IEEE Congress on Evolutionary Computation (2010), IEEE, pp. 1â8.
[21] Uriarte, A. Starcraft brood war data mining. http://nova.wolfwork.com/dataMining.html,
2015.
[22] Uriarte, A., and Ontañón, S. Game-tree search over high-level game states in rts games. In
Tenth Artiï¬cial Intelligence and Interactive Digital Entertainment Conference (2014). | 1611.00625#14 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 15 | Tenth Artiï¬cial Intelligence and Interactive Digital Entertainment Conference (2014).
[23] Usunier, N., Synnaeve, G., Lin, Z., and Chintala, S. Episodic exploration for deep deterministic policies: An application to starcraft micromanagement tasks. arXiv preprint arXiv:1609.02993 (2016).
[24] Weber, B. Reactive planning for micromanagement in rts games. Department of Computer
Science, University of California, Santa Cruz (2014).
[25] Weber, B. G., and Mateas, M. A data mining approach to strategy prediction. In 2009 IEEE
Symposium on Computational Intelligence and Games (2009), IEEE, pp. 140â147.
[26] Wender, S., and Watson, I. Applying reinforcement learning to small scale combat in the real-time strategy game starcraft: broodwar. In Computational Intelligence and Games (CIG), 2012 IEEE Conference on (2012), IEEE, pp. 402â408.
[27] Wymann, B., Espié, E., Guionneau, C., Dimitrakakis, C., Coulom, R., and Sumner, A. Torcs, the open racing car simulator. Software available at http://torcs. sourceforge. net (2000).
# A Frame data | 1611.00625#15 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 17 | Received update: {
  // Number of frames in the current game
  // NB: a "game" can be composed of several battles
  frame_from_bwapi : int
  units_myself : {
    // Unit ID
    int : {
      // Unit target ID
      target : int
      // Unit target position
      targetpos : {
        1 : int  // Absolute x
        2 : int  // Absolute y
      }
      // Type of air weapon
      awtype : int
      // Type of ground weapon
      gwtype : int
      // Number of frames before next air weapon possible attack
      awcd : int
      // Number of hit points
      hp : int
      // Number of energy / mana poi | 1611.00625#17 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 18 |       // Number of frames before next air weapon possible attack
      awcd : int
      // Number of hit points
      hp : int
      // Number of energy / mana points, if any
      energy : int
      // Unit type
      type : int
      position : {
        1 : int  // Absolute x
        2 : int  // Absolute y
      }
      // Number of armor points
      armor : int
      // Number of frames before next ground weapon possible attack
      gwcd : int
      // Ground weapon attack damage
      gwattack : int
      // Protoss shield points (like HP, but with special properties)
      shield : int
      // Air weapon attack damage
      awattack : int
      // Size of the unit
      size : int
      // Whether unit is an enemy or not
      enemy : bool
      // Whether unit is idle, i.e. not following any orders currently
      idle : bool
      // Ground weapon max range
      gwrange : int
      // Air | 1611.00625#18 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1611.00625 | 19 |       // Whether unit is idle, i.e. not following any orders currently
      idle : bool
      // Ground weapon max range
      gwrange : int
      // Air weapon max range
      awrange : int
    }
  }
  // Same format as "units_myself" ...
  units_enemy : { }
} | 1611.00625#19 | TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games | We present TorchCraft, a library that enables deep learning research on
Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it
easier to control these games from a machine learning framework, here Torch.
This white paper argues for using RTS games as a benchmark for AI research, and
describes the design and components of TorchCraft. | http://arxiv.org/pdf/1611.00625 | Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier | cs.LG, cs.AI, I.2.1 | null | null | cs.LG | 20161101 | 20161103 | [
{
"id": "1606.01540"
},
{
"id": "1605.02097"
},
{
"id": "1609.02993"
}
] |
1610.10099 | 1 | Abstract We present a novel neural network for processing sequences. The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field. The resulting network has two core properties: it runs in time that is linear in the length of the sequences and it sidesteps the need for excessive memorization. The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks. The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional | 1610.10099#1 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 3 | # 1. Introduction
In neural language modelling, a neural network estimates a distribution over sequences of words or characters that belong to a given language (Bengio et al., 2003). In neural machine translation, the network estimates a distribution over sequences in the target language conditioned on a given sequence in the source language. The network can be thought of as composed of two parts: a source network (the encoder) that encodes the source sequence into a representation and a target network (the decoder) that uses the
Figure 1. The architecture of the ByteNet. The target decoder (blue) is stacked on top of the source encoder (red). The decoder generates the variable-length target sequence using dynamic unfolding.
representation of the source encoder to generate the target sequence (Kalchbrenner & Blunsom, 2013). | 1610.10099#3 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 4 | representation of the source encoder to generate the target sequence (Kalchbrenner & Blunsom, 2013).
Recurrent neural networks (RNN) are powerful sequence models (Hochreiter & Schmidhuber, 1997) and are widely used in language modelling (Mikolov et al., 2010), yet they have a potential drawback. RNNs have an inherently serial structure that prevents them from being run in parallel along the sequence length during training and evaluation. Forward and backward signals in a RNN also need to traverse the full distance of the serial path to reach from one token in the sequence to another. The larger the distance, the harder it is to learn the dependencies between the tokens (Hochreiter et al., 2001).
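By contrast, the dilated convolutions mentioned in the abstract let the receptive field grow multiplicatively with depth, so distant tokens are connected by short paths; the snippet below is only an illustration, and the kernel size and dilation schedule are placeholders rather than the ByteNet's exact configuration.

```python
# Receptive field of stacked 1-D convolutions with doubling dilations, compared
# with the serial path an RNN must traverse to connect the same two positions.
def conv_receptive_field(kernel_size, dilations):
    field = 1
    for d in dilations:
        field += (kernel_size - 1) * d
    return field

dilations = [1, 2, 4, 8, 16]                 # illustrative schedule
span = conv_receptive_field(3, dilations)    # 63 positions after only 5 layers
rnn_path = span - 1                          # an RNN signal crosses 62 sequential steps
print(span, rnn_path)
```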
A number of neural architectures have been proposed for modelling translation, such as encoder-decoder networks (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Kaiser & Bengio, 2016), networks with attentional pooling (Bahdanau et al., 2014) and two-dimensional networks (Kalchbrenner et al., 2016a). Despite the generally good performance, the proposed models
| 1610.10099#4 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 5 |
Figure 2. Dynamic unfolding in the ByteNet architecture. At each step the decoder is conditioned on the source representation produced by the encoder for that step, or simply on no representation for steps beyond the extended length |t̂|. The decoding ends when the target network produces an end-of-sequence (EOS) symbol.
either have running time that is super-linear in the length of the source and target sequences, or they process the source sequence into a constant size representation, burdening the model with a memorization step. Both of these drawbacks grow more severe as the length of the sequences increases. | 1610.10099#5 | Neural Machine Translation in Linear Time | We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | http://arxiv.org/pdf/1610.10099 | Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu | cs.CL, cs.LG | 9 pages | null | cs.CL | 20161031 | 20170315 | [] |
1610.10099 | 6 | We present a family of encoder-decoder neural networks that are characterized by two architectural mechanisms aimed to address the drawbacks of the conventional approaches mentioned above. The first mechanism involves the stacking of the decoder on top of the representation of the encoder in a manner that preserves the temporal resolution of the sequences; this is in contrast with architectures that encode the source into a fixed-size representation (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014). The second mechanism is the dynamic unfolding mechanism that allows the network to process in a simple and efficient way source and target sequences of different lengths (Sect. 3.2).
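A schematic of dynamic unfolding, following the description in the Figure 2 caption: the decoder consumes the encoder output position by position, receives no source representation once it runs past the encoder's (extended) length, and stops when it emits EOS. The decoder_step callable, the EOS id and the way the extended length is chosen are placeholders, not the paper's exact formulation.

```python
import numpy as np

def dynamic_unfold(encoder_states, decoder_step, eos_id, max_len=1000):
    # encoder_states: one vector per position, already extended to the estimated
    # target length, so the source's temporal resolution is preserved.
    dim = encoder_states.shape[1]
    target, state = [], None
    for t in range(max_len):
        source_t = encoder_states[t] if t < len(encoder_states) else np.zeros(dim)
        token, state = decoder_step(source_t, target, state)
        target.append(token)
        if token == eos_id:          # decoding ends at the end-of-sequence symbol
            break
    return target
```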
We apply the ByteNet model to strings of characters for character-level language modelling and character-to-character machine translation. We evaluate the decoder network on the Hutter Prize Wikipedia task (Hutter, 2012), where it achieves the state-of-the-art performance of 1.31 bits/character. We further evaluate the encoder-decoder network on character-to-character machine translation on the English-to-German WMT benchmark, where it achieves a state-of-the-art BLEU score of 22.85 (0.380 bits/character) and 25.53 (0.389 bits/character) on the 2014 and 2015 test sets, respectively. On the character-level machine translation task, ByteNet betters a comparable version of GNMT (Wu et al., 2016a) that is a state-of-the-art system. These results show that deep CNNs are simple, scalable and effective architectures for challenging linguistic processing tasks.
The ByteNet is the instance within this family of models that uses one-dimensional convolutional neural networks (CNN) of fixed depth for both the encoder and the decoder (Fig. 1). The two CNNs use increasing factors of dilation to rapidly grow the receptive fields; a similar technique is also used in (van den Oord et al., 2016a). The convolutions in the decoder CNN are masked to prevent the network from seeing future tokens in the target sequence (van den Oord et al., 2016b).
The network has beneficial computational and learning properties. From a computational perspective, the network has a running time that is linear in the length of the source and target sequences (up to a constant c ≈ log d, where d is the size of the desired dependency field). The computation in the encoder during training and decoding and in the decoder during training can also be run efficiently in parallel along the sequences (Sect. 2). From a learning perspective, the representation of the source sequence in the ByteNet is resolution preserving; the representation sidesteps the need for memorization and allows for maximal bandwidth between encoder and decoder. In addition, the distance traversed by forward and backward signals between any input and output tokens corresponds to the fixed depth of the networks and is largely independent of the distance between the tokens. Dependencies over large distances are connected by short paths and can be learnt more easily.

The paper is organized as follows. Section 2 lays out the background and some desiderata for neural architectures underlying translation models. Section 3 defines the proposed family of architectures and the specific convolutional instance (ByteNet) used in the experiments. Section 4 analyses ByteNet as well as existing neural translation models based on the desiderata set out in Section 2. Section 5 reports the experiments on language modelling and Section 6 reports the experiments on character-to-character machine translation.
# 2. Neural Translation Model
Given a string s from a source language, a neural translation model estimates a distribution p(t|s) over strings t of a target language. The distribution indicates the probability of a string t being a translation of s. A product of conditionals over the tokens in the target t = t0, ..., tN leads to a tractable formulation of the distribution:
p(t|s) = \prod_{i=0}^{N} p(t_i | t_{<i}, s)    (1)
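To make Eq. 1 concrete, the following is a minimal sketch (our own illustration, not code from the paper) of how per-step conditionals combine into a sequence log-probability; the array names and shapes are assumptions.

```python
import numpy as np

def sequence_log_prob(step_probs, target_ids):
    """Combine the conditionals p(t_i | t_<i, s) of Eq. 1 into log p(t|s).

    step_probs: array [N + 1, vocab]; row i is the model's distribution over
                token t_i given the previous target tokens and the source s.
    target_ids: the N + 1 actual target token ids.
    """
    rows = np.arange(len(target_ids))
    # The sum of log-conditionals equals the log of the product in Eq. 1.
    return float(np.sum(np.log(step_probs[rows, target_ids])))
```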
Each conditional factor expresses complex and long-range dependencies among the source and target tokens. The strings are usually sentences of the respective languages; the tokens are words or, as in our case, characters. The network that models p(t|s) is composed of two parts: a source network (the encoder) that processes the source string into a representation and a target network (the decoder) that uses the source representation to generate the target string (Kalchbrenner & Blunsom, 2013). The decoder functions as a language model for the target language.
A neural translation model has some basic properties. The decoder is autoregressive in the target tokens and the model is sensitive to the ordering of the tokens in the source and target strings. It is also useful for the model to be able to assign a non-zero probability to any string in the target language and retain an open vocabulary.
# 2.1. Desiderata
Figure 3. Left: Residual block with ReLUs (He et al., 2016) adapted for decoders. Right: Residual Multiplicative Block adapted for decoders and corresponding expansion of the MU (Kalchbrenner et al., 2016b).
Beyond these basic properties the definition of a neural translation model does not determine a unique neural architecture, so we aim at identifying some desiderata.
First, the running time of the network should be linear in the length of the source and target strings. This ensures that the model is scalable to longer strings, which is the case when using characters as tokens.
The decoder is a language model that is formed of one-dimensional convolutional layers that are masked (Sect. 3.4) and use dilation (Sect. 3.5). The encoder processes the source string into a representation and is formed of one-dimensional convolutional layers that use dilation but are not masked. Figure 1 depicts the two networks and their combination.
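As an illustration of this masked/unmasked distinction (a toy sketch we add here, not the paper's implementation), a one-dimensional dilated convolution can be made causal for the decoder by padding only on the left, while the encoder pads on both sides; the function name and padding choices are our own.

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1, causal=True):
    """Toy 1-D dilated convolution over a signal x with filter taps w.

    causal=True mimics the masked decoder: the output at position t only
    sees x[:t + 1]. causal=False mimics the unmasked encoder, which may
    look at both past and future source positions.
    """
    k = len(w)
    span = dilation * (k - 1)
    pad = (span, 0) if causal else (span // 2, span - span // 2)
    xp = np.pad(x, pad)
    return np.array([sum(w[j] * xp[t + j * dilation] for j in range(k))
                     for t in range(len(x))])
```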
The use of operations that run in parallel along the sequence length can also be beneficial for reducing computation time.
Second, the size of the source representation should be linear in the length of the source string, i.e. it should be resolution preserving, and not have constant size. This is to avoid burdening the model with an additional memorization step before translation. In more general terms, the size of a representation should be proportional to the amount of information it represents or predicts.
# 3.1. Encoder-Decoder Stacking
A notable feature of the proposed family of architectures is the way the encoder and the decoder are connected. To maximize the representational bandwidth between the encoder and the decoder, we place the decoder on top of the representation computed by the encoder. This is in contrast to models that compress the source representation into a fixed-size vector (Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014) or that pool over the source representation with a mechanism such as attentional pooling (Bahdanau et al., 2014).
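A minimal sketch of this stacking (our own illustration; the paper does not specify this exact interface, and concatenation along channels is just one plausible way to expose the encoder output to the decoder):

```python
import numpy as np

def stack_decoder_inputs(encoder_repr, target_embed):
    """Pair the encoder output with the target embedding at each position.

    encoder_repr: [length, d_src] resolution-preserving source representation.
    target_embed: [length, d_tgt] embeddings of the already generated tokens.
    Concatenating along channels gives the decoder full per-position access
    to the source representation, with no pooling or fixed-size bottleneck.
    """
    assert encoder_repr.shape[0] == target_embed.shape[0]
    return np.concatenate([encoder_repr, target_embed], axis=-1)
```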
Third, the path traversed by forward and backward signals in the network (between input and output tokens) should be short. Shorter paths whose length is largely decoupled from the sequence distance between the two tokens have the potential to better propagate the signals (Hochreiter et al., 2001) and to let the network learn long-range dependencies more easily.
# 3.2. Dynamic Unfolding
An encoder and a decoder network that process sequences of different lengths cannot be directly connected due to the different sizes of the computed representations. We circumvent this issue via a mechanism which we call dynamic unfolding, which works as follows.
# 3. ByteNet
We aim at building neural language and translation models that capture the desiderata set out in Sect. 2.1. The proposed ByteNet architecture is composed of a decoder that is stacked on an encoder (Sect. 3.1) and generates variable-length outputs via dynamic unfolding (Sect. 3.2).
Given source and target sequences s and t with respective lengths |s| and |t|, one first chooses a sufficiently tight upper bound |t̂| on the target length |t| as a linear function of the source length |s|:

|t̂| = a|s| + b    (2)
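A one-line helper for this bound might look as follows (illustrative only; the function name and the rounding to an integer length are our assumptions, and the defaults a = 1.20, b = 0 anticipate the English-to-German setting described below):

```python
import math

def target_length_bound(source_length, a=1.20, b=0):
    """Upper bound |t^| on the target length as a linear function of |s| (Eq. 2)."""
    return int(math.ceil(a * source_length + b))
```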
[Figure 5: scatter of English source sentence lengths against German target sentence lengths.]
Figure 4. Recurrent ByteNet variants of the ByteNet architecture. Left: Recurrent ByteNet with convolutional source network and recurrent target network. Right: Recurrent ByteNet with bidirectional recurrent source network and recurrent target network. The latter architecture is a strict generalization of the RNN Enc-Dec network.
The tight upper bound |t̂| is chosen in such a way that, on the one hand, it is greater than the actual length |t| in almost all cases and, on the other hand, it does not increase excessively the amount of computation that is required. Once a linear relationship is chosen, one designs the source encoder so that, given a source sequence of length |s|, the encoder outputs a representation of the established length |t̂|. In our case, we let a = 1.20 and b = 0 when translating from English into German, as German sentences tend to be somewhat longer than their English counterparts (Fig. 5). In this manner the representation produced by the encoder can be efficiently computed, while maintaining high bandwidth and being resolution-preserving. Once the encoder representation is computed, we let the decoder unfold step-by-step over the encoder representation until the decoder itself outputs an end-of-sequence symbol; the unfolding process may freely proceed beyond the estimated length |t̂| of the encoder representation. Figure 2 gives an example of dynamic unfolding.
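The unfolding loop can be sketched as follows (a hedged illustration: decode_step, embed, the EOS id and the max_extra safety cap are placeholders we introduce, not the paper's API):

```python
def dynamically_unfold(encoder_repr, decode_step, embed, eos_id, max_extra=50):
    """Generate target tokens step by step over the encoder representation.

    encoder_repr: per-position source encodings, length |t^| (Eq. 2).
    decode_step:  function(prefix_embeddings, source_encoding) -> next token id.
    """
    target, step = [], 0
    while True:
        # Condition on the encoder output for this step, or on no
        # representation once decoding proceeds beyond the length |t^|.
        source_t = encoder_repr[step] if step < len(encoder_repr) else None
        token = decode_step([embed(t) for t in target], source_t)
        if token == eos_id or step >= len(encoder_repr) + max_extra:
            break
        target.append(token)
        step += 1
    return target
```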
# 3.3. Input Embedding Tensor
Given the target sequence t = t0, ..., tn, the ByteNet decoder embeds each of the first n tokens t0, ..., tn−1 via a look-up table (the n tokens t1, ..., tn serve as targets for the predictions). The resulting embeddings are concatenated into a tensor of size n × 2d, where d is the number of inner channels in the network.
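A small sketch of building this input tensor (the toy sizes, the random initialization of the look-up table and the helper name are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d = 300, 4                       # toy sizes for illustration
lookup = rng.standard_normal((vocab_size, 2 * d))

def input_embedding_tensor(target_ids):
    """Embed tokens t_0..t_{n-1}; the shifted tokens t_1..t_n are the targets."""
    inputs = np.asarray(target_ids[:-1])     # first n tokens
    targets = np.asarray(target_ids[1:])     # prediction targets
    embedded = lookup[inputs]                # tensor of size n x 2d
    return embedded, targets
```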
# 3.5. Dilation
The masked convolutions use dilation to increase the receptive field of the target network (Chen et al., 2014; Yu & Koltun, 2015). Dilation makes the receptive field grow exponentially in terms of the depth of the networks, as opposed to linearly. We use a dilation scheme whereby the dilation rates are doubled every layer up to a maximum rate r (for our experiments r = 16). The scheme is repeated multiple times in the network, always starting from a dilation rate of 1 (van den Oord et al., 2016a; Kalchbrenner et al., 2016b).
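The dilation schedule just described can be generated as follows (the helper name is ours; the example mirrors the six sets of five layers used later in the experiments):

```python
def dilation_schedule(num_sets, max_rate=16):
    """Rates double every layer up to max_rate; the pattern repeats per set."""
    per_set, rate = [], 1
    while rate <= max_rate:
        per_set.append(rate)
        rate *= 2
    return per_set * num_sets

# Example: six sets of five layers each.
print(dilation_schedule(6))   # [1, 2, 4, 8, 16, 1, 2, 4, 8, 16, ...]
```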
# 3.6. Residual Blocks
Each layer is wrapped in a residual block that contains additional convolutional layers with filters of size 1 × 1 (He et al., 2016). We adopt two variants of the residual blocks: one with ReLUs, which is used in the machine translation experiments, and one with Multiplicative Units (Kalchbrenner et al., 2016b), which is used in the language modelling experiments. Figure 3 diagrams the two variants of the blocks. In both cases, we use layer normalization (Ba et al., 2016) before the activation function, as it is well suited to sequence processing where computing the activation statistics over the following future tokens (as would be done by batch normalization) must be avoided. After a series of residual blocks of increased dilation, the network applies one more convolution and ReLU followed by a convolution and a final softmax layer.
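A sketch of the ReLU variant in PyTorch, following our reading of Figure 3 (left); the exact placement of the normalizations and the causal left-padding are assumptions, so treat this as an illustration rather than the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlockReLU(nn.Module):
    """Sketch of the ReLU residual block (our reading of Fig. 3, left).

    Channel sizes follow the figure (2d -> d -> d -> 2d); the masked,
    dilated convolution is made causal here by left-padding the input.
    """
    def __init__(self, d, kernel_size=3, dilation=1):
        super().__init__()
        self.ln_in = nn.LayerNorm(2 * d)
        self.conv_in = nn.Conv1d(2 * d, d, kernel_size=1)
        self.ln_mid = nn.LayerNorm(d)
        self.pad = (kernel_size - 1) * dilation           # causal left padding
        self.conv_masked = nn.Conv1d(d, d, kernel_size, dilation=dilation)
        self.ln_out = nn.LayerNorm(d)
        self.conv_out = nn.Conv1d(d, 2 * d, kernel_size=1)

    @staticmethod
    def norm_relu(ln, x):
        # x: [batch, channels, length]; LayerNorm acts over the channel axis.
        return torch.relu(ln(x.transpose(1, 2)).transpose(1, 2))

    def forward(self, x):                                 # x: [batch, 2d, length]
        h = self.conv_in(self.norm_relu(self.ln_in, x))   # -> [batch, d, length]
        h = self.norm_relu(self.ln_mid, h)
        h = F.pad(h, (self.pad, 0))                       # pad on the left only
        h = self.conv_masked(h)                           # -> [batch, d, length]
        h = self.conv_out(self.norm_relu(self.ln_out, h)) # -> [batch, 2d, length]
        return x + h                                      # residual connection
```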
# 3.4. Masked One-dimensional Convolutions
The decoder applies masked one-dimensional convolutions (van den Oord et al., 2016b) to the input embedding tensor that have a masked kernel of size k. The masking ensures that information from future tokens does not affect the prediction of the current token. The operation can be implemented either by zeroing out some of the weights of a wider kernel of size 2k − 1 or by padding the input map.
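A toy sketch of the wider-kernel variant of the masking (our own illustration): a filter of size 2k − 1 whose k − 1 rightmost taps are zeroed can only see the current and past positions.

```python
import numpy as np

def masked_conv1d(x, kernel, k):
    """x: 1-D signal; kernel: np.ndarray of 2k - 1 taps applied centred on t.

    Zeroing the k - 1 rightmost taps removes every future position from the
    receptive field, so output[t] depends only on x[:t + 1].
    """
    kernel = kernel.copy()
    kernel[k:] = 0.0                       # taps that would look at the future
    pad = k - 1
    xp = np.pad(x, (pad, pad))
    return np.array([np.dot(xp[t:t + 2 * k - 1], kernel) for t in range(len(x))])
```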
# 4. Model Comparison
In this section we analyze the properties of various previously introduced neural translation models as well as the ByteNet family of models. For the sake of a more complete analysis, we include two recurrent ByteNet variants (which we do not evaluate in the experiments).
Model                 NetS   NetT   Time               RP    PathS            PathT
RCTM 1                CNN    RNN    |S||S| + |T|       no    |S|              |T|
RCTM 2                CNN    RNN    |S||S| + |T|       yes   |S|              |T|
RNN Enc-Dec           RNN    RNN    |S| + |T|          no    |S| + |T|        |T|
RNN Enc-Dec Att       RNN    RNN    |S||T|             yes   1                |T|
Grid LSTM             RNN    RNN    |S||T|             yes   |S| + |T|        |S| + |T|
Extended Neural GPU   cRNN   cRNN   |S||S| + |S||T|    yes   |S|              |T|
Recurrent ByteNet     RNN    RNN    |S| + |T|          yes   max(|S|, |T|)    |T|
Recurrent ByteNet     CNN    RNN    c|S| + |T|         yes   c                |T|
ByteNet               CNN    CNN    c|S| + c|T|        yes   c                c
Table 1. Properties of various neural translation models.
# 4.1. Recurrent ByteNets
The ByteNet is composed of two stacked encoder and decoder networks where the decoder network dynamically adapts to the output length. This way of combining the networks is not tied to the networks being strictly convolutional. We may consider two variants of the ByteNet that use recurrent networks for one or both of the networks (see Figure 4). The first variant replaces the convolutional decoder with a recurrent one that is similarly stacked and dynamically unfolded. The second variant also replaces the convolutional encoder with a recurrent encoder, e.g. a bidirectional RNN. The target RNN is then placed on top of the source RNN. Considering the latter Recurrent ByteNet, we can see that the RNN Enc-Dec network (Sutskever et al., 2014; Cho et al., 2014) is a Recurrent ByteNet where all connections between source and target, except for the first one that connects s0 and t0, have been severed. The Recurrent ByteNet is a generalization of the RNN Enc-Dec and, modulo the type of weight-sharing scheme, so is the convolutional ByteNet.
The first column indicates the time complexity of the network as a function of the length of the sequences and is denoted by Time. The other two columns NetS and NetT indicate, respectively, whether the source and the target network use a convolutional structure (CNN) or a recurrent one (RNN); a CNN structure has the advantage that it can be run in parallel along the length of the sequence. The second (resolution preservation) desideratum corresponds to the RP column, which indicates whether the source representation in the network is resolution preserving. Finally, the third desideratum (short forward and backward flow paths) is reflected by two columns. The PathS column corresponds to the length in layer steps of the shortest path between a source token and any output target token. Similarly, the PathT column corresponds to the length of the shortest path between an input target token and any output target token. Shorter paths lead to better forward and backward signal propagation.
# 4.2. Comparison of Properties
In our comparison we consider the following neural translation models: the Recurrent Continuous Translation Model (RCTM) 1 and 2 (Kalchbrenner & Blunsom, 2013); the RNN Enc-Dec (Sutskever et al., 2014; Cho et al., 2014); the RNN Enc-Dec Att with the attentional pooling mechanism (Bahdanau et al., 2014), of which there are a few variations (Luong et al., 2015; Chung et al., 2016a); the Grid LSTM translation model (Kalchbrenner et al., 2016a) that uses a multi-dimensional architecture; the Extended Neural GPU model (Kaiser & Bengio, 2016) that has a convolutional RNN architecture; the ByteNet and the two Recurrent ByteNet variants.
Table 1 summarizes the properties of the models. The ByteNet, the Recurrent ByteNets and the RNN Enc-Dec are the only networks that have linear running time (up to the constant c). The RNN Enc-Dec, however, does not preserve the source sequence resolution, a feature that aggravates learning for long sequences such as those that appear in character-to-character machine translation (Luong & Manning, 2016). The RCTM 2, the RNN Enc-Dec Att, the Grid LSTM and the Extended Neural GPU do preserve the resolution, but at a cost of a quadratic running time. The ByteNet stands out also for its Path properties. The dilated structure of the convolutions connects any two source or target tokens in the sequences by way of a small number of network layers corresponding to the depth of the source or target networks. For character sequences where learning long-range dependencies is important, paths that are sub-linear in the distance are advantageous.
Our comparison criteria reflect the desiderata set out in Sect. 2.1. We separate the first (computation time) desideratum into three columns.
Model                                        Inputs        Outputs       WMT '14   WMT '15
Phrase Based MT (Freitag et al., 2014;
  Williams et al., 2015)                     phrases       phrases       20.7      24.0
RNN Enc-Dec (Luong et al., 2015)             words         words         11.3
Reverse RNN Enc-Dec (Luong et al., 2015)     words         words         14.0
RNN Enc-Dec Att (Zhou et al., 2016)          words         words         20.6
RNN Enc-Dec Att (Luong et al., 2015)         words         words         20.9
GNMT (RNN Enc-Dec Att) (Wu et al., 2016a)    word-pieces   word-pieces   24.61
RNN Enc-Dec Att (Chung et al., 2016b)        BPE           BPE           19.98     21.72
RNN Enc-Dec Att (Chung et al., 2016b)        BPE           char          21.33     23.45
GNMT (RNN Enc-Dec Att) (Wu et al., 2016a)    char          char          22.62
ByteNet                                      char          char          23.75     26.26
Table 2. BLEU scores on En-De WMT NewsTest 2014 and 2015 test sets.
Stacked LSTM (Graves, 2013)                            1.67
GF-LSTM (Chung et al., 2015)                           1.58
Grid-LSTM (Kalchbrenner et al., 2016a)                 1.47
Layer-normalized LSTM (Chung et al., 2016a)            1.46
MI-LSTM (Wu et al., 2016b)                             1.44
Recurrent Memory Array Structures (Rocki, 2016)        1.40
HM-LSTM (Chung et al., 2016a)                          1.40
Layer Norm HyperLSTM (Ha et al., 2016)                 1.38
Large Layer Norm HyperLSTM (Ha et al., 2016)           1.34
Recurrent Highway Networks (Srivastava et al., 2015)   1.32
ByteNet Decoder                                        1.31
                 WMT Test '14   WMT Test '15
Bits/character   0.521          0.532
BLEU             23.75          26.26
Table 4. Bits/character with respective BLEU score achieved by the ByteNet translation model on the English-to-German WMT translation task.
Table 3. Negative log-likelihood results in bits/byte on the Hutter Prize Wikipedia benchmark.
Table 3 lists recent results of various neural sequence models on the Wikipedia dataset. All the results except for the ByteNet result are obtained using some variant of the LSTM recurrent neural network (Hochreiter & Schmidhuber, 1997). The ByteNet decoder achieves 1.31 bits/character on the test set.
# 5. Character Prediction
We first evaluate the ByteNet Decoder separately on a character-level language modelling benchmark. We use the Hutter Prize version of the Wikipedia dataset and follow the standard split where the first 90 million bytes are used for training, the next 5 million bytes are used for validation and the last 5 million bytes are used for testing (Chung et al., 2015). The total number of characters in the vocabulary is 205.
The ByteNet Decoder that we use for the result has 30 residual blocks split into six sets of five blocks each; for the five blocks in each set the dilation rates are, respectively, 1, 2, 4, 8 and 16. The masked kernel has size 3. This gives a receptive field of 315 characters. The number of hidden units d is 512. For this task we use residual multiplicative blocks (Fig. 3 Right). For the optimization we use Adam (Kingma & Ba, 2014) with a learning rate of 0.0003 and a weight decay term of 0.0001. We apply dropout to the last ReLU layer before the softmax, dropping units with a probability of 0.1. We do not reduce the learning rate during training. At each step we sample a batch of sequences of 500 characters each, use the first 100 characters as the minimum context and predict the latter 400 characters.
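For convenience, these hyperparameters can be collected as follows (a sketch; the dictionary keys and the slicing helper are our own, not the paper's configuration format):

```python
decoder_config = {
    "residual_blocks": 30,                  # six sets of five blocks
    "dilation_rates": [1, 2, 4, 8, 16] * 6,
    "masked_kernel_size": 3,
    "hidden_units": 512,
    "block_variant": "multiplicative",      # Fig. 3, right
    "optimizer": {"name": "adam", "learning_rate": 3e-4, "weight_decay": 1e-4},
    "dropout_before_softmax": 0.1,
}

def split_training_sequence(chars, context=100):
    """First 100 characters are minimum context; the latter 400 are predicted."""
    return chars[:context], chars[context:]
```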
# 6. Character-Level Machine Translation
We evaluate the full ByteNet on the WMT English to German translation task. We use NewsTest 2013 for validation and NewsTest 2014 and 2015 for testing. The English and German strings are encoded as sequences of characters; no explicit segmentation into words or morphemes is applied to the strings. The outputs of the network are strings of characters in the target language. We keep 323 characters in the German vocabulary and 296 in the English vocabulary.
The ByteNet used in the experiments has 30 residual blocks in the encoder and 30 residual blocks in the decoder. As in the ByteNet Decoder, the residual blocks are arranged in sets of five with corresponding dilation rates of 1, 2, 4, 8 and 16. For this task we use the residual blocks with ReLUs (Fig. 3 Left). The number of hidden units d is 800. The size of the kernel in the source network is 3, whereas the size of the masked kernel in the target network is 3. For the optimization we use Adam with a learning rate of 0.0003.
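Collected in one place, the translation model's settings read as follows. This is only an illustrative configuration sketch: the dictionary layout and field names are ours, while the numbers come from the text above.

```python
# Illustrative configuration of the translation ByteNet described above.
bytenet_translation_config = {
    "encoder": {
        "residual_blocks": 30,
        "dilations": [1, 2, 4, 8, 16] * 6,  # six sets of five blocks
        "block_type": "relu",               # residual blocks with ReLUs (Fig. 3 Left)
        "kernel_size": 3,                   # source (unmasked) kernel
    },
    "decoder": {
        "residual_blocks": 30,
        "dilations": [1, 2, 4, 8, 16] * 6,
        "block_type": "relu",
        "kernel_size": 3,                   # masked target kernel
        "masked": True,
    },
    "hidden_units": 800,                    # d
    "optimizer": {"name": "adam", "learning_rate": 3e-4},
}
```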
Each sentence is padded with special characters to the nearest greater multiple of 50; 20% of further padding is applied to each source sentence as a part of dynamic unfolding (eq. 2).
1610.10099 | 30 | Each sentence is padded with special characters to the near- est greater multiple of 50; 20% of further padding is apNeural Machine Translation in Linear Time
Director Jon Favreau, who is currently working on Disney's forthcoming Jungle Book film, told the website Hollywood Reporter: "I think times are changing."
Regisseur Jon Favreau, der derzeit an Disneys bald erscheinenden Dschungelbuch-Film arbeitet, sagte gegenüber der Webseite Hollywood Reporter: "Ich glaube, die Zeiten ändern sich."
Regisseur Jon Favreau, der zur Zeit an Disneys kommendem Jungle Book Film arbeitet, hat der Website Hollywood Reporter gesagt: "Ich denke, die Zeiten ändern sich".
Matt Casaday, 25, a senior at Brigham Young University, says he had paid 42 cents on Amazon.com for a used copy of "Strategic Media Decisions: Understanding The Business End Of The Advertising Business."
Matt Casaday, 25, Abschlussstudent an der Brigham Young University, sagt, dass er auf Amazon.com 42 Cents ausgegeben hat für eine gebrauchte Ausgabe von "Strategic Media Decisions: Understanding The Business End Of The Advertising Business."
Matt Casaday, 25, ein Senior an der Brigham Young University, sagte, er habe 42 Cent auf Amazon.com für eine gebrauchte Kopie von "Strategic Media Decisions: Understanding The Business End Of The Advertising Business".
Table 5. Raw output translations generated from the ByteNet that highlight interesting reordering and transliteration phenomena. For each group, the first row is the English source, the second row is the ground truth German target, and the third row is the ByteNet translation.
Each pair of sentences is mapped to a bucket based on the pair of padded lengths for efficient batching during training. We use vanilla beam search according to the total likelihood of the generated candidate and accept only candidates which end in an end-of-sentence token. We use a beam of size 12. We do not use length normalization, nor do we keep score of which parts of the source sentence have been translated (Wu et al., 2016a).
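The batching and decoding rules above are simple enough to state directly in code. The sketch below is only one plausible reading of them (in particular, taking the 20% extra source padding on top of the already padded length is our assumption), and all names are illustrative:

```python
import math

BEAM_SIZE = 12

def padded_length(n_chars, multiple=50):
    """Pad to the nearest strictly greater multiple of `multiple`."""
    return (n_chars // multiple + 1) * multiple

def source_padded_length(n_chars, multiple=50, extra=0.20):
    """Source sentences receive 20% further padding for dynamic unfolding (our reading)."""
    return math.ceil(padded_length(n_chars, multiple) * (1.0 + extra))

def bucket_key(src, tgt):
    """Sentence pairs are bucketed by their pair of padded lengths."""
    return (source_padded_length(len(src)), padded_length(len(tgt)))

def candidate_score(token_logprobs):
    """Total log-likelihood of a beam candidate; no length normalization."""
    return sum(token_logprobs)

def accept(candidate_tokens, eos="</s>"):
    """Only candidates that end in the end-of-sentence token are accepted."""
    return bool(candidate_tokens) and candidate_tokens[-1] == eos
```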
Table 2 and Table 4 contain the results of the experiments. On NewsTest 2014 the ByteNet achieves the highest performance in character-level and subword-level neural machine translation, and compared to the word-level systems it is second only to the version of GNMT that uses word-pieces. On NewsTest 2015, to our knowledge, ByteNet achieves the best published results to date.
# 7. Conclusion
1610.10099 | 33 | # 7. Conclusion
We have introduced the ByteNet, a neural translation model that has linear running time, decouples translation from memorization and has short signal propagation paths for tokens in sequences. We have shown that the ByteNet decoder is a state-of-the-art character-level language model based on a convolutional neural network that outperforms recurrent neural language models. We have also shown that the ByteNet generalizes the RNN Enc-Dec architecture and achieves state-of-the-art results for character-to-character machine translation and excellent results in general, while maintaining linear running time complexity. We have revealed the latent structure learnt by the ByteNet and found it to mirror the expected alignment between the tokens in the sentences.