doi (string, 10 chars) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable ⌀) | journal_ref (string, 8–194 chars, nullable ⌀) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1611.01144 | 24 | We trained on a dataset consisting of 100 labeled examples (distributed evenly among each of the 10 classes) and 50,000 unlabeled examples, with dynamic binarization of the unlabeled examples for each minibatch. The discriminative model qφ(y|x) and inference model qφ(z|x, y) are each implemented as 3-layer convolutional neural networks with ReLU activation functions. The generative model pθ(x|y, z) is a 4-layer convolutional-transpose network with ReLU activations. Experimental details are provided in Appendix A.
Estimators were trained and evaluated against several values of α = {0.1, 0.2, 0.3, 0.8, 1.0} and the best unlabeled classification results for test sets were selected for each estimator and reported in Table 2. We used an annealing schedule of τ = max(0.5, exp(−3e−5 · t)), updated every 2000 steps. | 1611.01144#24 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
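The temperature schedule quoted in the chunk above (1611.01144#24), τ = max(0.5, exp(−3e−5 · t)) refreshed every 2000 steps, is simple enough to restate in code. The following is a minimal sketch under that reading of the schedule; the function and constant names are illustrative and are not taken from the paper's codebase.

```python
import math

MIN_TAU = 0.5          # floor on the softmax temperature
DECAY_RATE = 3e-5      # decay constant in tau = max(0.5, exp(-3e-5 * t))
UPDATE_EVERY = 2000    # the temperature is refreshed every 2000 training steps

def annealed_temperature(t: int) -> float:
    """Return the Gumbel-Softmax temperature at training step t."""
    # Re-evaluate the schedule only every UPDATE_EVERY steps, holding tau constant in between.
    t_eff = (t // UPDATE_EVERY) * UPDATE_EVERY
    return max(MIN_TAU, math.exp(-DECAY_RATE * t_eff))

if __name__ == "__main__":
    for step in (0, 2000, 20000, 100000):
        print(step, round(annealed_temperature(step), 4))
```

Holding τ fixed between refreshes matches the "updated every 2000 steps" wording; a smooth per-step decay with the same floor would behave almost identically.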
1611.01211 | 24 | Cart-Pole In this classic RL environment, an agent balances a pole atop a cart (Figure 1(b)). Qualitatively, the game exhibits four distinct catastrophe modes. The pole could fall down to the right or fall down to the left. Additionally, the cart could run off the right boundary of the screen or run off the left. Formally, at each time, the agent observes a four-dimensional state vector (x, v, θ, ω) consisting respectively of the cart position, cart velocity, pole angle, and the pole's angular velocity. At each time step, the agent chooses an action, applying a force of either −1 or +1. For every time step that the pole remains upright and the cart remains on the screen, the agent receives a reward of 1. If the pole falls, the episode terminates, giving a return of 0 from the penultimate state. In experiments, we use the implementation CartPole-v0 contained in the OpenAI Gym [6]. Like Adventure Seeker, this problem admits an analytic solution. A perfect policy should never drop the pole. But, as with Adventure Seeker, a DQN converges to a constant rate of catastrophes per turn. | 1611.01211#24 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
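Chunk 1611.01211#24 above describes four qualitative catastrophe modes for Cart-Pole and the observed state vector (x, v, θ, ω). A minimal sketch of labelling an observation with its catastrophe mode; the numeric thresholds are the standard CartPole-v0 termination bounds and are an assumption of this sketch, not values given in the text.

```python
import math

# Standard CartPole-v0 termination bounds (an assumption; the chunk gives no numbers).
X_LIMIT = 2.4                      # cart position beyond which the cart leaves the screen
THETA_LIMIT = 12 * math.pi / 180   # pole angle (radians) beyond which the pole has fallen

def catastrophe_mode(state):
    """Map a (x, v, theta, omega) observation to one of the four catastrophe modes, or None."""
    x, v, theta, omega = state
    if theta > THETA_LIMIT:
        return "pole fell right"
    if theta < -THETA_LIMIT:
        return "pole fell left"
    if x > X_LIMIT:
        return "cart off right edge"
    if x < -X_LIMIT:
        return "cart off left edge"
    return None  # still balanced and on screen

print(catastrophe_mode((0.1, 0.0, 0.25, 0.0)))   # pole fell right
print(catastrophe_mode((-3.0, -1.0, 0.0, 0.0)))  # cart off left edge
```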
1611.01224 | 24 | When using replay, we add to each thread a replay memory that is up to 50 000 frames in size. The total amount of memory used across all threads is thus similar in size to that of DQN (Mnih et al., 2015). For all Atari experiments, we use a single learning rate adopted from an earlier implementation of A3C without further tuning. We do not anneal the learning rates over the course of training as in Mnih et al. (2016). We otherwise adopt the same optimization procedure as in Mnih et al. (2016). Specifically, we adopt entropy regularization with weight 0.001, discount the rewards with γ = 0.99, and perform updates every 20 steps (k = 20 in the notation of Section 2). In all our experiments with experience replay, we use importance weight truncation with c = 10. We consider training ACER both with and without trust region updating as described in Section 3.3. When trust region updating is used, we use δ = 1 and α = 0.99 for all experiments. | 1611.01224#24 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
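Chunk 1611.01224#24 above lists the Atari training hyperparameters for ACER. A minimal sketch collecting them in one place; the dataclass and field names are illustrative and not taken from the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class AcerAtariConfig:
    """Hyperparameters quoted in chunk 1611.01224#24 (values not stated there are omitted)."""
    replay_memory_frames: int = 50_000    # per-thread replay memory size
    entropy_weight: float = 0.001         # entropy regularization weight
    discount: float = 0.99                # reward discount gamma
    update_every_steps: int = 20          # k = 20 in the paper's notation
    importance_truncation_c: float = 10.0 # importance weight truncation constant
    trust_region_delta: float = 1.0       # delta, when trust region updating is enabled
    trust_region_alpha: float = 0.99      # alpha for the average policy network

print(AcerAtariConfig())
```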
1611.01144 | 25 | in Table 2. We used an annealing schedule of τ = max(0.5, exp(−3e−5 · t)), updated every 2000 steps.
In Kingma et al. (2014), inference over the latent state is done by marginalizing out y and using the reparameterization trick for sampling from qφ(z|x, y). However, this approach has a computational cost that scales linearly with the number of classes. Gumbel-Softmax allows us to backpropagate directly through single samples from the joint qφ(y, z|x), achieving drastic speedups in training without compromising generative or classification performance (Table 2, Figure 5).
Table 2: Marginalizing over y and single-sample variational inference perform equally well when applied to image classification on the binarized MNIST dataset (Larochelle & Murray, 2011). We report variational lower bounds and image classification accuracy for unlabeled data in the test set.
Marginalization: 92.6%, Gumbel: 92.4%, ST Gumbel-Softmax: 93.6% (test-set classification accuracy; Table 2) | 1611.01144#25 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
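Chunk 1611.01144#25 above relies on backpropagating through single relaxed samples from qφ(y, z|x). A minimal NumPy sketch of the Gumbel-Softmax sampling step itself; gradient flow would come from an autodiff framework, and the function names here are illustrative.

```python
import numpy as np

def sample_gumbel_softmax(logits, tau, rng=np.random.default_rng()):
    """Draw one relaxed categorical sample y with temperature tau.

    y_i = softmax((logits_i + g_i) / tau), where g_i are i.i.d. Gumbel(0, 1) noise.
    """
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))              # Gumbel(0, 1) samples
    z = (logits + gumbel) / tau
    z = z - z.max()                           # stabilize the softmax
    y = np.exp(z)
    return y / y.sum()

logits = np.log(np.array([0.1, 0.2, 0.7]))    # class probabilities -> logits
print(sample_gumbel_softmax(logits, tau=1.0)) # soft sample
print(sample_gumbel_softmax(logits, tau=0.1)) # nearly one-hot as tau -> 0
```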
1611.01211 | 25 | Atari games In addition to these pathological cases, we address Freeway, Asteroids, and Seaquest, games from the Arcade Learning Environment. In Freeway, the agent controls a chicken with a goal of crossing the road while dodging traffic. The chicken loses a life and starts from the original location if hit by a car. Points are only rewarded for successfully crossing the road. In Asteroids, the agent pilots a ship and gains points from shooting the asteroids. She must avoid colliding with asteroids, which cost her lives. In Seaquest, a player swims under water. Periodically, as the oxygen gets low, she must rise to the surface for oxygen. Additionally, fish swim across the screen. The player gains points each time she shoots a fish. Colliding | 1611.01211#25 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 25 | To compare different agents, we adopt as our metric the median of the human normalized score over all 57 games. The normalization is calculated such that, for each game, human scores map to 1 and random scores map to 0. The normalized score for a given game at time t is computed as the average normalized score over the past 1 million consecutive frames encountered until time t. For each agent, we plot its cumulative maximum median score over time. The result is summarized in Figure 1.
The four colors in Figure 1 correspond to four replay ratios (0, 1, 4 and 8), with a ratio of 4 meaning that we use the off-policy component of ACER 4 times after using the on-policy component (A3C). That is, a replay ratio of 0 means that we are using A3C. The solid and dashed lines represent ACER with and without trust region updating respectively. The gray and black curves are the original DQN agent (Mnih et al., 2015) and the Prioritized Replay agent of Schaul et al. (2016), respectively. | 1611.01224#25 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
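Chunk 1611.01224#25 above defines the evaluation metric as the median human-normalized score. A minimal sketch of the normalization and median, assuming per-game scalar scores; the per-game averaging over the previous 1 million frames and the cumulative maximum over time are omitted, and the example numbers are made up.

```python
import numpy as np

def human_normalized_score(agent_score, random_score, human_score):
    """Normalize so that a random agent maps to 0 and a human to 1 on each game."""
    return (agent_score - random_score) / (human_score - random_score)

def median_human_normalized(agent, random_baseline, human):
    """Median of the per-game normalized scores (dicts keyed by game name)."""
    scores = [human_normalized_score(agent[g], random_baseline[g], human[g]) for g in agent]
    return float(np.median(scores))

# Toy illustration with made-up numbers for three games.
agent = {"pong": 20.0, "breakout": 300.0, "seaquest": 2000.0}
random_baseline = {"pong": -21.0, "breakout": 1.0, "seaquest": 68.0}
human = {"pong": 9.3, "breakout": 31.8, "seaquest": 20182.0}
print(median_human_normalized(agent, random_baseline, human))
```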
1611.01144 | 26 | Marginalization: 92.6%, Gumbel: 92.4%, ST Gumbel-Softmax: 93.6% (test-set classification accuracy; Table 2)
In Figure 5, we show how Gumbel-Softmax versus marginalization scales with the number of categorical classes. For these experiments, we use MNIST images with randomly generated labels. Training the model with the Gumbel-Softmax estimator is 2× as fast for 10 classes and 9.9× as fast for 100 classes.
Figure 5: Gumbel-Softmax allows us to backpropagate through samples from the posterior qφ(y|x), providing a scalable method for semi-supervised learning for tasks with a large number of classes. (a) Comparison of training speed (steps/sec) between Gumbel-Softmax and marginalization on a semi-supervised VAE. Evaluations were performed on a GTX Titan X GPU. (b) Visualization of MNIST analogies generated by varying style variable z across each row and class variable y across each column.
# 5 DISCUSSION | 1611.01144#26 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
Figure 1: In experiments, we consider two toy environments (a,b) and the Atari games Seaquest (c), Asteroids (d), and Freeway (e)
with a fish or running out of oxygen result in death. In all three games, the agent has 3 lives, and the final death is a terminal state. We label each loss of a life as a catastrophe state.
# 5 Experiments
First, on the toy examples, we evaluate standard DQNs and intrinsic fear DQNs using multilayer perceptrons (MLPs) with a single hidden layer and 128 hidden nodes. We train all MLPs by stochastic gradient descent using the Adam optimizer [16].
In Adventure Seeker, an agent can escape from danger with only a few time steps of notice, so we set the fear radius kr to 5. We phase in the fear factor quickly, reaching full strength in just 1000 steps. On this
[Figure 2 panels: (a) Seaquest, (b) Asteroids, (c) Freeway, (d) Seaquest, (e) Asteroids, (f) Freeway] | 1611.01211#26 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
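Chunk 1611.01211#26 above and the paper summary describe penalizing the Q-learning objective with a learned fear model. A minimal sketch of such a penalized bootstrap target, assuming a fear factor λ and a fear model F that outputs the probability of imminent catastrophe; the exact form used by the authors may differ in details.

```python
import numpy as np

def intrinsic_fear_target(reward, next_state, done, q_target_net, fear_model,
                          gamma=0.99, fear_factor=0.5):
    """Q-learning target reduced by fear_factor * F(next_state).

    F is a learned classifier estimating the probability that a catastrophe is imminent;
    the usual bootstrapped target is otherwise unchanged.
    """
    fear = fear_model(next_state)
    bootstrap = 0.0 if done else gamma * np.max(q_target_net(next_state))
    return reward + bootstrap - fear_factor * fear

# Toy stand-ins for the two networks, just to make the sketch runnable.
fake_q = lambda s: np.array([1.0, 2.0, 0.5])
fake_fear = lambda s: 0.8
print(intrinsic_fear_target(reward=1.0, next_state=None, done=False,
                            q_target_net=fake_q, fear_model=fake_fear))
```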
1611.01224 | 26 | As shown on the left panel of Figure 1, replay significantly increases data efficiency. We observe that when using the trust region optimizer, the average reward as a function of the number of environmental steps increases with the ratio of replay. This increase has diminishing returns, but with enough replay, ACER can match the performance of the best DQN agents. Moreover, it is clear that the off-policy actor critics (ACER) are much more sample efficient than their on-policy counterpart (A3C).
The right panel of Figure 1 shows that ACER agents perform similarly to A3C when measured by wall clock time. Thus, in this case, it is possible to achieve better data-efficiency without necessarily compromising on computation time. In particular, ACER with a replay ratio of 4 is an appealing alternative to either the prioritized DQN agent or A3C.
# 5 CONTINUOUS ACTOR CRITIC WITH EXPERIENCE REPLAY
Retrace requires estimates of both Q and V, but we cannot easily integrate over Q to derive V in continuous action spaces. In this section, we propose a solution to this problem in the form of a novel representation for RL, as well as modifications necessary for trust region updating.
5.1 POLICY EVALUATION | 1611.01224#26 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 27 | # 5 DISCUSSION
The primary contribution of this work is the reparameterizable Gumbel-Softmax distribution, whose corresponding estimator affords low-variance path derivative gradients for the categorical distribution. We show that Gumbel-Softmax and Straight-Through Gumbel-Softmax are effective on structured output prediction and variational autoencoder tasks, outperforming existing stochastic gradient estimators for both Bernoulli and categorical latent variables. Finally, Gumbel-Softmax enables dramatic speedups in inference over discrete latent variables.
# ACKNOWLEDGMENTS
We sincerely thank Luke Vilnis, Vincent Vanhoucke, Luke Metz, David Ha, Laurent Dinh, George Tucker, and Subhaneil Lahiri for helpful discussions and feedback.
# REFERENCES
Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016. | 1611.01144#27 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 27 | Figure 2: Catastrophes (first row) and reward/episode (second row) for DQNs and Intrinsic Fear. On Adventure Seeker, all Intrinsic Fear models cease to "die" within 14 runs, giving unbounded (unplottable) reward thereafter. On Seaquest, the IF model achieves a similar catastrophe rate but significantly higher total reward. On Asteroids, the IF model outperforms DQN. For Freeway, a randomly exploring DQN (under our time limit) never gets reward, but the IF model learns successfully.
problem we set the fear factor λ to 40. For Cart-Pole, we set a wider fear radius of kr = 20. We initially tried training this model with a short fear radius but made the following observation: on some runs, IF-DQN would survive for millions of experiences, while on other runs, it might experience many catastrophes. Manually examining fear model output on successful vs. unsuccessful runs, we noticed that on the bad runs, the fear model outputs non-zero probability of danger for precisely the 5 moves before a catastrophe. In Cart-Pole, by that time, it is too late to correct course. On the more successful runs, the fear model often outputs predictions in the range 0.1–0.5. We suspect that the gradation between mildly dangerous states and those with certain danger provides a richer reward signal to the DQN. | 1611.01211#27 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
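Chunk 1611.01211#27 above discusses the fear radius k_r and the fear model's graded outputs. A minimal sketch of how danger/safe training examples for such a fear model could be constructed from one episode, assuming the labelling rule implied by the fear radius; this is an illustration, not the authors' exact data pipeline.

```python
def label_trajectory(states, ended_in_catastrophe, fear_radius):
    """Split one episode's states into 'danger' and 'safe' examples for the fear model.

    If the episode ended in a catastrophe, the last fear_radius states are treated as
    dangerous and everything earlier as safe; episodes without a catastrophe contribute
    only safe states.
    """
    if ended_in_catastrophe:
        danger = states[-fear_radius:]
        safe = states[:-fear_radius]
    else:
        danger = []
        safe = list(states)
    return danger, safe

states = [f"s{i}" for i in range(8)]
print(label_trajectory(states, ended_in_catastrophe=True, fear_radius=5))
```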
1611.01224 | 27 | 5.1 POLICY EVALUATION
Retrace provides a target for learning Qθv, but not for learning Vθv. We could use importance sampling to compute Vθv given Qθv, but this estimator has high variance. We propose a new architecture which we call Stochastic Dueling Networks (SDNs), inspired by the Dueling networks of Wang et al. (2016), which is designed to estimate both V^π and Q^π off-policy while maintaining consistency between the two estimates. At each time step, an SDN outputs a stochastic estimate Q̃θv of Q^π and a deterministic estimate Vθv of V^π, such that
Q̃θv(xt, at) ∼ Vθv(xt) + Aθv(xt, at) − (1/n) Σ_{i=1}^{n} Aθv(xt, u_i), with u_i ∼ πθ(·|xt), (13) | 1611.01224#27 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 28 | InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016.
J. Chung, S. Ahn, and Y. Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.
P. W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.
A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwińska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014.
K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013. | 1611.01144#28 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 28 | On both the Adventure Seeker and Cart-Pole environments, DQNs augmented by intrinsic fear far outperform their otherwise identical counterparts. We also compared IF to some traditional approaches for mitigating catastrophic forgetting. For example, we tried a memory-based method in which we preferentially sample the catastrophic states for updating the model, but it did not improve over the DQN. It seems that the notion of a danger zone is necessary here.
For Seaquest, Asteroids, and Freeway, we use a fear radius of 5 and a fear factor of 0.5. For all Atari games, the IF models outperform their DQN counterparts. Interestingly, while the IF models achieve higher reward on all games, on Seaquest the IF-DQNs have similar catastrophe rates (Figure 2). Perhaps the IF-DQN enters a region of policy space with a strong incentive to exchange catastrophes for higher reward. This result suggests an interplay between the various reward signals that warrants further exploration. For Asteroids and Freeway, the improvements are more dramatic. Over just a few thousand episodes of Freeway, a randomly exploring DQN achieves zero reward. However, the reward shaping of intrinsic fear leads to rapid improvement.
# 6 Related work | 1611.01211#28 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 28 | where n is a parameter; see Figure 2. The two estimates are consistent in the sense that E_{a∼π(·|xt)} E_{u_{1:n}∼π(·|xt)} [Q̃θv(xt, a)] = Vθv(xt). Furthermore, we can learn about V^π by learning Q̃θv. To see this, assume we have learned Q̃θv perfectly such that E_{u_{1:n}∼π(·|xt)} [Q̃θv(xt, a)] = Q^π(xt, a); then Vθv(xt) = E_{a∼π(·|xt)} [ E_{u_{1:n}∼π(·|xt)} [Q̃θv(xt, a)] ] = E_{a∼π(·|xt)} [Q^π(xt, a)] = V^π(xt). Therefore, a target on Q̃θv(xt, a) also provides an error signal for updating Vθv.
Figure 2: A schematic of the Stochastic Dueling Network. In the drawing, [u1, · · · , un] are assumed to be samples from πθ(·|xt). This schematic illustrates the concept of SDNs but does not reflect the real sizes of the networks used.
In addition to SDNs, however, we also construct the following novel target for estimating V^π: | 1611.01224#28 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
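Chunks 1611.01224#27–28 above define the Stochastic Dueling Network estimate Q̃(x, a) = V(x) + A(x, a) − (1/n) Σ_i A(x, u_i) with u_i ∼ π(·|x). A minimal sketch with scalar stand-ins for the value and advantage heads; the actual networks and their sizes are not reproduced here.

```python
import numpy as np

def sdn_q_estimate(v_value, advantage_fn, action, policy_sampler, n=5):
    """Stochastic Dueling Network estimate of Q(x, a), as in Eq. (13) quoted above.

    Q_tilde(x, a) = V(x) + A(x, a) - (1/n) * sum_i A(x, u_i), with u_i ~ pi(.|x).
    `advantage_fn` and `policy_sampler` stand in for the network heads and the policy.
    """
    sampled = [advantage_fn(policy_sampler()) for _ in range(n)]
    return v_value + advantage_fn(action) - np.mean(sampled)

# Toy illustration: a 1-D action space with a quadratic "advantage" head.
rng = np.random.default_rng(0)
adv = lambda a: -(a - 0.3) ** 2
print(sdn_q_estimate(v_value=1.0, advantage_fn=adv, action=0.1,
                     policy_sampler=lambda: rng.normal(0.3, 0.2), n=100))
```

Averaging the sampled advantages is what keeps the estimate consistent: its expectation over a ∼ π and u_{1:n} ∼ π equals V(x).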
1611.01144 | 29 | S. Gu, S. Levine, I. Sutskever, and A. Mnih. MuProp: Unbiased Backpropagation for Stochastic Neural Networks. ICLR, 2016.
E. J. Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Office, 1954.
D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.
C. J. Maddison, D. Tarlow, and T. Minka. A* sampling. In Advances in Neural Information Processing Systems, pp. 3086–3094, 2014.
C. J. Maddison, A. Mnih, and Y. Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. ArXiv e-prints, November 2016. | 1611.01144#29 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 29 | # 6 Related work
The paper studies safety in RL, intrinsically motivated RL, and the stability of Q-learning with function approximation under distributional shift. Our work also has some connection to reward shaping. We attempt to highlight the most relevant papers here. Several papers address safety in RL. García and Fernández [2015] provide a thorough review on the topic, identifying two main classes of methods: those that perturb the objective function and those that use external knowledge to improve the safety of exploration. | 1611.01211#29 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 29 | In addition to SDNs, however, we also construct the following novel target for estimating V^π:
V_target(xt) = min{1, π(at|xt)/μ(at|xt)} (Q_ret(xt, at) − Qθv(xt, at)) + Vθv(xt). (14)
The above target is also derived via the truncation and bias correction trick; for more details, see Appendix D.
Finally, when estimating Q_ret in continuous domains, we implement a slightly different formulation of the truncated importance weights, ρ̄_t = min{1, (π(at|xt)/μ(at|xt))^(1/d)}, where d is the dimensionality of the action space. Although not essential, we have found this formulation to lead to faster learning.
5.2 TRUST REGION UPDATING
To adopt the trust region updating scheme (Section 3.3) in the continuous control domain, one simply has to choose a distribution f and a gradient specification ĝ_acer_t suitable for continuous action spaces.
For the distribution f, we choose Gaussian distributions with fixed diagonal covariance and mean φθ(x). To derive ĝ_acer_t for the stochastic dueling network, we proceed as before, but with respect to φ: | 1611.01224#29 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
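Chunk 1611.01224#29 above gives the target for V (Eq. 14) and the 1/d-power truncated importance weight used for Q_ret in continuous domains. A minimal sketch of both quantities as scalar functions; for continuous actions the "probabilities" should be read as densities, and the helper names are illustrative.

```python
def v_target(pi_prob, mu_prob, q_ret, q_est, v_est):
    """Target for V (Eq. 14 above): a truncated-importance-weighted correction around V."""
    weight = min(1.0, pi_prob / mu_prob)
    return weight * (q_ret - q_est) + v_est

def truncated_weight_continuous(pi_prob, mu_prob, action_dim):
    """Truncated weight used when estimating Q_ret in continuous domains (1/d power)."""
    return min(1.0, (pi_prob / mu_prob) ** (1.0 / action_dim))

# Example: a mildly off-policy sample, and the same ratio in a 6-dimensional action space.
print(v_target(pi_prob=0.02, mu_prob=0.01, q_ret=1.5, q_est=1.0, v_est=0.8))
print(truncated_weight_continuous(pi_prob=0.02, mu_prob=0.01, action_dim=6))
```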
1611.01144 | 30 | A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. ICML, 31, 2014.
A. Mnih and D. J. Rezende. Variational inference for monte carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
J. Paisley, D. Blei, and M. Jordan. Variational Bayesian Inference with Stochastic Search. ArXiv e-prints, June 2012.
Gabriel Pereyra, Geoffrey Hinton, George Tucker, and Lukasz Kaiser. Regularizing neural networks by penalizing confident output distributions. 2016.
J. W Rae, J. J Hunt, T. Harley, I. Danihelka, A. Senior, G. Wayne, A. Graves, and T. P Lillicrap. Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes. ArXiv e-prints, October 2016.
T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014. | 1611.01144#30 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 30 | While a typical reinforcement learner optimizes expected return, some papers suggest that a safely acting agent should also minimize risk. Hans et al. [2008] defines a fatality as any return below some threshold τ. They propose a solution comprised of a safety function, which identifies unsafe states, and a backup model, which navigates away from those states. Their work, which only addresses the tabular setting, suggests that an agent should minimize the probability of fatality instead of maximizing the expected return. Heger [1994] suggests an alternative Q-learning objective concerned with the minimum (vs. expected) return. Other papers suggest modifying the objective to penalize policies with high-variance returns [10, 8]. Maximizing expected returns while minimizing their variance is a classic problem in finance, where a common objective is the ratio of expected return to its standard deviation [28]. Moreover, Azizzadenesheli et al. [2018] suggest learning the variance over the returns in order to make safe decisions at each decision step. Moldovan and Abbeel [2012] give a definition of safety based on ergodicity. They consider a fatality to be a state from which one cannot return to the start state. | 1611.01211#30 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 30 | ĝ_acer_t = E_xt [ E_at [ ρ̄_t ∇_φθ(xt) log f(at | φθ(xt)) (Q_opc(xt, at) − Vθv(xt)) ] + E_{a∼π} [ ((ρ_t(a) − c)/ρ_t(a))_+ (Q̃θv(xt, a) − Vθv(xt)) ∇_φθ(xt) log f(a | φθ(xt)) ] ] (15)
In the above definition, we are using Q_opc instead of Q_ret. Here, Q_opc(xt, at) is the same as Retrace with the exception that the truncated importance ratio is replaced with 1 (Harutyunyan et al., 2016). Please refer to Appendix B for an expanded discussion of this design choice. Given an observation xt, we can sample a′_t ∼ πθ(·|xt) to obtain the following Monte Carlo approximation:
ĝ_acer_t ≈ ρ̄_t ∇_φθ(xt) log f(at | φθ(xt)) (Q_opc(xt, at) − Vθv(xt)) + ((ρ_t(a′_t) − c)/ρ_t(a′_t))_+ (Q̃θv(xt, a′_t) − Vθv(xt)) ∇_φθ(xt) log f(a′_t | φθ(xt)), with a′_t ∼ πθ(·|xt). (16)
Given f and ĝ_acer_t, we apply the same steps as detailed in Section 3.3 to complete the update. | 1611.01224#30 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
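Chunk 1611.01224#30 above gives the Monte Carlo approximation ĝ_acer_t (Eq. 16), whose two terms weight the score functions of the replayed action a_t and a freshly sampled action a′_t. A minimal sketch computing just those two scalar weights, assuming the truncation ρ̄_t = min(c, ρ_t) as in the discrete-action case (the continuous-domain 1/d-power variant in the text applies to Q_ret estimation); the gradients themselves would come from an autodiff framework.

```python
def acer_gradient_weights(rho_replayed, rho_sampled, c, q_opc, q_tilde, v):
    """Scalar weights on the two score-function terms of Eq. (16) above.

    The first term truncates the importance ratio of the replayed action a_t at c;
    the second (bias-correction) term activates only when the ratio of the freshly
    sampled action a'_t exceeds c.
    """
    truncated = min(c, rho_replayed) * (q_opc - v)
    correction = max(0.0, (rho_sampled - c) / rho_sampled) * (q_tilde - v)
    return truncated, correction

print(acer_gradient_weights(rho_replayed=15.0, rho_sampled=12.0, c=10.0,
                            q_opc=2.0, q_tilde=1.8, v=1.0))
```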
1611.01144 | 31 | 9
Published as a conference paper at ICLR 2017
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014a.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014b.
J. T. Rolfe. Discrete Variational Autoencoders. ArXiv e-prints, September 2016.
R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pp. 872–879. ACM, 2008.
J. Schulman, N. Heess, T. Weber, and P. Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3528–3536, 2015. | 1611.01144#31 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 31 | and Abbeel [2012] give a definition of safety based on ergodicity. They consider a fatality to be a state from which one cannot return to the start state. Shalev-Shwartz et al. [2016] theoretically analyze how strong a penalty should be to discourage accidents. They also consider hard constraints to ensure safety. None of the above works address the case where distributional shift dooms an agent to perpetually revisit known catastrophic failure modes. Other papers incorporate external knowledge into the exploration process. Typically, this requires access to an oracle or extensive prior knowledge of the environment. In the extreme case, some papers suggest confining the policy search to a known subset of safe policies. For reasonably complex environments or classes of policies, this seems infeasible. | 1611.01211#31 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 31 | Given f and ĝ_acer_t, we apply the same steps as detailed in Section 3.3 to complete the update.
The precise pseudo-code of the ACER algorithm for continuous action spaces is presented in Appendix A.
[Figure 3 panels: Walker2d (9-DoF/6-dim. actions), Fish (13-DoF/5-dim. actions), Cartpole (2-DoF/1-dim. actions), Humanoid (27-DoF/21-dim. actions), Reacher3 (3-DoF/3-dim. actions), Cheetah (9-DoF/6-dim. actions); axes: million steps vs. episode rewards] | 1611.01224#31 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 32 | C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044, 2015.
A SEMI-SUPERVISED CLASSIFICATION MODEL
Figures 6 and 7 describe the architecture used in our experiments for semi-supervised classification (Section 4.3).
[Figure 6 legend: deterministic, differentiable nodes vs. stochastic nodes] | 1611.01144#32 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 32 | The potential oscillatory or divergent behavior of Q-learners with function approximation has been previously identified [5, 2, 11]. Outside of RL, the problem of covariate shift has been extensively studied [30]. Murata and Ozawa [2005] address the problem of catastrophic forgetting owing to distributional shift in RL with function approximation, proposing a memory-based solution. Many papers address intrinsic rewards, which are internally assigned, vs. the standard (extrinsic) reward. Typically, intrinsic rewards are used to encourage exploration [26, 4] and to acquire a modular set of skills [7]. Some papers refer to the intrinsic reward for discovery as curiosity. Like classic work on intrinsic motivation, our methods perturb the reward function. But instead of assigning bonuses to encourage discovery of novel transitions, we assign penalties to discourage catastrophic transitions.
Key differences In this paper, we undertake a novel treatment of safe reinforcement learning. While the literature offers several notions of safety in reinforcement learning, we see the following problem: Existing safety research that perturbs the reward function requires little foreknowledge, but fundamentally changes the objective globally. On the other hand, processes relying on expert knowledge may presume an unreasonable level of foreknowledge. Moreover, little of the prior work on safe reinforcement learning, to the best of our knowledge, specifically addresses the problem of catastrophic forgetting. This paper proposes a new class of algorithms for avoiding catastrophic states and a theoretical analysis supporting its robustness. | 1611.01211#32 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 32 | Figure 3: [TOP] Screen shots of the continuous control tasks. [BOTTOM] Performance of different methods on these tasks. ACER outperforms all other methods and shows clear gains for the higher-dimensionality tasks (humanoid, cheetah, walker and fish). The proposed trust region method by itself improves the two baselines (truncated importance sampling and A3C) significantly.
# 6 RESULTS ON MUJOCO
We evaluate our algorithms on 6 continuous control tasks, all of which are simulated using the MuJoCo physics engine (Todorov et al., 2012). For descriptions of the tasks, please refer to Appendix E.1. Briefly, the tasks with action dimensionality in brackets are: cartpole (1D), reacher (3D), cheetah (6D), fish (5D), walker (6D) and humanoid (21D). These tasks are illustrated in Figure 3. | 1611.01224#32 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
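Chunk 1611.01224#32 above lists the six MuJoCo tasks with their action dimensionalities, which is also the d used by the 1/d-power truncation earlier. Restated as a small lookup for convenience; the task names are as written in the text and nothing beyond that listing is assumed.

```python
# Action dimensionality of the six MuJoCo control tasks listed in the chunk above.
MUJOCO_ACTION_DIMS = {
    "cartpole": 1,
    "reacher": 3,
    "fish": 5,
    "cheetah": 6,
    "walker": 6,
    "humanoid": 21,
}

for task, action_dim in sorted(MUJOCO_ACTION_DIMS.items(), key=lambda kv: kv[1]):
    print(f"{task}: {action_dim}-dimensional actions")
```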
1611.01144 | 33 | [Figure 6 legend: deterministic, differentiable nodes vs. stochastic nodes]
Figure 6: Semi-supervised generative model proposed by Kingma et al. (2014). (a) Generative model pθ(x|y, z) synthesizes images from latent Gaussian "style" variable z and categorical class variable y. (b) Inference model qφ(y, z|x) samples latent state y, z given x. Gaussian z can be differentiated with respect to its parameters because it is reparameterizable. In previous work, when y is not observed, training the VAE objective requires marginalizing over all values of y. (c) Gumbel-Softmax reparameterizes y so that backpropagation is also possible through y without encountering stochastic nodes.
# B DERIVING THE DENSITY OF THE GUMBEL-SOFTMAX DISTRIBUTION
Here we derive the probability density function of the Gumbel-Softmax distribution with probabilities π1, ..., πk and temperature τ. We first define the logits xi = log πi, and Gumbel samples | 1611.01144#33 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 33 |
# 7 Conclusions
Our experiments demonstrate that DQNs are susceptible to periodically repeating mistakes, however bad, raising questions about their real-world utility when harm can come of actions. While it is easy to visualize these problems on toy examples, similar dynamics are embedded in more complex domains. Consider a domestic robot acting as a barber. The robot might receive positive feedback for giving a closer shave. This reward encourages closer contact at a steeper angle. Of course, the shape of this reward function belies the catastrophe lurking just past the optimal shave. Similar dynamics might be imagined in a vehicle that is rewarded for traveling faster but risks an accident at excessive speed. Our results with the intrinsic fear model suggest that with only a small amount of prior knowledge (the ability to recognize catastrophe states after the fact), we can simultaneously accelerate learning and avoid catastrophic states. This work is a step towards combating DRL's tendency to revisit catastrophic states due to catastrophic forgetting.
# References
[1] Kamyar Azizzadenesheli, Emma Brunskill, and Animashree Anandkumar. Efficient exploration through bayesian deep q-networks. arXiv preprint arXiv:1802.04412, 2018.
[2] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. 1995. | 1611.01211#33 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 33 | To benchmark ACER for continuous control, we compare it to its on-policy counterpart both with and without trust region updating. We refer to these two baselines as A3C and Trust-A3C. Additionally, we also compare to a baseline with replay where we truncate the importance weights over trajectories as in (Wawrzy´nski, 2009). For a detailed description of this baseline, please refer to Appendix E. Again, we run this baseline both with and without trust region updating, and refer to these choices as Trust-TIS and TIS respectively. Last but not least, we refer to our proposed approach with SDN and trust region updating as simply ACER. All ï¬ve setups are implemented in the asynchronous A3C framework. | 1611.01224#33 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 34 |
(Figure 7 diagram: (a) qφ(y|x): three 5x5 conv layers, stride 2, N = 32/64/128, ReLU, then FC; (b) qφ(z|x, y): the same conv stack applied to [x, y], then FC; (c) pθ(x|y, z): FC followed by four 3x3 conv-transpose layers, stride 2, N = 128/64/32/32, ReLU.) | 1611.01144#34 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 34 | [2] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. 1995.
[3] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 2013.
[4] Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In NIPS, 2016.
[5] Justin Boyan and Andrew W Moore. Generalization in reinforcement learning: Safely approximating the value function. In NIPS, 1995.
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI gym, 2016. arxiv.org/abs/1606.01540.
[7] Nuttapong Chentanez, Andrew G Barto, and Satinder P Singh. Intrinsically motivated reinforcement learning. In NIPS, 2004.
[8] Yinlam Chow, Aviv Tamar, Shie Mannor, and Marco Pavone. Risk-sensitive and robust decision-making: A CVaR optimization approach. In NIPS, 2015. | 1611.01211#34 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 34 | All the aforementioned setups share the same network architecture that computes the policy and state values. We maintain an additional small network that computes the stochastic A values in the case of ACER. We use n = 5 (using the notation in Equation (13)) in all SDNs. Instead of mixing on-policy and replay learning as done in the Atari domain, ACER for continuous actions is entirely off-policy, with experiences generated from the simulator (4 times on average). When using replay, we add to each thread a replay memory that is 5, 000 frames in size and perform updates every 50 steps (k = 50 in the notation of Section 2). The rate of the soft updating (α as in Section 3.3) is set to 0.995 in all setups involving trust region updating. The truncation threshold c is set to 5 for ACER.
We use diagonal Gaussian policies with fixed diagonal covariances where the diagonal standard deviation is set to 0.3. For all setups, we sample the learning rates log-uniformly in the range [10^-4, 10^-3.3]. For setups involving trust region updating, we also sample δ uniformly in the range [0.1, 2]. With all setups, we use 30 sampled hyper-parameter settings. | 1611.01224#34 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 35 |
Figure 7: Network architecture for (a) classification qφ(y|x), (b) inference qφ(z|x, y), and (c) generative pθ(x|y, z) models. The outputs of these networks parameterize Categorical, Gaussian, and Bernoulli distributions which we sample from.
g_1, ..., g_k, where g_i ∼ Gumbel(0, 1). A sample from the Gumbel-Softmax can then be computed as:
y_i = \frac{\exp((x_i + g_i)/\tau)}{\sum_{j=1}^{k} \exp((x_j + g_j)/\tau)} \qquad \text{for } i = 1, ..., k \qquad (12)
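As a rough illustration, here is a minimal NumPy sketch of the sampling rule in Eq. 12 (the function name and implementation are ours, not the paper's code):

```python
import numpy as np

def sample_gumbel_softmax(logits, tau, rng=None):
    """One Gumbel-Softmax sample y from logits x_i = log pi_i, as in Eq. 12."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(low=1e-12, high=1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))          # g_i ~ Gumbel(0, 1)
    z = (np.asarray(logits) + g) / tau
    z = z - z.max()                  # softmax is shift-invariant; improves numerical stability
    y = np.exp(z)
    return y / y.sum()

y = sample_gumbel_softmax(np.log([0.2, 0.3, 0.5]), tau=0.5)
print(y, y.sum())                    # a point on the simplex; approaches one-hot as tau -> 0
```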
B.1 CENTERED GUMBEL DENSITY
The mapping from the Gumbel samples g to the Gumbel-Softmax sample y is not invertible as the normalization of the softmax operation removes one degree of freedom. To compensate for this, we define an equivalent sampling process that subtracts off the last element, (x_k + g_k)/\tau, before the softmax:
y_i = \frac{\exp((x_i + g_i - (x_k + g_k))/\tau)}{\sum_{j=1}^{k} \exp((x_j + g_j - (x_k + g_k))/\tau)} \qquad \text{for } i = 1, ..., k \qquad (13) | 1611.01144#35 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 35 | [9] Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. Policy networks with two-stage training for dialogue systems. In SIGDIAL, 2016.
[10] Javier Garcıa and Fernando Fernández. A comprehensive survey on safe reinforcement learning. JMLR, 2015.
[11] Geoffrey J Gordon. Chattering in SARSA(λ). Technical report, CMU, 1996.
[12] Steve Hanneke. The optimal sample complexity of PAC learning. JMLR, 2016.
[13] Alexander Hans, Daniel Schneegaß, Anton Maximilian Schäfer, and Steffen Udluft. Safe exploration for reinforcement learning. In ESANN, 2008.
[14] Matthias Heger. Consideration of risk in reinforcement learning. In Machine Learning, 1994.
[15] Nan Jiang, Alex Kulesza, Satinder Singh, and Richard Lewis. The dependence of effective planning horizon on model accuracy. In International Conference on Autonomous Agents and Multiagent Systems, 2015.
[16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. JMLR, 2016. | 1611.01211#35 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
The empirical results for all continuous control tasks are shown in Figure 3, where we show the mean and standard deviation of the best 5 out of 30 hyper-parameter settings over which we searched. For sensitivity analyses with respect to the hyper-parameters, please refer to Figures 5 and 6 in the Appendix.
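For concreteness, a small sketch (ours) of this kind of random search, using the ranges stated for the continuous control setups above; `train_and_evaluate` is a placeholder so the sketch runs:

```python
import numpy as np

def train_and_evaluate(setting):
    # Placeholder for a full training run; returns a fake score so the sketch executes.
    return -abs(np.log10(setting["lr"]) + 3.65) + 0.01 * setting["delta"]

rng = np.random.default_rng(0)
# Learning rate log-uniform on [10^-4, 10^-3.3]; delta uniform on [0.1, 2].
settings = [{"lr": 10.0 ** rng.uniform(-4.0, -3.3), "delta": rng.uniform(0.1, 2.0)}
            for _ in range(30)]

returns = np.array([train_and_evaluate(s) for s in settings])
best5 = np.sort(returns)[-5:]
print(f"best-5 mean {best5.mean():.3f} +/- {best5.std():.3f}")
```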
In continuous control, ACER outperforms the A3C and truncated importance sampling baselines by a very significant margin.
Here, we also find that the proposed trust region optimization method can result in huge improvements over the baselines. The high-dimensional continuous action policies are much harder to optimize than the small discrete action policies in Atari, and hence we observe much higher gains for trust region optimization in the continuous control domains. In spite of the improvements brought in by trust region optimization, ACER still outperforms all other methods, especially in higher dimensions.
# 6.1 ABLATIONS | 1611.01224#35 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 36 | To derive the density of this equivalent sampling process, we first derive the density for the "centered" multivariate Gumbel distribution corresponding to:
u_i = x_i + g_i - (x_k + g_k) \quad \text{for } i = 1, ..., k - 1 \qquad (14)
where g_i ∼ Gumbel(0, 1). Note the probability density of a Gumbel distribution with scale parameter β = 1 and mean µ at z is: f(z, µ) = e^{µ - z - e^{µ - z}}. We can now compute the density of this distribution by marginalizing out the last Gumbel sample, g_k:
p(u_1, ..., u_{k-1}) = \int_{-\infty}^{\infty} dg_k \; p(u_1, ..., u_{k-1} \mid g_k)\, p(g_k)
= \int_{-\infty}^{\infty} dg_k \; f(g_k, 0) \prod_{i=1}^{k-1} f(x_k + g_k + u_i, x_i)
= \int_{-\infty}^{\infty} dg_k \; e^{-g_k - e^{-g_k}} \prod_{i=1}^{k-1} e^{x_i - u_i - x_k - g_k - e^{x_i - u_i - x_k - g_k}}
Published as a conference paper at ICLR 2017 | 1611.01144#36 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 36 | [17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. JMLR, 2016.
[18] Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 1992.
[19] Zachary C Lipton, Jianfeng Gao, Lihong Li, Xiujun Li, Faisal Ahmed, and Li Deng. Efficient exploration for dialogue policy learning with bbq networks & replay buffer spiking. In AAAI, 2018.
[20] James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 1995.
[21] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of learning and motivation, 1989.
[22] Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 2015.
[23] Teodor Mihai Moldovan and Pieter Abbeel. Safe exploration in Markov decision processes. In ICML, 2012. | 1611.01211#36 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 36 | # 6.1 ABLATIONS
To further tease apart the contributions of the different components of ACER, we conduct an ablation analysis where we individually remove Retrace / Q(λ) off-policy correction, SDNs, trust region, and truncation with bias correction from the algorithm. As shown in Figure 4, Retrace and off-policy correction, SDNs, and trust region are critical: removing any one of them leads to a clear deterioration of the performance. Truncation with bias correction did not alter the results in the Fish and Walker2d tasks. However, in Humanoid, where the dimensionality of the action space is much higher, including truncation and bias correction brings a significant boost which makes the originally kneeling humanoid stand. Presumably, the high dimensionality of the action space increases the variance of the importance weights which makes truncation with bias correction important. For more details on the experimental setup please see Appendix E.4.
# 7 THEORETICAL ANALYSIS | 1611.01224#36 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 37 |
We perform a change of variables with v = e^{-g_k}, so dv = -e^{-g_k} dg_k and dg_k = -e^{g_k} dv = -dv/v, and define u_k = 0 to simplify notation:
p(u_1, \dots, u_{k-1}) = \int_0^\infty dv\; v^{k-1} \left(\prod_{i=1}^{k-1} e^{x_i - u_i - x_k}\right) \exp\!\left(-v \sum_{i=1}^{k} e^{x_i - u_i - x_k}\right) \qquad (15)
= \Gamma(k) \left(\prod_{i=1}^{k-1} e^{x_i - u_i - x_k}\right) \left(\sum_{i=1}^{k} e^{x_i - u_i - x_k}\right)^{-k} \qquad (16)
= \Gamma(k)\, \exp\!\left(\sum_{i=1}^{k} (x_i - u_i) - k x_k\right) \left(e^{-x_k} \sum_{i=1}^{k} e^{x_i - u_i}\right)^{-k} \qquad (17)
= \Gamma(k) \left(\prod_{i=1}^{k} e^{x_i - u_i}\right) \left(\sum_{i=1}^{k} e^{x_i - u_i}\right)^{-k} \qquad (18)
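A small sketch (ours, not from the paper) that evaluates the closed form in Eq. 18 in log space, appending u_k = 0 as in the derivation:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def centered_gumbel_logpdf(u, x):
    """log p(u_1..u_{k-1}) from Eq. 18: Gamma(k) * prod_i e^{x_i-u_i} * (sum_i e^{x_i-u_i})^{-k}."""
    u = np.append(u, 0.0)            # u_k = 0 by convention
    k = len(x)
    t = x - u
    return gammaln(k) + t.sum() - k * logsumexp(t)

print(centered_gumbel_logpdf(np.array([0.3, -0.2]), np.log(np.array([0.2, 0.3, 0.5]))))
```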
B.2 TRANSFORMING TO A GUMBEL-SOFTMAX
Given samples u_1, ..., u_{k-1} from the centered Gumbel distribution, we can apply a deterministic transformation h to yield the first k - 1 coordinates of the sample from the Gumbel-Softmax:
y_{1:k-1} = h(u_{1:k-1}), \qquad h_i(u_{1:k-1}) = \frac{\exp(u_i/\tau)}{1 + \sum_{j=1}^{k-1} \exp(u_j/\tau)} \qquad (19) | 1611.01144#37 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 37 | [23] Teodor Mihai Moldovan and Pieter Abbeel. Safe exploration in Markov decision processes. In ICML, 2012.
[24] Makoto Murata and Seiichi Ozawa. A memory-based reinforcement learning model utilizing macro- actions. In Adaptive and Natural Computing Algorithms. 2005.
[25] Will Night. The AI that cut googleâs energy bill could soon help you. MIT Tech Review, 2016.
[26] Jurgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In From animals to animats: SAB90, 1991.
[27] Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. 2016.
[28] William F Sharpe. Mutual fund performance. The Journal of Business, 1966.
[29] David Silver et al. Mastering the game of go with deep neural networks and tree search. Nature, 2016.
[30] Masashi Sugiyama and Motoaki Kawanabe. Machine learning in non-stationary environments: Intro- duction to covariate shift adaptation. MIT Press, 2012.
[31] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 1988. | 1611.01211#37 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 37 | # 7 THEORETICAL ANALYSIS
Retrace is a very recent development in reinforcement learning. In fact, this work is the first to consider Retrace in the policy gradients setting. For this reason, and given the core role that Retrace plays in ACER, it is valuable to shed more light on this technique. In this section, we will prove that Retrace can be interpreted as an application of the importance weight truncation and bias correction trick advanced in this paper.
Consider the following equation:
Q^\pi(x_t, a_t) = \mathbb{E}_{x_{t+1} a_{t+1}} \left[ r_t + \gamma \rho_{t+1} Q^\pi(x_{t+1}, a_{t+1}) \right] . \qquad (17)
If we apply the weight truncation and bias correction trick to the above equation we obtain
Q^\pi(x_t, a_t) = \mathbb{E}_{x_{t+1} a_{t+1}}\!\left[ r_t + \gamma \bar{\rho}_{t+1} Q^\pi(x_{t+1}, a_{t+1}) + \gamma \mathbb{E}_{a \sim \pi}\!\left[ \left(\frac{\rho_{t+1}(a) - c}{\rho_{t+1}(a)}\right)_{\!+} Q^\pi(x_{t+1}, a) \right] \right] \qquad (18)
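To make the decomposition in Eq. 18 concrete, here is a small NumPy check (ours) of the underlying identity E_{a~µ}[ρ(a) h(a)] = E_{a~µ}[min(c, ρ(a)) h(a)] + E_{a~π}[((ρ(a)-c)/ρ(a))_+ h(a)], which holds for any function h of the action:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
pi = rng.dirichlet(np.ones(n))        # target policy pi(.|x)
mu = rng.dirichlet(np.ones(n))        # behaviour policy mu(.|x)
h = rng.normal(size=n)                # any function of the action, e.g. Q(x_{t+1}, .)
c = 2.0
rho = pi / mu

full_is    = np.sum(mu * rho * h)                                 # E_mu[rho * h] = E_pi[h]
truncated  = np.sum(mu * np.minimum(c, rho) * h)                  # E_mu[min(c, rho) * h]
correction = np.sum(pi * np.clip((rho - c) / rho, 0, None) * h)   # E_pi[((rho - c)/rho)_+ h]
print(np.allclose(full_is, truncated + correction))               # True
```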
By recursively expanding Q^π as in Equation (18), we can represent Q^π(x, a) as: | 1611.01224#37 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 38 | y_{1:k-1} = h(u_{1:k-1}), \qquad h_i(u_{1:k-1}) = \frac{\exp(u_i/\tau)}{1 + \sum_{j=1}^{k-1} \exp(u_j/\tau)}
Note that the final coordinate probability y_k is fixed given the first k - 1, as \sum_{i=1}^{k} y_i = 1:
y_k = \left(1 + \sum_{j=1}^{k-1} \exp(u_j/\tau)\right)^{-1} = 1 - \sum_{j=1}^{k-1} y_j \qquad (20)
We can thus compute the probability of a sample from the Gumbel-Softmax using the change of variables formula on only the first k - 1 variables:
p(y_{1:k}) = p\!\left(h^{-1}(y_{1:k-1})\right) \left|\det\!\left(\frac{\partial h^{-1}(y_{1:k-1})}{\partial y_{1:k-1}}\right)\right| \qquad (21)
Thus we need to compute two more pieces: the inverse of h and its Jacobian determinant. The inverse of h is:
h_i^{-1}(y_{1:k-1}) = \tau \times \left(\log y_i - \log\!\left(1 - \sum_{j=1}^{k-1} y_j\right)\right) = \tau \times \left(\log y_i - \log y_k\right) \qquad (22)
with Jacobian | 1611.01144#38 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01211 | 38 | [31] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 1988.
[32] Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media, 2013.
[33] Christopher J.C.H. Watkins and Peter Dayan. Q-learning. Machine Learning, 1992.
# An extension to Theorem 2
In practice, we gradually learn and improve F: the difference between the learned models after two consecutive updates, F_t and F_{t+1}, and consequently between ω^{π_{F_t}, γ_plan} and ω^{π_{F_{t+1}}, γ_plan}, decreases. While F_{t+1} is learned using samples drawn from ω^{π_{F_t}, γ_plan}, with high probability
\int_{s \in S} \omega^{\pi_{F_t}, \gamma_{plan}}(s)\, \big|F(s) - F_{t+1}(s)\big|\, ds \;\lesssim\; \sqrt{\frac{VC(F) + \log\frac{1}{\delta}}{N}}
But in the final bound in Theorem 2, we are interested in \int_{s \in S} \omega^{\pi_{F_{t+1}}, \gamma_{plan}}(s)\, |F(s) - F_{t+1}(s)|\, ds. Decomposing it into two terms,
\int_{s \in S} \omega^{\pi_{F_{t+1}}, \gamma_{plan}}(s)\, |F(s) - F_{t+1}(s)|\, ds \;\le\; \int_{s \in S} \omega^{\pi_{F_t}, \gamma_{plan}}(s)\, |F(s) - F_{t+1}(s)|\, ds \;+\; \int_{s \in S} \big|\omega^{\pi_{F_{t+1}}, \gamma_{plan}}(s) - \omega^{\pi_{F_t}, \gamma_{plan}}(s)\big|\, ds
â« | 1611.01211#38 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 38 | By recursively expanding Q^π as in Equation (18), we can represent Q^π(x, a) as:
Q^\pi(x, a) = \mathbb{E}_{\mu}\!\left[ \sum_{t \ge 0} \gamma^t \left(\prod_{i=1}^{t} \bar{\rho}_i\right) \left( r_t + \gamma \mathbb{E}_{b \sim \pi}\!\left[ \left(\frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)}\right)_{\!+} Q^\pi(x_{t+1}, b) \right] \right) \right] \qquad (19)
The expectation E_µ is taken over trajectories starting from x with actions generated with respect to µ. When Q^π is not available, we can replace it with our current estimate Q to get a return-based
3 For videos of the policies learned with ACER, please see: https://www.youtube.com/watch?v= NmbeQYoVv5g&list=PLkmHIkhlFjiTlvwxEnsJMs3v7seR5HSP-.
Published as a conference paper at ICLR 2017 | 1611.01224#38 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01211 | 39 |
Therefore, an extra term proportional to \int_{s \in S} \big|\omega^{\pi_{F_{t+1}}, \gamma_{plan}}(s) - \omega^{\pi_{F_t}, \gamma_{plan}}(s)\big|\, ds appears in the final bound of Theorem 2.
Regarding the choice of γ_plan: if λ is less than one, then the best choice of γ_plan is γ. Otherwise, if \sqrt{(VC(F) + \log\frac{1}{\delta})/N} were equal to the exact error in the model estimation and were greater than 1, then the best γ_plan would be 0. Since \sqrt{(VC(F) + \log\frac{1}{\delta})/N} is an upper bound on the model estimation error, not the exact error, the choice of zero for γ_plan is not recommended, and a choice of γ_plan ≤ γ is preferred.
14 | 1611.01211#39 | Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear | Many practical environments contain catastrophic states that an optimal agent
would visit infrequently or never. Even on toy problems, Deep Reinforcement
Learning (DRL) agents tend to periodically revisit these states upon forgetting
their existence under a new policy. We introduce intrinsic fear (IF), a learned
reward shaping that guards DRL agents against periodic catastrophes. IF agents
possess a fear model trained to predict the probability of imminent
catastrophe. This score is then used to penalize the Q-learning objective. Our
theoretical analysis bounds the reduction in average return due to learning on
the perturbed objective. We also prove robustness to classification errors. As
a bonus, IF models tend to learn faster, owing to reward shaping. Experiments
demonstrate that intrinsic-fear DQNs solve otherwise pathological environments
and improve on several Atari games. | http://arxiv.org/pdf/1611.01211 | Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng | cs.LG, cs.NE, stat.ML | null | null | cs.LG | 20161103 | 20180313 | [
{
"id": "1802.04412"
}
] |
1611.01224 | 39 |
(Figure 4 panels: columns Fish, Walker2d, Humanoid; rows No Trust Region, No SDNs, No Retrace nor Off-Policy Corr., No Truncation & Bias Corr.)
Figure 4: Ablation analysis evaluating the effect of different components of ACER. Each row compares ACER with and without one component. The columns represent three control tasks. Red lines, in all plots, represent ACER whereas green lines represent ACER with missing components. This study indicates that all 4 components studied improve performance, and 3 of them are critical to success. Note that the ACER curve is of course the same in all rows.
estimate of Q^π. This operation also defines an operator:
\mathcal{B}Q(x, a) = \mathbb{E}_{\mu}\!\left[ \sum_{t \ge 0} \gamma^t \left(\prod_{i=1}^{t} \bar{\rho}_i\right) \left( r_t + \gamma \mathbb{E}_{b \sim \pi}\!\left[ \left(\frac{\rho_{t+1}(b) - c}{\rho_{t+1}(b)}\right)_{\!+} Q(x_{t+1}, b) \right] \right) \right] \qquad (20)
B is a contraction operator with a unique fixed point Q^π. | 1611.01224#39 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01144 | 40 |
where e is a k - 1 dimensional vector of ones, and we've used the identities det(AB) = det(A) det(B), det(diag(x)) = \prod_i x_i, and det(I + uv^T) = 1 + u^T v. We can then plug into the change of variables formula (Eq. 21) using the density of the centered Gumbel (Eq. 18), the inverse of h (Eq. 22) and its Jacobian determinant (Eq. 26):
p(y_1, \dots, y_k) = \Gamma(k) \left(\prod_{i=1}^{k} \exp(x_i) \left(\frac{y_k}{y_i}\right)^{\tau}\right) \left(\sum_{i=1}^{k} \exp(x_i) \left(\frac{y_k}{y_i}\right)^{\tau}\right)^{-k} \frac{\tau^{k-1}}{\prod_{i=1}^{k} y_i} \qquad (27)
= \Gamma(k)\, \tau^{k-1} \left(\sum_{i=1}^{k} \frac{\exp(x_i)}{y_i^{\tau}}\right)^{-k} \prod_{i=1}^{k} \frac{\exp(x_i)}{y_i^{\tau+1}} \qquad (28)
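A short sketch (ours) evaluating the log of the density in Eq. 28 at a point y on the simplex:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def gumbel_softmax_logpdf(y, logits, tau):
    """log p(y_1..y_k) from Eq. 28, with logits x_i = log pi_i and temperature tau."""
    k = len(logits)
    return (gammaln(k) + (k - 1) * np.log(tau)
            - k * logsumexp(logits - tau * np.log(y))
            + np.sum(logits - (tau + 1) * np.log(y)))

y = np.array([0.2, 0.3, 0.5])
print(gumbel_softmax_logpdf(y, np.log(np.array([0.3, 0.3, 0.4])), tau=1.0))
```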
| 1611.01144#40 | Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete
structure in the world. However, stochastic neural networks rarely use
categorical latent variables due to the inability to backpropagate through
samples. In this work, we present an efficient gradient estimator that replaces
the non-differentiable sample from a categorical distribution with a
differentiable sample from a novel Gumbel-Softmax distribution. This
distribution has the essential property that it can be smoothly annealed into a
categorical distribution. We show that our Gumbel-Softmax estimator outperforms
state-of-the-art gradient estimators on structured output prediction and
unsupervised generative modeling tasks with categorical latent variables, and
enables large speedups on semi-supervised classification. | http://arxiv.org/pdf/1611.01144 | Eric Jang, Shixiang Gu, Ben Poole | stat.ML, cs.LG | null | null | stat.ML | 20161103 | 20170805 | [
{
"id": "1602.06725"
},
{
"id": "1512.00567"
},
{
"id": "1609.01704"
}
] |
1611.01224 | 40 | In the following proposition, we show that B is a contraction operator with a unique fixed point Q^π, and that it is equivalent to the Retrace operator.
Proposition 1. The operator B is a contraction operator such that ‖BQ − Q^π‖_∞ ≤ γ ‖Q − Q^π‖_∞, and B is equivalent to Retrace.
The above proposition not only shows an alternative way of arriving at the same operator, but also provides a different proof of contraction for Retrace. Please refer to Appendix C for the regularization conditions and proof of the above proposition.
Finally, B, and therefore Retrace, generalizes both the Bellman operator T^π and importance sampling. Specifically, when c = 0, B recovers the Bellman operator T^π, and when c = ∞, B recovers importance sampling; see Appendix C.
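As a rough illustration (ours, not the paper's code), the recursive targets induced by this operator family for one sampled trajectory, with the two special cases visible through the truncation constant c:

```python
import numpy as np

def operator_b_targets(rewards, rho, q, v, gamma=0.99, c=1.0, bootstrap=0.0):
    """Recursive targets induced by B / Retrace for one trajectory.

    rewards[i]: r_i, rho[i]: pi(a_i|x_i)/mu(a_i|x_i), q[i]: Q(x_i, a_i), v[i]: E_pi[Q(x_i, .)].
    """
    targets = np.zeros(len(rewards))
    q_ret = bootstrap                        # 0 for a terminal x_k, else V(x_k)
    for i in reversed(range(len(rewards))):
        q_ret = rewards[i] + gamma * q_ret
        targets[i] = q_ret
        q_ret = min(c, rho[i]) * (q_ret - q[i]) + v[i]   # truncated importance weight
    return targets

# c = 0 bootstraps immediately (one-step target r_i + gamma * V(x_{i+1}));
# letting c grow recovers full importance sampling.
r = np.array([1.0, 0.0, 1.0]); rho = np.array([0.5, 2.0, 1.5])
q = np.array([0.8, 0.6, 0.9]); v = np.array([0.7, 0.5, 0.8])
print(operator_b_targets(r, rho, q, v, c=0.0), operator_b_targets(r, rho, q, v, c=1e9))
```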
# 8 CONCLUDING REMARKS | 1611.01224#40 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 41 |
# 8 CONCLUDING REMARKS
We have introduced a stable off-policy actor critic that scales to both continuous and discrete action spaces. This approach integrates several recent advances in RL in a principled manner. In addition, it integrates three innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling networks and an efficient trust region policy optimization method.
We showed that the method not only matches the performance of the best known methods on Atari, but that it also outperforms popular techniques on several continuous control problems.
The efficient trust region optimization method advanced in this paper performs remarkably well in continuous domains. It could prove very useful in other deep learning domains, where it is hard to stabilize the training process.
# ACKNOWLEDGMENTS
We are very thankful to Marc Bellemare, Jascha Sohl-Dickstein, and Sébastien Racaniere for proofreading and valuable suggestions.
# REFERENCES
M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. JAIR, 47:253â279, 2013. | 1611.01224#41 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 42 | G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint 1606.01540, 2016.
T. Degris, M. White, and R. S. Sutton. Off-policy actor-critic. In ICML, pp. 457â464, 2012.
Anna Harutyunyan, Marc G Bellemare, Tom Stepleton, and Remi Munos. Q (λ) with off-policy corrections. arXiv preprint arXiv:1602.04951, 2016.
N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.
T. Jie and P. Abbeel. On a connection between importance sampling and the likelihood ratio policy gradient. In NIPS, pp. 1000â1008, 2010.
S. Levine and V. Koltun. Guided policy search. In ICML, 2013. | 1611.01224#42 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 43 | S. Levine and V. Koltun. Guided policy search. In ICML, 2013.
S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.
L.J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3):293â321, 1992.
N. Meuleau, L. Peshkin, L. P. Kaelbling, and K. Kim. Off-policy policy search. Technical report, MIT AI Lab, 2000. | 1611.01224#43 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 44 | N. Meuleau, L. Peshkin, L. P. Kaelbling, and K. Kim. Off-policy policy search. Technical report, MIT AI Lab, 2000.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540): 529â533, 2015.
V. Mnih, A. Puigdom`enech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv:1602.01783, 2016. | 1611.01224#44 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 45 | R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efï¬cient off-policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.
K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. In EMNLP, 2015.
J. Oh, V. Chockalingam, S. P. Singh, and H. Lee. Control of memory, active perception, and action in Minecraft. In ICML, 2016.
D. Precup, R. S. Sutton, and S. Singh. Eligibility traces for off-policy policy evaluation. In ICML, pp. 759â766, 2000.
T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In ICLR, 2016.
J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015a. | 1611.01224#45 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 46 | J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015a.
J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b.
D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
D. Silver, A. Huang, C.J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016. | 1611.01224#46 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 47 | R. S. Sutton, D. Mcallester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057â1063, 2000.
E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems, pp. 5026â5033, 2012.
Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016.
P. Wawrzy´nski. Real-time reinforcement learning by sequential actorâcritics and experience replay. Neural Networks, 22(10):1484â1497, 2009.
# A ACER PSEUDO-CODE FOR DISCRETE ACTIONS
# Algorithm 1 ACER for discrete actions (master algorithm) | 1611.01224#47 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 49 | Algorithm 2 ACER for discrete actions
Reset gradients dθ ← 0 and dθv ← 0. Initialize parameters θ′ ← θ and θ′v ← θv.
if not On-Policy then
Sample the trajectory {x0, a0, r0, µ(·|x0), . . . , xk, ak, rk, µ(·|xk)} from the replay memory.
else
Get state x0
end if
for i ∈ {0, . . . , k} do
Compute f(·|φθ′(xi)), Qθ′v(xi, ·) and f(·|φθa(xi)).
if On-Policy then
Perform ai according to f(·|φθ′(xi))
Receive reward ri and new state xi+1
µ(·|xi) ← f(·|φθ′(xi))
end if
ρ̄i ← min{1, f(ai|φθ′(xi)) / µ(ai|xi)}
end for
Qret ← 0 for terminal xk, otherwise Qret ← Σa Qθ′v(xk, a) f(a|φθ′(xk))
for i ∈ {k − 1, . . . , 0} do
Qret ← ri + γQret
Vi ← Σa Qθ′v(xi, a) f(a|φθ′(xi))
Computing quantities needed for trust region updating:
g ← min{c, ρi(ai)} ∇φθ′(xi) log | 1611.01224#49 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 50 | (xi, a) f(a|φθ′(xi))
Computing quantities needed for trust region updating:
g ← min{c, ρi(ai)} ∇φθ′(xi) log f(ai|φθ′(xi)) (Qret − Vi) + Σa [1 − c/ρi(a)]+ f(a|φθ′(xi)) ∇φθ′(xi) log f(a|φθ′(xi)) (Qθ′v(xi, a) − Vi)
k ← ∇φθ′(xi) DKL[ f(·|φθa(xi)) ‖ f(·|φθ′(xi)) ]
Accumulate gradients wrt θ′: dθ ← dθ + (∂φθ′(xi)/∂θ′) (g − max{0, (k⊤g − δ)/‖k‖2^2} k)
Accumulate gradients wrt θ′v: dθv ← dθv + ∇θ′v (Qret − Qθ′v(xi, ai))^2
Update Retrace target: Qret ← ρ̄i (Qret − Qθ′v(xi, ai)) + Vi
end for | 1611.01224#50 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 51 | end for Perform asynchronous update of θ using dθ and of θv using dθv. Updating the average policy network: θa ← αθa + (1 − α)θ
# B Q(λ) WITH OFF-POLICY CORRECTIONS
Given a trajectory generated under the behavior policy µ, the Q(λ) with off-policy corrections estimator (Harutyunyan et al., 2016) can be expressed recursively as follows:
# Qopc(xt, at) = rt + γ [Qopc(xt+1, at+1) − Qθv (xt+1, at+1)] + γ Vθv (xt+1).   (21)
Notice that Qopc(xt, at) is the same as Retrace with the exception that the truncated importance ratio is replaced with 1.
Algorithm 3 ACER for Continuous Actions
Reset gradients dθ ← 0 and dθv ← 0. Initialize parameters θ′ ← θ and θ′v ← θv. Sample the trajectory {x0, a0, r0, µ(·|x0), . . . , xk, ak, rk, µ(·|xk)} from the replay memory. for i ∈ {0, . . . , k} do
# v â θv. | 1611.01224#51 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
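The row above introduces Q(λ) with off-policy corrections as Retrace with the truncated importance ratio replaced by 1. The sketch below implements both backward recursions side by side; the toy inputs, the bootstrap handling, and the truncated ratios are assumptions consistent with the recursions quoted in the surrounding chunks rather than a transcription of the authors' code.

```python
import numpy as np

def backward_targets(rewards, q, v, rho_bar, gamma=0.99):
    """Compute Retrace and Q(lambda)-with-off-policy-corrections targets by
    backward recursion; Qopc simply uses a ratio of 1 instead of rho_bar.
    rewards, q, rho_bar have length k; v has length k + 1 (last entry bootstraps)."""
    k = len(rewards)
    q_ret = v[-1]
    q_opc = v[-1]
    retrace, opc = np.zeros(k), np.zeros(k)
    for i in reversed(range(k)):
        q_ret = rewards[i] + gamma * q_ret
        q_opc = rewards[i] + gamma * q_opc
        retrace[i], opc[i] = q_ret, q_opc
        # prepare the targets passed to step i - 1
        q_ret = rho_bar[i] * (q_ret - q[i]) + v[i]
        q_opc = 1.0 * (q_opc - q[i]) + v[i]
    return retrace, opc

rews = np.array([1.0, 0.0, 1.0]); qs = np.array([0.5, 0.4, 0.6])
vs = np.array([0.4, 0.3, 0.5, 0.2]); rb = np.array([0.9, 1.0, 0.7])
print(backward_targets(rews, qs, vs, rb))
```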
1611.01224 | 52 | # v â θv.
Compute f(·|φθ′(xi)), Qθ′v(xi, ai), Vθ′v(xi) and f(·|φθa(xi)). Sample a′i ∼ f(·|φθ′(xi)).
ρi ← f(ai|φθ′(xi)) / µ(ai|xi) and ρ′i ← f(a′i|φθ′(xi)) / µ(a′i|xi).
ci ← min{1, (ρi)^(1/d)}.
end for
Qret ← 0 for terminal xk, Vθ′v(xk) otherwise. Qopc ← Qret.
for i ∈ {k − 1, . . . , 0} do
Qret ← ri + γ Qret and Qopc ← ri + γ Qopc.
Computing quantities needed for trust region updating:
g © min {c, pi} Vsy,(xi) log f(ailbor (xi) (Q°° (ai, a2) â Vor, (a2)) + : - <| (Qor, (wi, a4) â Vor, (i) Vos (wi) log F (ailbo" (:)) Pils i ke Vb (ei DKx [F (1600 (2) NF ldo" (xi) | 1611.01224#52 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
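For continuous actions the chunk above truncates the per-step importance weight with a 1/d exponent, d being the action dimensionality. A minimal sketch of that quantity, with made-up density values:

```python
def truncated_ratio(pi_density, mu_density, d):
    """Per-step truncated weight c_i = min(1, rho_i ** (1/d)) for a d-dimensional
    action space; the density values below are illustrative scalars."""
    rho = pi_density / mu_density
    return min(1.0, rho ** (1.0 / d))

print(truncated_ratio(pi_density=0.02, mu_density=0.005, d=6))   # rho = 4   -> 1.0
print(truncated_ratio(pi_density=0.001, mu_density=0.02, d=6))   # rho = 0.05 -> ~0.61
```

Taking the d-th root keeps the per-dimension contribution of the weight bounded, which is why the truncation behaves comparably across action spaces of different sizes.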
1611.01224 | 53 | Accumulate gradients wrt 6: d@ ~â dO + Oe or (aa) (9 â max {0, saa} k) ~ Welz Accumulate gradients wrt 6/,: d0, <â dO, + (Qr⢠â Qor, (i, ai)) Vor, Qor, (i, ai) dB â dB. + min {1, pi} (Q"*(ae,as) â Quy (we, a)) Vor, Vo, (xs) Update Retrace target: Qâ < c; (Ce - Qo, (xi, a)) + Vor (xi) Update Retrace target: Q°?° â (Qâ" _ Qor, (xi, ai)) + Vor (xi) end for Perform asynchronous update of 6 using d@ and of 6, using d6,. Updating the average policy network: 0, < a0, + (1âa)@ | 1611.01224#53 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
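The algorithm in the row above ends by moving the average policy network towards the current parameters. A minimal sketch of that soft update; the value of alpha here is an assumed illustration, not the paper's setting.

```python
import numpy as np

def update_average_policy(theta_a, theta, alpha=0.99):
    """Soft update used by the trust region: theta_a <- alpha*theta_a + (1-alpha)*theta,
    applied parameter-wise over a list of arrays."""
    return [alpha * pa + (1.0 - alpha) * p for pa, p in zip(theta_a, theta)]

theta_a = [np.zeros(3)]
theta = [np.ones(3)]
print(update_average_policy(theta_a, theta))   # moves slowly towards theta
```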
1611.01224 | 54 | Because of the lack of the truncated importance ratio, the operator deï¬ned by Qopc is only a contraction if the target and behavior policies are close to each other (Harutyunyan et al., 2016). Q(λ) with off-policy corrections is therefore less stable compared to Retrace and unsafe for policy evaluation. Qopc, however, could better utilize the returns as the traces are not cut by the truncated importance weights. As a result, Qopc could be used efï¬ciently to estimate QÏ in policy gradient (e.g. in Equation (16)). In our continuous control experiments, we have found that Qopc leads to faster learning.
C RETRACE AS TRUNCATED IMPORTANCE SAMPLING WITH BIAS CORRECTION
For the purpose of proving proposition 1, we assume our environment to be a Markov Decision Process. For notational simplicity, we also restrict the state space to be finite; P defines the state transition probabilities and r the reward function.
Proof of proposition 1. First we show that B is a contraction operator.
# B | 1611.01224#54 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 55 | 15
Published as a conference paper at ICLR 2017
Proof of proposition 1. First we show that is a contraction operator.
# B
< E& < BQ(x, a) â Q"(2,a)| pr+1(0) â â| pr+i(b) + pesa(b) â â| | 4 pr+i(b) (Q(we41,b) â Q* (rsa, »)) Q(t141,0) â Q* (@141, ni)| a) (a â Pry) sup 1Q (241, 0) â Q* (@141, 0) (22)
< E& < Where P;+1 1_E baw due to Hélderâs inequality.
Where P;+1 1_E a baw due to Hélderâs inequality. +41 (0) | iE [Pt41(b)]. The last inequality in the above equation is + bw
(22) IA su su p xb Q(e,b) Q(e,b) Q(e,b) Q(e,b) ~ Q* (x, b) E, ~ Q"(«,b)|E, ~ Q"(«,b)|E, â Q*(x,b) | 1611.01224#55 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 56 | where C = 37157 (Mn 71). Since C > D}_9 7 (Mn 7) = 1, we have that yCâ(C'â1) < y. Therefore, we have shown that B is a contraction operator.
shown that B is a contraction operator. is the same as Retrace. By apply the trunction and bias correction trick, we have
# B
Now we show that B
B [Q(xt+1, b)] = E bâ¼Âµ
JE ((oes2.0)] =F [esr 0)Qee41, 0) +E, (222) ee ») - 2) prsil
By adding and subtracting the two sides of Equation (23) inside the summand of Equation (20), we have
# B | 1611.01224#56 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 57 | By adding and subtracting the two sides of Equation (23) inside the summand of Equation (20), we have
# B
BQ(x, a) & t>0 ~~, [Pri (b)Q(x141,b) ss: t>0 al t>0 al t>0 z(t) i=1 (te i=1 (I i=1 Me Me Me b)-c m+7E (a ) bun pr+i( (E354) ae] Q(at41, Q(a141 Q(aH41, 1))â 9, Bees 0438) »d)) â yPr+1Q(@e41, z=) b)| - Q(erar)) + Q(z, a) = RQ(x,
16
# Q(x, a)
Published as a conference paper at ICLR 2017
In the remainder of this appendix, we show that importance sampling. First, we reproduce the deï¬nition of generalizes both the Bellman operator and B :
# B Ït+1(b)
t b)-c BQ(x,a) = Ey yy ( a) (: +7E (Gece Q(t41, »)) 120 je bv pr+i(b) + | 1611.01224#57 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 58 | BQ(x,a) = Ey yy 120 When c = 0, we have that p; = 0
for all i. Therefore only the first summand of the sum remains:
BQ(x, a) = Eµ [ r0 + γ E b∼π [Q(x1, b)] ] .
In this case B is the Bellman operator. When c = ∞, the compensation term disappears and ρ̄i = ρi, so that
BQ(x, a) = Eµ [ Σ t≥0 γ^t ( Π i=1..t ρi ) rt ] .
In this case B is the same operator defined by importance sampling.
# D DERIVATION OF V target
By using the truncation and bias correction trick, we can derive the following:
: Tala, a)-1 v(o) =, [min {1 HIE Qreena)] +B (JAP) orterssa)). any (ala) ann pla) |, | 1611.01224#58 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
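The derivation in the row above produces a single-sample target for the state value. The helper below sketches the resulting estimator as I read that derivation (truncate the importance weight at 1 and correct with the current critic); treat the exact form as a reading aid rather than a verbatim transcription, and the inputs as invented numbers.

```python
def v_target(q_ret, q_value, v_value, rho):
    """Sketch of the state-value target: V_target = min(1, rho) * (Q_ret - Q) + V."""
    return min(1.0, rho) * (q_ret - q_value) + v_value

print(v_target(q_ret=1.3, q_value=1.0, v_value=0.8, rho=2.0))   # 0.8 + 1.0 * 0.3 = 1.1
```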
1611.01224 | 60 | Vine" (ae) = min {1 wee | QM (a,a:) + E (23) . Qo, (@, ») - (24) * p(ar|re) ~T pr(a)
Through the truncation and bias correction trick again, we have the following identity:
E a∼π [Qθv (xt, a)] = E a∼µ [ min{1, π(a|xt)/µ(a|xt)} Qθv (xt, a) ] + E a∼π [ [(π(a|xt) − µ(a|xt))/π(a|xt)]+ Qθv (xt, a) ] .   (25)
Adding and subtracting both sides of Equation (25) to the RHS of (24) while taking a Monte Carlo approximation, we arrive at V target(xt).
E CONTINUOUS CONTROL EXPERIMENTS
E.1 DESCRIPTION OF THE CONTINUOUS CONTROL PROBLEMS
Our continuous control tasks were simulated using the MuJoCo physics engine (Todorov et al. (2012)). For all experiments we considered an episodic setup with an episode length of T = 500 steps and a discount factor of 0.99. | 1611.01224#60 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 61 | Cartpole swingup This is an instance of the classic cart-pole swing-up task. It consists of a pole attached to a cart running on a ï¬nite track. The agent is required to balance the pole near the center of the track by applying a force to the cart only. An episode starts with the pole at a random angle and zero velocity. A reward zero is given except when the pole is approximately upright (within 0.05) for a track length of 2.4. ± The observations include position and velocity of the cart, angle and angular velocity of the pole. a sine/cosine of the angle, the position of the tip of the pole, and Cartesian velocities of the pole. The dimension of the action space is 1.
Reacher3 The agent needs to control a planar 3-link robotic arm in order to minimize the distance between the end effector of the arm and a target. Both arm and target position are chosen randomly at the beginning of each episode. The reward is zero except when the tip of the arm is within 0.05 of the target, where it is one. The 8-dimensional observation consists of the angles and angular velocity of all joints as well as the displacement between target and the end effector of the arm. The 3-dimensional action are the torques applied to the joints. | 1611.01224#61 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 62 | Cheetah The Half-Cheetah (Wawrzyriski| (2009); [Heess et al.|(2015)) is a planar locomotion task where the agent is required to control a 9-DoF cheetah-like body (in the vertical plane) to move in the direction of the x-axis as quickly as possible. The reward is given by the velocity along the x-axis and a control cost: r = vz + 0.1||al|â. The observation vector consists of the z-position of the torso and its x, z velocities as well as the joint angles and angular velocities. The action dimension is 6. | 1611.01224#62 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 63 | Fish The goal of this task is to control a 13-DoF ï¬sh-like body to swim to a random target in 3D space. The reward is given by the distance between the head of the ï¬sh and the target, a small penalty for the body not being upright, and a control cost. At the beginning of an episode the ï¬sh is initialized facing in a random direction relative to the target. The 24-dimensional observation is given by the displacement between the ï¬sh and the target projected onto the torso coordinate frame, the joint angles and velocities, the cosine of the angle between the z-axis of the torso and the world z-axis, and the velocities of the torso in the torso coordinate frame. The 5-dimensional actions control the position of the side ï¬ns and the tail. | 1611.01224#63 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 64 | Walker The 9-DoF planar walker is inspired by (Schulman et al. (2015a)) and is required to move forward along the x-axis as quickly as possible without falling. The reward consists of the x-velocity of the torso, a quadratic control cost, and terms that penalize deviations of the torso from the preferred height and orientation (i.e. terms that encourage the walker to stay standing and upright). The 24-dimensional observation includes the torso height, velocities of all DoFs, as well as sines and cosines of all body orientations in the x-z plane. The 6-dimensional action controls the torques applied at the joints. Episodes are terminated early with a negative reward when the torso exceeds upper and lower limits on its height and orientation.
Humanoid The humanoid is a 27 degrees-of-freedom body with 21 actuators (21 action dimensions). It is initialized lying on the ground in a random configuration and the task requires it to achieve a standing position. The reward function penalizes deviations from the height of the head when standing, and includes additional terms that encourage upright standing, as well as a quadratic action penalty. The 94 dimensional observation contains information about joint angles and velocities and several derived features reflecting the body's pose.
E.2 UPDATE EQUATIONS OF THE BASELINE TIS | 1611.01224#64 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 65 | E.2 UPDATE EQUATIONS OF THE BASELINE TIS
The baseline TIS follows the following update equations,
updates to the policy: min{5, (Π i=0..k−1 ρt+i)} [ Σ i=0..k−1 γ^i rt+i + γ^k Vθv (xt+k) − Vθv (xt) ] ∇θ log πθ(at|xt),
updates to the value: min{5, (Π i=0..k−1 ρt+i)} [ Σ i=0..k−1 γ^i rt+i + γ^k Vθv (xt+k) − Vθv (xt) ] ∇θv Vθv (xt).
The baseline Trust-TIS is appropriately modified according to the trust region update described in Section 3.3.
E.3 SENSITIVITY ANALYSIS
In this section, we assess the sensitivity of ACER to hyper-parameters. In Figures 5 and 6, we show, for each game, the ï¬nal performance of our ACER agent versus the choice of learning rates, and the trust region constraint δ respectively. | 1611.01224#65 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
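The baseline TIS update in the row above multiplies a truncated product of importance weights by a k-step advantage and the score of the current action. The sketch below follows that reading of the (garbled) equations; the truncation constant, discount, and toy inputs are assumptions for illustration.

```python
import numpy as np

def tis_policy_update(rewards, rhos, v_last, v_first, grad_logp, gamma=0.99, c=5.0):
    """Baseline TIS update direction: min(c, prod(rhos)) times the k-step
    advantage, times grad log pi of the first action in the window."""
    k = len(rewards)
    weight = min(c, float(np.prod(rhos)))
    k_step_return = sum(gamma**i * rewards[i] for i in range(k)) + gamma**k * v_last
    return weight * (k_step_return - v_first) * grad_logp

print(tis_policy_update([1.0, 0.0], [1.2, 0.8], v_last=0.5, v_first=0.4,
                        grad_logp=np.array([0.1, -0.2])))
```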
1611.01224 | 66 | Note, as we are doing random hyper-parameter search, each learning rate is associated with a random δ and vice versa. It is therefore difï¬cult to tease out the effect of either hyper-parameter independently.
We observe, however, that ACER is not very sensitive to the hyper-parameters overall. In addition, smaller δ's do not seem to adversely affect the final performance while larger δ's do in domains of higher action dimensionality. Similarly, smaller learning rates perform well while bigger learning rates tend to hurt final performance in domains of higher action dimensionality.
[Figure 5 panels: Cartpole, Reacher3, Cheetah, Fish, Walker2D, Humanoid; x-axis: log learning rate, y-axis: cumulative reward.]
Figure 5: Log learning rate vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the ï¬nal performance after training for all 30 log learning rates considered. Note that each learning rate is associated with a different δ as a consequence of random search over hyper-parameters. | 1611.01224#66 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
1611.01224 | 67 | Fish Walker2D. Cumulative Reward Cumulative Reward Trust Region Constraint (3) Trust Region Constraint (6) Humanoid . Cheetah Be 3 @ Zao E 8: . Trust Region Constraint (6) Cartpole ofa a eee Bux 3 gx $ wx z 8 wx Cumulative Reward Reacher3 Cumulative Reward Trust Region Constraint () Trust Region Constraint (4) Trust Region Constraint (6)
Figure 6: Trust region constraint (δ) vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 trust region constraints (δ) searched over. Note that each δ is associated with a different learning rate as a consequence of random search over hyper-parameters.
# E.4 EXPERIMENTAL SETUP OF ABLATION ANALYSIS
For the ablation analysis, we use the same experimental setup as in the continuous control experiments while removing one component at a time.
To evaluate the effectiveness of Retrace/Q(λ) with off-policy correction, we replace both with importance sampling based estimates (following Degris et al. (2012)) which can be expressed recursively: Rt = rt + Ït+1Rt+1. | 1611.01224#67 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
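The ablation in the row above replaces Retrace/Q(λ) with a plain importance-sampling return defined recursively. A minimal sketch of that recursion; the discount factor is an assumption, since the extracted text only shows R_t = r_t + ρ_{t+1} R_{t+1}, and the index alignment of the ratios is chosen for clarity.

```python
def is_returns(rewards, rhos_next, gamma=0.99):
    """Backward recursion R_t = r_t + gamma * rho_{t+1} * R_{t+1};
    rhos_next[t] holds rho_{t+1}."""
    R = 0.0
    out = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * rhos_next[t] * R
        out[t] = R
    return out

print(is_returns([1.0, 1.0, 1.0], [0.5, 2.0, 1.0]))
```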
1611.01224 | 68 | To evaluate the Stochastic Dueling Networks, we replace it with two separate networks: one comput- ing the state values and the other Q values. Given Qret(xt, at), the naive way of estimating the state values is to use the following update rule:
(ρt Qret(xt, at) − Vθv (xt)) ∇θv Vθv (xt).
The above update rule, however, suffers from high variance. We consider instead the following update rule:
ρt (Qret(xt, at) − Vθv (xt)) ∇θv Vθv (xt),
which has markedly lower variance. We update our Q estimates as before.
To evaluate the effects of the truncation and bias correction trick, we change our c parameter (see Equation (16)) to
∞.
20 | 1611.01224#68 | Sample Efficient Actor-Critic with Experience Replay | This paper presents an actor-critic deep reinforcement learning agent with
experience replay that is stable, sample efficient, and performs remarkably
well on challenging environments, including the discrete 57-game Atari domain
and several continuous control problems. To achieve this, the paper introduces
several innovations, including truncated importance sampling with bias
correction, stochastic dueling network architectures, and a new trust region
policy optimization method. | http://arxiv.org/pdf/1611.01224 | Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas | cs.LG | 20 pages. Prepared for ICLR 2017 | null | cs.LG | 20161103 | 20170710 | [
{
"id": "1602.01783"
},
{
"id": "1509.02971"
},
{
"id": "1606.02647"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
},
{
"id": "1602.04951"
}
] |
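The row above contrasts two single-sample updates for the state value and argues the second has lower variance. The sketch below checks that claim numerically on a toy two-action problem; the policies are invented, and the critic is assumed to already equal the true value so that both estimators should average to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
pi = np.array([0.7, 0.3]); mu = np.array([0.5, 0.5])
q_ret = np.array([2.0, -1.0])            # illustrative per-action returns
v = float(np.dot(pi, q_ret))             # value both estimators should target
rho = pi / mu

a = rng.choice(2, p=mu, size=100_000)
naive = rho[a] * q_ret[a] - v            # (rho_t * Qret - V)
lower_var = rho[a] * (q_ret[a] - v)      # rho_t * (Qret - V)
print(naive.mean(), lower_var.mean())    # both approximately zero
print(naive.var(), lower_var.var())      # the second has lower variance
```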
1611.00712 | 1 | # ABSTRACT
The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce CONCRETE random variables: CONtinuous relaxations of disCRETE random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
# INTRODUCTION | 1611.00712#1 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
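The abstract in the row above describes the Concrete distribution as a simple reparameterization of relaxed one-hot states. A minimal sketch of drawing such a sample by perturbing logits with additive noise and taking a temperature-controlled softmax; the Gumbel form of the perturbation is the standard choice for this distribution and is assumed here, and the logits and temperature are invented.

```python
import numpy as np

def sample_concrete(logits, temperature, rng):
    """Sample a relaxed one-hot vector: softmax of (logits + Gumbel noise) / temperature.
    As the temperature goes to zero, samples approach discrete one-hot vectors."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / temperature
    z = z - z.max()                      # numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

rng = np.random.default_rng(0)
print(sample_concrete(np.array([1.0, 0.0, -1.0]), temperature=0.5, rng=rng))
```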
1611.00712 | 2 | # INTRODUCTION
Software libraries for automatic differentiation (AD) (Abadi et al., 2015; Theano Development Team, 2016) are enjoying broad use, spurred on by the success of neural networks on some of the most challenging problems of machine learning. The dominant mode of development in these libraries is to deï¬ne a forward parametric computation, in the form of a directed acyclic graph, that computes the desired objective. If the components of the graph are differentiable, then a backwards computation for the gradient of the objective can be derived automatically with the chain rule. The ease of use and unreasonable effectiveness of gradient descent has led to an explosion in the di- versity of architectures and objective functions. Thus, expanding the range of useful continuous operations can have an outsized impact on the development of new models. For example, a topic of recent attention has been the optimization of stochastic computation graphs from samples of their states. Here, the observation that AD âjust worksâ when stochastic nodes1 can be reparameterized into deterministic functions of their parameters and a ï¬xed noise distribution (Kingma & Welling, 2013; Rezende et al., 2014), has liberated researchers in the development of large complex stochastic architectures (e.g. Gregor et al., 2015). | 1611.00712#2 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 3 | Computing with discrete stochastic nodes still poses a signiï¬cant challenge for AD libraries. Deter- ministic discreteness can be relaxed and approximated reasonably well with sigmoidal functions or the softmax (see e.g., Grefenstette et al., 2015; Graves et al., 2016), but, if a distribution over discrete states is needed, there is no clear solution. There are well known unbiased estimators for the gradi1For our purposes a stochastic node of a computation graph is just a random variable whose distribution depends in some deterministic way on the values of the parent nodes.
1
Published as a conference paper at ICLR 2017
ents of the parameters of a discrete stochastic node from samples. While these can be made to work with AD, they involve special casing and deï¬ning surrogate objectives (Schulman et al., 2015), and even then they can have high variance. Still, reasoning about discrete computation comes naturally to humans, and so, despite the difï¬culty associated, many modern architectures incorporate discrete stochasticity (Mnih et al., 2014; Xu et al., 2015; KoËcisk´y et al., 2016). | 1611.00712#3 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 4 | This work is inspired by the observation that many architectures treat discrete nodes continuously, and gradients rich with counterfactual information are available for each of their possible states. We introduce a CONtinuous relaxation of disCRETE random variables, CONCRETE for short, which allow gradients to ï¬ow through their states. The Concrete distribution is a new parametric family of continuous distributions on the simplex with closed form densities. Sampling from the Concrete distribution is as simple as taking the softmax of logits perturbed by ï¬xed additive noise. This reparameterization means that Concrete stochastic nodes are quick to implement in a way that âjust worksâ with AD. Crucially, every discrete random variable corresponds to the zero temperature limit of a Concrete one. In this view optimizing an objective over an architecture with discrete stochastic nodes can be accomplished by gradient descent on the samples of the corresponding Concrete relaxation. When the objective depends, as in variational inference, on the log-probability of discrete nodes, the Concrete density is used during training in place of the discrete mass. At test time, the graph with discrete nodes is evaluated. | 1611.00712#4 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 5 | The paper is organized as follows. We provide a background on stochastic computation graphs and their optimization in Section 2. Section 3 reviews a reparameterization for discrete random vari- ables, introduces the Concrete distribution, and discusses its application as a relaxation. Section 4 reviews related work. In Section 5 we present results on a density estimation task and a structured prediction task on the MNIST and Omniglot datasets. In Appendices C and F we provide details on the practical implementation and use of Concrete random variables. When comparing the effec- tiveness of gradients obtained via Concrete relaxations to a state-of-the-art-method (VIMCO, Mnih & Rezende, 2016), we ï¬nd that they are competitiveâoccasionally outperforming and occasionally underperformingâall the while being implemented in an AD library without special casing.
2 BACKGROUND
2.1 OPTIMIZING STOCHASTIC COMPUTATION GRAPHS | 1611.00712#5 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 6 | 2 BACKGROUND
2.1 OPTIMIZING STOCHASTIC COMPUTATION GRAPHS
Stochastic computation graphs (SCGs) provide a formalism for specifying input-output mappings, potentially stochastic, with learnable parameters using directed acyclic graphs (see Schulman et al. (2015) for a review). The state of each non-input node in such a graph is obtained from the states of its parent nodes by either evaluating a deterministic function or sampling from a conditional distribution. Many training objectives in supervised, unsupervised, and reinforcement learning can be expressed in terms of SCGs. | 1611.00712#6 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 7 | To optimize an objective represented as a SCG, we need estimates of its parameter gradients. We will concentrate on graphs with some stochastic nodes (backpropagation covers the rest). For simplicity, we restrict our attention to graphs with a single stochastic node X. We can interpret the forward pass in the graph as ï¬rst sampling X from the conditional distribution pÏ(x) of the stochastic node given its parents, then evaluating a deterministic function fθ(x) at X. We can think of fθ(X) as a noisy objective, and we are interested in optimizing its expected value L(θ, Ï) = E Xâ¼pÏ(x)[fθ(X)] w.r.t. parameters θ, Ï.
In general, both the objective and its gradients are intractable. We will side-step this issue by esti- mating them with samples from pÏ(x). The gradient w.r.t. to the parameters θ has the form
# âθE
Xâ¼pÏ(x)[fθ(X)] = E
Xâ¼pÏ(x)[ (1)
âθL(θ, Ï) =
# âθfθ(X)]
and can be easily estimated using Monte Carlo sampling: | 1611.00712#7 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 8 | âθL(θ, Ï) =
# âθfθ(X)]
and can be easily estimated using Monte Carlo sampling:
f 1 8 8 VoL(0,8) ~ =) _, Volo(X*), (2)
where X* ~ p(x) iid. The more challenging task is to compute the gradient @ of pg(x). The expression obtained by differentiating the expected objective,
# w.r.t. the parameters
âÏL(θ, Ï) = âÏ pÏ(x)fθ(x) dx = fθ(x) âÏpÏ(x) dx, (3)
2
Published as a conference paper at ICLR 2017
does not have the form of an expectation w.r.t. x and thus does not directly lead to a Monte Carlo gradient estimator. However, there are two ways of getting around this difï¬culty which lead to the two classes of estimators we will now discuss.
2.2 SCORE FUNCTION ESTIMATORS | 1611.00712#8 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 9 | 2.2 SCORE FUNCTION ESTIMATORS
The score function estimator (SFE, Fu, 2006), also known as the REINFORCE (Williams, 1992) or likelihood-ratio estimator (Glynn, 1990), is based on the identity ∇φ pφ(x) = pφ(x) ∇φ log pφ(x), which allows the gradient in Eq. 3 to be written as an expectation: ∇φ L(θ, φ) = E X∼pφ(x)[fθ(X) ∇φ log pφ(X)]. Estimating this expectation using naive Monte Carlo gives the estimator
∇φ L(θ, φ) ≈ (1/S) Σ s=1..S fθ(X^s) ∇φ log pφ(X^s),   (4)
where X s pÏ(x) i.i.d. This is a very general estimator that is applicable whenever log pÏ(x) is differentiable w.r.t. Ï. As it does not require fθ(x) to be differentiable or even continuous as a function of x, the SFE can be used with both discrete and continuous random variables. | 1611.00712#9 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
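The score function estimator described in the row above averages f(X) times the score of the sampling distribution. A self-contained sketch on a small categorical example, comparing the Monte Carlo estimate against the analytic gradient; the logits and the choice of f are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([0.5, 0.0, -0.5])
p = np.exp(logits - logits.max()); p /= p.sum()
f = np.array([1.0, 0.0, 0.0])               # f(x) = 1 if x == 0 else 0

# REINFORCE estimate of d/d logits of E_{X~p}[f(X)]: average f(X) * d log p(X)/d logits.
xs = rng.choice(3, p=p, size=20_000)
score = np.eye(3)[xs] - p                   # rows: d log p(x) / d logits for softmax
estimate = (f[xs][:, None] * score).mean(axis=0)

exact = p[0] * (np.eye(3)[0] - p)           # analytic gradient of p_0 w.r.t. logits
print(estimate)
print(exact)
```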
1611.00712 | 10 | Though the basic version of the estimator can suffer from high variance, various variance reduc- tion techniques can be used to make the estimator much more effective (Greensmith et al., 2004). Baselines are the most important and widely used of these techniques (Williams, 1992). A number of score function estimators have been developed in machine learning (Paisley et al., 2012; Gregor et al., 2013; Ranganath et al., 2014; Mnih & Gregor, 2014; Titsias & L´azaro-Gredilla, 2015; Gu et al., 2016), which differ primarily in the variance reduction techniques used.
2.3 REPARAMETERIZATION TRICK | 1611.00712#10 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 11 | 2.3 REPARAMETERIZATION TRICK
In many cases we can sample from pφ(x) by first sampling Z from some fixed distribution q(z) and then transforming the sample using some function gφ(z). For example, a sample from Normal(µ, σ²) can be obtained by sampling Z from the standard form of the distribution Normal(0, 1) and then transforming it using gµ,σ(Z) = µ + σZ. This two-stage reformulation of the sampling process, called the reparameterization trick, allows us to transfer the dependence on φ from p into f by writing fθ(x) = fθ(gφ(z)) for x = gφ(z), making it possible to reduce the problem of estimating the gradient w.r.t. parameters of a distribution to the simpler problem of estimating the gradient w.r.t. parameters of a deterministic function.
Having reparameterized pÏ(x), we can now express the objective as an expectation w.r.t. q(z): Xâ¼pÏ(x)[fθ(X)] = E | 1611.00712#11 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
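As a quick numerical illustration of the two-stage sampling described in the chunk above (a sketch, not from the paper), the snippet below draws from Normal(µ, σ²) both directly and through gµ,σ(Z) = µ + σZ with Z ∼ Normal(0, 1); the two sample sets agree in distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.5, 0.7

x_direct = rng.normal(mu, sigma, size=100_000)   # sample Normal(mu, sigma^2) directly

z = rng.normal(0.0, 1.0, size=100_000)           # Z ~ Normal(0, 1), free of (mu, sigma)
x_reparam = mu + sigma * z                       # x = g_{mu,sigma}(Z)

print(x_direct.mean(), x_reparam.mean())         # both approx. mu
print(x_direct.std(), x_reparam.std())           # both approx. sigma
```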
1611.00712 | 12 | As q(z) does not depend on φ, we can estimate the gradient w.r.t. φ in exactly the same way we estimated the gradient w.r.t. θ in Eq. 1. Assuming differentiability of fθ(x) w.r.t. x and of gφ(z) w.r.t. φ and using the chain rule gives
∇φ L(θ, φ) = E_{Z∼q(z)}[∇φ fθ(gφ(Z))] = E_{Z∼q(z)}[fθ'(gφ(Z)) ∇φ gφ(Z)].   (6)
The reparameterization trick, introduced in the context of variational inference independently by Kingma & Welling (2014), Rezende et al. (2014), and Titsias & Lázaro-Gredilla (2014), is usually the estimator of choice when it is applicable. For continuous latent variables which are not directly reparameterizable, new hybrid estimators have also been developed, by combining partial reparameterizations with score function estimators (Ruiz et al., 2016; Naesseth et al., 2016).
2.4 APPLICATION: VARIATIONAL TRAINING OF LATENT VARIABLE MODELS | 1611.00712#12 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
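A minimal sketch of the resulting pathwise gradient estimator for the toy objective f(x) = x² under Normal(µ, σ²), with the chain rule written out by hand (an illustration under these assumptions, not the paper's implementation); the exact answer is available because E[X²] = µ² + σ².

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.5, 1.2
df = lambda x: 2.0 * x                 # derivative of f(x) = x^2

z = rng.normal(size=500_000)           # Z ~ q(z) = Normal(0, 1)
x = mu + sigma * z                     # x = g_{mu,sigma}(Z)

grad_mu = np.mean(df(x) * 1.0)         # chain rule: d g / d mu = 1
grad_sigma = np.mean(df(x) * z)        # chain rule: d g / d sigma = Z

print(grad_mu, 2.0 * mu)               # estimate vs. exact d/dmu E[X^2]
print(grad_sigma, 2.0 * sigma)         # estimate vs. exact d/dsigma E[X^2]
```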
1611.00712 | 13 | 2.4 APPLICATION: VARIATIONAL TRAINING OF LATENT VARIABLE MODELS
We will now see how the task of training latent variable models can be formulated in the SCG framework. Such models assume that each observation x is obtained by first sampling a vector of latent variables Z from the prior pθ(z) before sampling the observation itself from pθ(x | z). Thus the probability of observation x is pθ(x) = Σ_z pθ(z) pθ(x | z). Maximum likelihood training of such models is infeasible, because the log-likelihood (LL) objective L(θ) = log pθ(x) = log E_{Z∼pθ(z)}[pθ(x | Z)] is intractable due to the expectation being inside the log.
Figure 1: Visualization of sampling graphs for 3-ary discrete D ∼ Discrete(α) (panel a) and 3-ary Concrete X ∼ Concrete(α, λ) (panel b). White operations are deterministic, blue are stochastic, rounded are continuous, square discrete. The top node is an example state; brightness indicates a value in [0,1].
The multi-sample variational objective (Burda et al., 2016), | 1611.00712#13 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 14 | The log-likelihood log pθ(x) = log E_{Z∼pθ(z)}[pθ(x | Z)] is intractable due to the expectation being inside the log. The multi-sample variational objective (Burda et al., 2016),
Lm(θ, φ) = E_{Z^{1:m}∼qφ(z|x)} [ log ( (1/m) Σ_{i=1}^m pθ(Z^i, x) / qφ(Z^i | x) ) ],   (8)
provides a convenient alternative which has precisely the form we considered in Section 2.1. This approach relies on introducing an auxiliary distribution qφ(z | x) with its own parameters, which serves as an approximation to the intractable posterior pθ(z | x). The model is trained by jointly maximizing the objective w.r.t. the parameters of p and q. The number of samples m used inside the objective allows trading off the computational cost against the tightness of the bound. For m = 1, Lm(θ, φ) is the widely used evidence lower bound (ELBO, Hoffman et al., 2013) on log pθ(x), while for m > 1, it is known as the importance weighted bound (Burda et al., 2016).
3 THE CONCRETE DISTRIBUTION
3.1 DISCRETE RANDOM VARIABLES AND THE GUMBEL-MAX TRICK | 1611.00712#14 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
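A toy numerical check of the multi-sample objective in Eq. 8 (an illustration with assumed distributions, not from the paper): for the one-dimensional model p(z) = Normal(0, 1), p(x | z) = Normal(z, 1), the marginal log p(x) = log Normal(x; 0, 2) is known exactly, and a deliberately mismatched q(z | x) makes the tightening of the bound with m visible.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(3)
x = 1.3                                            # a single observation

def log_normal(v, mean, var):
    return -0.5 * (np.log(2.0 * np.pi * var) + (v - mean) ** 2 / var)

def L_m(m, n_runs=20_000):
    z = rng.normal(0.4 * x, np.sqrt(0.8), size=(n_runs, m))      # Z^i ~ q(z | x)
    log_w = (log_normal(z, 0.0, 1.0)                             # log p(Z^i)
             + log_normal(x, z, 1.0)                             # + log p(x | Z^i)
             - log_normal(z, 0.4 * x, 0.8))                      # - log q(Z^i | x)
    return np.mean(logsumexp(log_w, axis=1) - np.log(m))

print(L_m(1), L_m(5), L_m(50))        # nondecreasing lower bounds ...
print(log_normal(x, 0.0, 2.0))        # ... approaching log p(x)
```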
1611.00712 | 15 | 3 THE CONCRETE DISTRIBUTION
3.1 DISCRETE RANDOM VARIABLES AND THE GUMBEL-MAX TRICK
To motivate the construction of Concrete random variables, we review a method for sampling from discrete distributions called the Gumbel-Max trick (Luce, 1959; Yellott, 1977; Papandreou & Yuille, 2011; Hazan & Jaakkola, 2012; Maddison et al., 2014). We restrict ourselves to a representation of discrete states as vectors d ∈ {0, 1}^n that are one-hot, i.e. Σ_{k=1}^n dk = 1. This is a flexible representation in a computation graph; to achieve an integral representation take the inner product of d with (1, . . . , n), and to achieve a point mass representation in R^m take W d where W ∈ R^{m×n}. Consider an unnormalized parameterization (α1, . . . , αn) where αk ∈ (0, ∞) of a discrete distribution D ∼ Discrete(α). The Gumbel-Max trick proceeds as follows: sample Uk ∼ Uniform(0, 1) i.i.d. for each k, find the k that maximizes log αk − log(− log Uk), set Dk = 1 and the remaining Di = 0 for i ≠ k. Then
P(Dk = 1) = αk / (Σ_{i=1}^n αi)   for each k.   (9)
In other words, the sampling of a discrete random variable can be refactored into a deterministic function (componentwise addition followed by argmax) of the parameters log αk and fixed additive noise Gk = − log(− log Uk). | 1611.00712#15 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
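A short sketch of the Gumbel-Max trick of Eq. 9 (illustrative, not the paper's code): perturb the log-parameters with Gumbel noise, take the argmax, and check that the empirical frequencies match αk / Σi αi; the α values below reuse the example from Figure 2.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = np.array([2.0, 0.5, 1.0])                  # unnormalized probabilities

u = rng.uniform(size=(200_000, alpha.size))        # U_k ~ Uniform(0, 1) i.i.d.
g = -np.log(-np.log(u))                            # G_k = -log(-log U_k) ~ Gumbel
d = np.argmax(np.log(alpha) + g, axis=1)           # index of the winning one-hot state

freq = np.bincount(d, minlength=alpha.size) / d.size
print(freq)                        # approx. [0.571, 0.143, 0.286]
print(alpha / alpha.sum())         # alpha_k / sum_i alpha_i, as in Eq. 9
```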
1611.00712 | 16 | The apparently arbitrary choice of noise gives the trick its name, as − log(− log U) has a Gumbel distribution. This distribution features in extreme value theory (Gumbel, 1954) where it plays a central role similar to the Normal distribution: the Gumbel distribution is stable under max operations, and for some distributions, the order statistics (suitably normalized) of i.i.d. draws approach the Gumbel in distribution. The Gumbel can also be recognized as a log-transformed exponential random variable. So, the correctness of (9) also reduces to a well known result regarding the argmin of exponential random variables. See (Hazan et al., 2016) for a collection of related work, and particularly the chapter (Maddison, 2016) for a proof and generalization of this trick.
(a) λ = 0 (b) λ = 1/2 (c) λ = 1 (d) λ = 2 | 1611.00712#16 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 17 | (a) λ = 0 (b) λ = 1/2 (c) λ = 1 (d) λ = 2
Figure 2: A discrete distribution with unnormalized probabilities (α1, α2, α3) = (2, 0.5, 1) and three corresponding Concrete densities at increasing temperatures λ. Each triangle represents the set of points (y1, y2, y3) in the simplex Δ² = {y ∈ R³ | yk ∈ [0, 1], y1 + y2 + y3 = 1}. For λ = 0 the size of white circles represents the mass assigned to each vertex of the simplex under the discrete distribution. For λ ∈ {2, 1, 0.5} the intensity of the shading represents the value of pα,λ(y).
3.2 CONCRETE RANDOM VARIABLES | 1611.00712#17 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 18 | λ ∈ {2, 1, 0.5}
3.2 CONCRETE RANDOM VARIABLES
The derivative of the argmax is 0 everywhere except at the boundary of state changes, where it is undefined. For this reason the Gumbel-Max trick is not a suitable reparameterization for use in SCGs with AD. Here we introduce the Concrete distribution motivated by considering a graph, which is the same as Figure 1a up to a continuous relaxation of the argmax computation, see Figure 1b. This will ultimately allow the optimization of the parameters α via gradients. The argmax computation returns states on the vertices of the simplex Δ^{n−1} = {x ∈ R^n | xk ∈ [0, 1], Σ_{k=1}^n xk = 1}. The idea behind Concrete random variables is to relax the state of a discrete variable from the vertices into the interior, where it is a random probability vector, that is, a vector of numbers between 0 and 1 that sum to 1. To sample a Concrete random variable X ∈ Δ^{n−1} at temperature λ ∈ (0, ∞) with parameters αk ∈ (0, ∞), sample Gk ∼ Gumbel i.i.d. and set
Xk = exp((log αk + Gk)/λ) / Σ_{i=1}^n exp((log αi + Gi)/λ),   k = 1, . . . , n.   (10) | 1611.00712#18 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
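A sketch of the sampler in Eq. 10 (illustrative): the same Gumbel perturbation as in the Gumbel-Max trick, with the argmax replaced by a tempered softmax; lowering the temperature pushes samples toward the one-hot vertices of the simplex.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = np.array([2.0, 0.5, 1.0])

def concrete_sample(alpha, lam, n):
    g = rng.gumbel(size=(n, alpha.size))               # G_k ~ Gumbel i.i.d.
    logits = (np.log(alpha) + g) / lam
    logits -= logits.max(axis=1, keepdims=True)        # stabilize the softmax
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)            # Eq. 10

for lam in [2.0, 0.5, 0.05]:
    print(lam, np.round(concrete_sample(alpha, lam, 3), 3))
```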
1611.00712 | 19 | To sample a Concrete random variable X ∈ Δ^{n−1} at temperature λ ∈ (0, ∞) with parameters αk ∈ (0, ∞), sample Gk ∼ Gumbel i.i.d. and set
Xk = exp((log αk + Gk)/λ) / Σ_{i=1}^n exp((log αi + Gi)/λ),   k = 1, . . . , n.   (10)
The softmax computation of (10) smoothly approaches the discrete argmax computation as λ → 0 while preserving the relative order of the Gumbels log αk + Gk. So, imagine making a series of forward passes on the graphs of Figure 1. Both graphs return a stochastic value for each forward pass, but for smaller temperatures the outputs of Figure 1b become more discrete and eventually indistinguishable from a typical forward pass of Figure 1a.
The distribution of X sampled via (10) has a closed form density on the simplex. Because there may be other ways to sample a Concrete random variable, we take the density to be its definition.
Definition 1 (Concrete Random Variables). Let α ∈ (0, ∞)^n and λ ∈ (0, ∞). X ∈ Δ^{n−1} has a Concrete distribution X ∼ Concrete(α, λ) with location α and temperature λ, if its density is
pα,λ(x) = (n − 1)! λ^{n−1} Π_{k=1}^n ( αk xk^{−λ−1} / Σ_{i=1}^n αi xi^{−λ} ).   (11) | 1611.00712#19 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
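A log-space sketch of the density in Eq. 11 (illustrative); evaluating the log-density directly avoids underflow when components of x are close to 0.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def concrete_log_density(x, alpha, lam):
    n = alpha.size
    log_norm = gammaln(n) + (n - 1) * np.log(lam)            # log[(n-1)! * lam^(n-1)]
    log_num = np.log(alpha) - (lam + 1.0) * np.log(x)        # log(alpha_k * x_k^(-lam-1))
    log_den = logsumexp(np.log(alpha) - lam * np.log(x))     # log sum_i alpha_i x_i^(-lam)
    return log_norm + np.sum(log_num - log_den)              # Eq. 11 in log space

alpha = np.array([2.0, 0.5, 1.0])
x = np.array([0.6, 0.1, 0.3])                                # a point in the simplex
print(concrete_log_density(x, alpha, lam=1.0))
```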
1611.00712 | 20 | X ∈ Δ^{n−1} has a Concrete distribution X ∼ Concrete(α, λ) with location α and temperature λ, if its density is
pα,λ(x) = (n − 1)! λ^{n−1} Π_{k=1}^n ( αk xk^{−λ−1} / Σ_{i=1}^n αi xi^{−λ} ).   (11)
Proposition 1 lists a few properties of the Concrete distribution. (a) is confirmation that our definition corresponds to the sampling routine (10). (b) confirms that rounding a Concrete random variable results in the discrete random variable whose distribution is described by the logits log αk, (c) confirms that taking the zero temperature limit of a Concrete random variable is the same as rounding. Finally, (d) is a convexity result on the density. We prove these results in Appendix A.
Proposition 1 (Some Properties of Concrete Random Variables). Let X ∼ Concrete(α, λ) with temperature λ ∈ (0, ∞) and location parameters α ∈ (0, ∞)^n. Then
(a) (Reparameterization) If Gk ∼ Gumbel i.i.d., then Xk is equal in distribution to exp((log αk + Gk)/λ) / Σ_{i=1}^n exp((log αi + Gi)/λ),
(b) (Rounding) P(Xk > Xi for i ≠ k) = αk / (Σ_{i=1}^n αi),
(c) (Zero temperature) P(lim_{λ→0} Xk = 1) = αk / (Σ_{i=1}^n αi), | 1611.00712#20 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
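A quick empirical check of Proposition 1(b) (illustrative): rounding a Concrete sample, i.e. taking its argmax, recovers the discrete distribution with probabilities αk / Σi αi, at any temperature.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = np.array([2.0, 0.5, 1.0])
lam = 0.7                                            # any fixed temperature

g = rng.gumbel(size=(200_000, alpha.size))
x = np.exp((np.log(alpha) + g) / lam)
x /= x.sum(axis=1, keepdims=True)                    # Concrete samples, as in Eq. 10

winners = np.argmax(x, axis=1)                       # "rounding" the relaxed samples
print(np.bincount(winners, minlength=alpha.size) / winners.size)
print(alpha / alpha.sum())                           # Proposition 1(b)
```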
1611.00712 | 21 | (a) (Reparameterization) If Gk ∼ Gumbel i.i.d., then Xk is equal in distribution to exp((log αk + Gk)/λ) / Σ_{i=1}^n exp((log αi + Gi)/λ),
(b) (Rounding) P(Xk > Xi for i ≠ k) = αk / (Σ_{i=1}^n αi),
(c) (Zero temperature) P(lim_{λ→0} Xk = 1) = αk / (Σ_{i=1}^n αi),
(a) λ = 0 (b) λ = 1/2 (c) λ = 1 (d) λ = 2
Figure 3: A visualization of the binary special case. (a) shows the discrete trick, which works by passing a noisy logit through the unit step function. (b), (c), (d) show Concrete relaxations; the horizontal blue densities show the density of the input distribution and the vertical densities show the corresponding Binary Concrete density on (0, 1) for varying λ.
(d) (Convex eventually) If λ ≤ (n − 1)^{−1}, then pα,λ(x) is log-convex in x. | 1611.00712#21 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 22 | (d) (Convex eventually) If λ ≤ (n − 1)^{−1}, then pα,λ(x) is log-convex in x.
The binary case of the Gumbel-Max trick simplifies to passing additive noise through a step function. The corresponding Concrete relaxation is implemented by passing additive noise through a sigmoid (see Figure 3). We cover this more thoroughly in Appendix B, along with a cheat sheet (Appendix F) on the density and implementation of all the random variables discussed in this work.
3.3 CONCRETE RELAXATIONS
Concrete random variables may have some intrinsic value, but we investigate them simply as surrogates for optimizing a SCG with discrete nodes. When it is computationally feasible to integrate over the discreteness, that will always be a better choice. Thus, we consider the use case of optimizing a large graph with discrete stochastic nodes from samples. | 1611.00712#22 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
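A sketch consistent with the binary description above, under the assumption that the Binary Concrete of Appendix B adds Logistic noise to the log-odds log α and passes the sum through a tempered sigmoid (the appendix itself is not reproduced in these chunks, so treat the exact parameterization as an assumption).

```python
import numpy as np

rng = np.random.default_rng(7)

def binary_concrete_sample(alpha, lam, n):
    u = rng.uniform(size=n)
    logistic = np.log(u) - np.log(1.0 - u)                  # Logistic(0, 1) noise
    return 1.0 / (1.0 + np.exp(-(np.log(alpha) + logistic) / lam))

for lam in [2.0, 0.5, 0.1]:
    print(lam, np.round(binary_concrete_sample(alpha=2.0, lam=lam, n=4), 3))
# As lam -> 0 this approaches the unit step applied to the noisy logit,
# which equals 1 with probability alpha / (1 + alpha).
```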
1611.00712 | 23 | First, we outline our proposal for how to use Concrete relaxations by considering a variational autoencoder with a single discrete latent variable. Let Pa(d) be the mass function of some n-dimensional one-hot discrete random variable with unnormalized probabilities a ∈ (0, ∞)^n and pθ(x | d) some distribution over a data point x given d ∈ {0, 1}^n one-hot. The generative model is then pθ,a(x, d) = pθ(x | d) Pa(d). Let Qα(d | x) be an approximating posterior over d ∈ {0, 1}^n one-hot whose unnormalized probabilities α(x) ∈ (0, ∞)^n depend on x. All together the variational lowerbound we care about stochastically optimizing is
L1(θ, a, α) = E_{D∼Qα(d|x)} [ log ( pθ(x | D) Pa(D) / Qα(D | x) ) ],   (12) | 1611.00712#23 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
1611.00712 | 24 | with respect to θ, a, and any parameters of α. First, we relax the stochastic computation D ∼ Discrete(α(x)) with Z ∼ Concrete(α(x), λ1), with density qα,λ1(z | x). Replacing D with Z in Eq. 12 will result in a non-interpretable objective, which does not necessarily lowerbound log p(x), because E_{Z∼qα,λ1(z|x)}[log Qα(Z | x) / Pa(Z)] is not a KL divergence. Thus we propose "relaxing" the terms Pa(d) and Qα(d | x) to reflect the true sampling distribution. Thus, the relaxed objective is:
L1(θ, a, α)  is relaxed to  E_{Z∼qα,λ1(z|x)} [ log ( pθ(x | Z) pa,λ2(Z) / qα,λ1(Z | x) ) ],   (13)
where pa,λ2(z) is a Concrete density with location a and temperature λ2. At test time we evaluate the discrete lowerbound L1(θ, a, α). Naively implementing Eq. 13 will result in numerical issues. We discuss this and other details in Appendix C. | 1611.00712#24 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |
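A single-sample sketch of the relaxed objective in Eq. 13 under stated assumptions: the decoder log-likelihood below is a hypothetical stand-in (the paper uses neural networks), the locations and temperatures are arbitrary, and only the overall structure (sample from the relaxed posterior, score it under the relaxed prior and relaxed posterior densities) follows the text above. At test time one would evaluate the discrete bound of Eq. 12 instead.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

rng = np.random.default_rng(8)

def concrete_log_density(x, alpha, lam):
    n = alpha.size
    return (gammaln(n) + (n - 1) * np.log(lam)
            + np.sum(np.log(alpha) - (lam + 1.0) * np.log(x)
                     - logsumexp(np.log(alpha) - lam * np.log(x))))

# Hypothetical ingredients for one data point x (placeholders, not the paper's networks):
alpha_post = np.array([3.0, 1.0, 1.0])     # alpha(x): relaxed posterior location
a_prior = np.array([1.0, 1.0, 1.0])        # a: prior location
lam1, lam2 = 2.0 / 3.0, 0.5                # posterior / prior temperatures
log_p_x_given_z = lambda z: float(z @ np.array([-1.0, -2.0, -0.5]))   # stand-in decoder

g = rng.gumbel(size=alpha_post.size)
z = np.exp((np.log(alpha_post) + g) / lam1)
z /= z.sum()                                                  # Z ~ Concrete(alpha(x), lam1)

relaxed_elbo = (log_p_x_given_z(z)
                + concrete_log_density(z, a_prior, lam2)      # relaxed prior term
                - concrete_log_density(z, alpha_post, lam1))  # relaxed posterior term
print(relaxed_elbo)
```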
1611.00712 | 25 | Thus, the basic paradigm we propose is the following: during training replace every discrete node with a Concrete node at some fixed temperature (or with an annealing schedule). The graphs are identical up to the softmax / argmax computations, so the parameters of the relaxed graph and discrete graph are the same. When an objective depends on the log-probability of discrete variables in the SCG, as the variational lowerbound does, we propose that the log-probability terms are also "relaxed" to represent the true distribution of the relaxed node. At test time the original discrete loss is evaluated. This is possible, because the discretization of any Concrete distribution has a closed form mass function, and the relaxation of any discrete distribution into a Concrete distribution has a closed form density. This is not always possible. For example, the multinomial probit model (the Gumbel-Max trick with Gaussians replacing Gumbels) does not have a closed form mass.
The success of Concrete relaxations will depend on the choice of temperature during training. It is important that the relaxed nodes are not able to represent a precise real valued mode in the interior
| 1611.00712#25 | The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables | The reparameterization trick enables optimizing large scale stochastic
computation graphs via gradient descent. The essence of the trick is to
refactor each stochastic node into a differentiable function of its parameters
and a random variable with fixed distribution. After refactoring, the gradients
of the loss propagated by the chain rule through the graph are low variance
unbiased estimators of the gradients of the expected loss. While many
continuous random variables have such reparameterizations, discrete random
variables lack useful reparameterizations due to the discontinuous nature of
discrete states. In this work we introduce Concrete random
variables---continuous relaxations of discrete random variables. The Concrete
distribution is a new family of distributions with closed form densities and a
simple reparameterization. Whenever a discrete stochastic node of a computation
graph can be refactored into a one-hot bit representation that is treated
continuously, Concrete stochastic nodes can be used with automatic
differentiation to produce low-variance biased gradients of objectives
(including objectives that depend on the log-probability of latent stochastic
nodes) on the corresponding discrete graph. We demonstrate the effectiveness of
Concrete relaxations on density estimation and structured prediction tasks
using neural networks. | http://arxiv.org/pdf/1611.00712 | Chris J. Maddison, Andriy Mnih, Yee Whye Teh | cs.LG, stat.ML | null | null | cs.LG | 20161102 | 20170305 | [
{
"id": "1610.05683"
},
{
"id": "1502.04623"
},
{
"id": "1610.02287"
}
] |