1611.01224#62
Sample Efficient Actor-Critic with Experience Replay
E_{a_{t+1}∼μ} [ ρ̄_{t+1} Q(x_{t+1}, a_{t+1}) ] = E_{b∼π} [ Q(x_{t+1}, b) ] − E_{b∼π} [ ((ρ_{t+1}(b) − c)/ρ_{t+1}(b))_+ Q(x_{t+1}, b) ].

By adding and subtracting the two sides of Equation (23) inside the summand of Equation (20), we have

BQ(x, a) = E_μ [ Σ_{t≥0} γ^t (∏_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π} [ Q(x_{t+1}, b) ] − γ ρ̄_{t+1} Q(x_{t+1}, a_{t+1}) ) ]
= E_μ [ Σ_{t≥0} γ^t (∏_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π} [ Q(x_{t+1}, b) ] − Q(x_t, a_t) ) ] + Q(x, a) = RQ(x, a).
1611.01224#61
1611.01224#63
1611.01224
[ "1602.01783" ]
1611.01224#63
Sample Efficient Actor-Critic with Experience Replay
In the remainder of this appendix, we show that B generalizes both the Bellman operator and importance sampling. First, we reproduce the definition of B:

BQ(x, a) = E_μ [ Σ_{t≥0} γ^t (∏_{i=1}^t ρ̄_i) ( r_t + γ E_{b∼π} [ ((ρ_{t+1}(b) − c)/ρ_{t+1}(b))_+ Q(x_{t+1}, b) ] ) ].

When c = 0, we have that ρ̄_i = 0 ∀i. Therefore only the first summand (t = 0) of the sum remains:
1611.01224#62
1611.01224#64
1611.01224
[ "1602.01783" ]
1611.01224#64
Sample Efficient Actor-Critic with Experience Replay
BQ(x, a) = E_μ [ r_0 + γ E_{b∼π} [ Q(x_1, b) ] ], which is the Bellman operator. When c = ∞, the compensation term disappears and ρ̄_i = ρ_i:

BQ(x, a) = E_μ [ Σ_{t≥0} γ^t (∏_{i=1}^t ρ_i) r_t ].

In this case B is the same operator defined by importance sampling.

# D DERIVATION OF V^target

By using the truncation and bias correction trick, we can derive the following:

V^π(x_t) = E_{a_t∼μ} [ min{1, π(a_t|x_t)/μ(a_t|x_t)} Q^π(x_t, a_t) ] + E_{a∼π} [ ((ρ_t(a) − 1)/ρ_t(a))_+ Q^π(x_t, a) ].

We, however, cannot use the above equation as a target as we do not have access to Q^π
1611.01224#63
1611.01224#65
1611.01224
[ "1602.01783" ]
1611.01224#65
Sample Efficient Actor-Critic with Experience Replay
. To derive a target, we can take a Monte Carlo approximation of the first expectation in the RHS of the above equation and replace the first occurrence of Q^π with Q^ret and the second with our current neural net approximation Q_{θv}(x_t, a):

Ṽ^target(x_t) = min{1, π(a_t|x_t)/μ(a_t|x_t)} Q^ret(x_t, a_t) + E_{a∼π} [ ((ρ_t(a) − 1)/ρ_t(a))_+ Q_{θv}(x_t, a) ].   (24)
1611.01224#64
1611.01224#66
1611.01224
[ "1602.01783" ]
1611.01224#66
Sample Efficient Actor-Critic with Experience Replay
Through the truncation and bias correction trick again, we have the following identity:

E_{a∼π} [ Q_{θv}(x_t, a) ] = E_{a_t∼μ} [ min{1, ρ_t(a_t)} Q_{θv}(x_t, a_t) ] + E_{a∼π} [ ((ρ_t(a) − 1)/ρ_t(a))_+ Q_{θv}(x_t, a) ].   (25)

Adding and subtracting both sides of Equation (25) to the RHS of (24) while taking a Monte Carlo approximation, we arrive at V^target(x_t):

V^target(x_t) = min{1, π(a_t|x_t)/μ(a_t|x_t)} (Q^ret(x_t, a_t) − Q_{θv}(x_t, a_t)) + V_{θv}(x_t).

E CONTINUOUS CONTROL EXPERIMENTS

E.1 DESCRIPTION OF THE CONTINUOUS CONTROL PROBLEMS

Our continuous control tasks were simulated using the MuJoCo physics engine (Todorov et al. (2012)). For all experiments we considered an episodic setup with an episode length of T = 500 steps and a discount factor of 0.99.

Cartpole swingup This is an instance of the classic cart-pole swing-up task. It consists of a pole attached to a cart running on a finite track. The agent is required to balance the pole near the center of the track by applying a force to the cart only. An episode starts with the pole at a random angle and zero velocity. A reward of zero is given except when the pole is approximately upright (within ±0.05), for a track length of 2.4. The observations include position and velocity of the cart, angle and angular velocity of the pole, a sine/cosine of the angle, the position of the tip of the pole, and Cartesian velocities of the pole. The dimension of the action space is 1.
1611.01224#65
1611.01224#67
1611.01224
[ "1602.01783" ]
1611.01224#67
Sample Efficient Actor-Critic with Experience Replay
Reacher3 The agent needs to control a planar 3-link robotic arm in order to minimize the distance between the end effector of the arm and a target. Both arm and target position are chosen randomly at the beginning of each episode. The reward is zero except when the tip of the arm is within 0.05 of the target, where it is one. The 8-dimensional observation consists of the angles and angular velocity of all joints as well as the displacement between target and the end effector of the arm. The 3-dimensional actions are the torques applied to the joints.

Cheetah The Half-Cheetah (Wawrzyński (2009); Heess et al. (2015)) is a planar locomotion task where the agent is required to control a 9-DoF cheetah-like body (in the vertical plane) to move in the direction of the x-axis as quickly as possible. The reward is given by the velocity along the x-axis and a control cost: r = v_x − 0.1‖a‖²
1611.01224#66
1611.01224#68
1611.01224
[ "1602.01783" ]
1611.01224#68
Sample Efficient Actor-Critic with Experience Replay
. The observation vector consists of the z-position of the torso and its x, z velocities as well as the joint angles and angular velocities. The action dimension is 6.

Fish The goal of this task is to control a 13-DoF fish-like body to swim to a random target in 3D space. The reward is given by the distance between the head of the fish and the target, a small penalty for the body not being upright, and a control cost. At the beginning of an episode the fish is initialized facing in a random direction relative to the target. The 24-dimensional observation is given by the displacement between the fish and the target projected onto the torso coordinate frame, the joint angles and velocities, the cosine of the angle between the z-axis of the torso and the world z-axis, and the velocities of the torso in the torso coordinate frame. The 5-dimensional actions control the position of the side fins and the tail.

Walker The 9-DoF planar walker is inspired by Schulman et al. (2015a) and is required to move forward along the x-axis as quickly as possible without falling. The reward consists of the x-velocity of the torso, a quadratic control cost, and terms that penalize deviations of the torso from the preferred height and orientation (i.e. terms that encourage the walker to stay standing and upright). The 24-dimensional observation includes the torso height, velocities of all DoFs, as well as sines and cosines of all body orientations in the x-z plane. The 6-dimensional action controls the torques applied at the joints. Episodes are terminated early with a negative reward when the torso exceeds upper and lower limits on its height and orientation.

Humanoid The humanoid is a 27 degrees-of-freedom body with 21 actuators (21 action dimensions).
1611.01224#67
1611.01224#69
1611.01224
[ "1602.01783" ]
1611.01224#69
Sample Efficient Actor-Critic with Experience Replay
It is initialized lying on the ground in a random configuration and the task requires it to achieve a standing position. The reward function penalizes deviations from the height of the head when standing, and includes additional terms that encourage upright standing, as well as a quadratic action penalty. The 94-dimensional observation contains information about joint angles and velocities and several derived features reflecting the body's pose.

E.2 UPDATE EQUATIONS OF THE BASELINE TIS

The baseline TIS follows the following update equations,

updates to the policy: min{5, (∏_{i=0}^{k−1} ρ_{t+i})} [ Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V_{θv}(x_{t+k}) − V_{θv}(x_t) ] ∇_θ log π_θ(a_t|x_t),

updates to the value: min{5, (∏_{i=0}^{k−1} ρ_{t+i})} [ Σ_{i=0}^{k−1} γ^i r_{t+i} + γ^k V_{θv}(x_{t+k}) − V_{θv}(x_t) ] ∇_{θv} V_{θv}(x_t).

The baseline Trust-TIS is appropriately modified according to the trust region update described in Section B.3.

E.3 SENSITIVITY ANALYSIS

In this section, we assess the sensitivity of ACER to hyper-parameters. In Figures 5 and 6, we show, for each game, the final performance of our ACER agent versus the choice of learning rates, and the trust region constraint δ respectively. Note, as we are doing random hyper-parameter search, each learning rate is associated with a random δ and vice versa.
1611.01224#68
1611.01224#70
1611.01224
[ "1602.01783" ]
1611.01224#70
Sample Efficient Actor-Critic with Experience Replay
It is therefore difficult to tease out the effect of either hyper-parameter independently.

We observe, however, that ACER is not very sensitive to the hyper-parameters overall. In addition, smaller δ's do not seem to adversely affect the final performance while larger δ's do in domains of higher action dimensionality. Similarly, smaller learning rates perform well while bigger learning rates tend to hurt final performance in domains of higher action dimensionality.

[Figure 5: panels for Fish, Walker2D, Cheetah, Cartpole, Reacher3, and Humanoid plotting cumulative reward against log learning rate.]
1611.01224#69
1611.01224#71
1611.01224
[ "1602.01783" ]
1611.01224#71
Sample Efficient Actor-Critic with Experience Replay
Figure 5: Log learning rate vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 log learning rates considered. Note that each learning rate is associated with a different δ as a consequence of random search over hyper-parameters.

[Figure 6: panels for Fish, Walker2D, Cheetah, Cartpole, Reacher3, and Humanoid plotting cumulative reward against the trust region constraint δ.]

Figure 6: Trust region constraint (δ) vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 trust region constraints (δ) searched over. Note that each δ is associated with a different learning rate as a consequence of random search over hyper-parameters.
1611.01224#70
1611.01224#72
1611.01224
[ "1602.01783" ]
1611.01224#72
Sample Efficient Actor-Critic with Experience Replay
# E.4 EXPERIMENTAL SETUP OF ABLATION ANALYSIS

For the ablation analysis, we use the same experimental setup as in the continuous control experiments while removing one component at a time.

To evaluate the effectiveness of Retrace/Q(λ) with off-policy correction, we replace both with importance sampling based estimates (following Degris et al. (2012)), which can be expressed recursively: R_t = r_t + γ ρ_{t+1} R_{t+1}.
1611.01224#71
1611.01224#73
1611.01224
[ "1602.01783" ]
1611.01224#73
Sample Efficient Actor-Critic with Experience Replay
To evaluate the Stochastic Dueling Networks, we replace it with two separate networks: one computing the state values and the other Q values. Given Q^ret(x_t, a_t), the naive way of estimating the state values is to use the following update rule:

[ ρ_t Q^ret(x_t, a_t) − V_{θv}(x_t) ] ∇_{θv} V_{θv}(x_t).

The above update rule, however, suffers from high variance. We consider instead the following update rule:
1611.01224#72
1611.01224#74
1611.01224
[ "1602.01783" ]
1611.01224#74
Sample Efficient Actor-Critic with Experience Replay
ρ_t [ Q^ret(x_t, a_t) − V_{θv}(x_t) ] ∇_{θv} V_{θv}(x_t),

which has markedly lower variance. We update our Q estimates as before. To evaluate the effects of the truncation and bias correction trick, we change our c parameter (see Equation (16)) to ∞.
1611.01224#73
1611.01224
[ "1602.01783" ]
1611.01211#0
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear

Zachary C. Lipton1,2,3, Kamyar Azizzadenesheli4, Abhishek Kumar3, Lihong Li5, Jianfeng Gao6, Li Deng7
Carnegie Mellon University1, Amazon AI2, University of California, San Diego3, University of California, Irvine4, Google5, Microsoft Research6, Citadel7
[email protected], [email protected], [email protected], {lihongli, jfgao, deng}@microsoft.com
1611.01211#1
1611.01211
[ "1802.04412" ]
1611.01211#1
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# March 1, 2022

# Abstract

Many practical environments contain catastrophic states that an optimal agent would visit infrequently or never. Even on toy problems, Deep Reinforcement Learning (DRL) agents tend to periodically revisit these states upon forgetting their existence under a new policy. We introduce intrinsic fear (IF), a learned reward shaping that guards DRL agents against periodic catastrophes. IF agents possess a fear model trained to predict the probability of imminent catastrophe. This score is then used to penalize the Q-learning objective. Our theoretical analysis bounds the reduction in average return due to learning on the perturbed objective. We also prove robustness to classification errors. As a bonus, IF models tend to learn faster, owing to reward shaping. Experiments demonstrate that intrinsic-fear DQNs solve otherwise pathological environments and improve on several Atari games.
1611.01211#0
1611.01211#2
1611.01211
[ "1802.04412" ]
1611.01211#2
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# Introduction

Following the success of deep reinforcement learning (DRL) on Atari games [22] and the board game of Go [29], researchers are increasingly exploring practical applications. Some investigated applications include robotics [17], dialogue systems [9, 19], energy management [25], and self-driving cars [27]. Amid this push to apply DRL, we might ask, can we trust these agents in the wild? Agents acting in society may cause harm. A self-driving car might hit pedestrians and a domestic robot might injure a child. Agents might also cause self-injury, and while Atari lives lost are inconsequential, robots are expensive. Unfortunately, it may not be feasible to prevent all catastrophes without requiring extensive prior knowledge [10]. Moreover, for typical DQNs, providing large negative rewards does not solve the problem: as soon as the catastrophic trajectories are flushed from the replay buffer, the updated Q-function ceases to discourage revisiting these states. In this paper, we define avoidable catastrophes as states that prior knowledge dictates an optimal policy should visit rarely or never. Additionally, we define danger states: those from which a catastrophic state can
1611.01211#1
1611.01211#3
1611.01211
[ "1802.04412" ]
1611.01211#3
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
be reached in a small number of steps, and assume that the optimal policy does visit the danger states rarely or never. The notion of a danger state might seem odd absent any assumptions about the transition function. With a fully-connected transition matrix, all states are danger states. However, physical environments are not fully connected. A car cannot be parked this second, underwater one second later. This work primarily addresses how we might prevent DRL agents from perpetually making the same mistakes. As a bonus, we show that the prior knowledge that catastrophic states should be avoided accelerates learning. Our experiments show that even on simple toy problems, the classic deep Q-network (DQN) algorithm fails badly, repeatedly visiting catastrophic states so long as they continue to learn. This poses a formidable obstacle to using DQNs in the real world. How can we trust a DRL-based agent that was doomed to periodically experience catastrophes, just to remember that they exist? Imagine a self-driving car that had to periodically hit a few pedestrians to remember that it is undesirable. In the tabular setting, an RL agent never forgets the learned dynamics of its environment, even as its policy evolves. Moreover, when the Markovian assumption holds, convergence to a globally optimal policy is guaranteed. However, the tabular approach becomes infeasible in high-dimensional, continuous state spaces. The trouble for DQNs owes to the use of function approximation [24]. When training a DQN, we successively update a neural network based on experiences. These experiences might be sampled in an online fashion, from a trailing window (experience replay buffer), or uniformly from all past experiences. Regardless of which mode we use to train the network, eventually, states that a learned policy never encounters will come to form an infinitesimally small region of the training distribution. At such times, our networks suffer the well-known problem of catastrophic forgetting [21, 20]. Nothing prevents the DQN's policy from drifting back towards one that revisits forgotten catastrophic mistakes. We illustrate the brittleness of modern DRL algorithms with a simple pathological problem called Adventure Seeker. This problem consists of a one-dimensional continuous state, two actions, simple dynamics, and admits an analytic solution. Nevertheless, the DQN fails. We then show that similar dynamics exist in the classic RL environment Cart-Pole. To combat these problems, we propose the intrinsic fear (IF) algorithm.
1611.01211#2
1611.01211#4
1611.01211
[ "1802.04412" ]
1611.01211#4
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
In this approach, we train a supervised fear model that predicts which states are likely to lead to a catastrophe within k_r steps. The output of the fear model (a probability), scaled by a fear factor, penalizes the Q-learning objective. Crucially, the fear model maintains buffers of both safe and danger states. This model never forgets danger states, which is possible due to the infrequency of catastrophes. We validate the approach both empirically and theoretically. Our experiments address Adventure Seeker, Cartpole, and several Atari games.
1611.01211#3
1611.01211#5
1611.01211
[ "1802.04412" ]
1611.01211#5
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
In these environments, we label every lost life as a catastrophe. On the toy environments, IF agents learn to avoid catastrophe indefinitely. In Seaquest experiments, the IF agent achieves higher reward, and in Asteroids, the IF agent achieves both higher reward and fewer catastrophes. The improvement on Freeway is most dramatic. We also make the following theoretical contributions: First, we prove that when the reward is bounded and the optimal policy rarely visits the danger states, an optimal policy learned on the perturbed reward function has approximately the same return as the optimal policy learned on the original value function. Second, we prove that our method is robust to noise in the danger model.
1611.01211#4
1611.01211#6
1611.01211
[ "1802.04412" ]
1611.01211#6
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# 2 Intrinsic fear

An agent interacts with its environment via a Markov decision process, or MDP, (S, A, T, R, γ). At each step t, the agent observes a state s ∈ S and then chooses an action a ∈ A according to its policy π. The environment then transitions to state s_{t+1} ∈ S according to transition dynamics T(s_{t+1}|s_t, a_t) and generates a reward r_t with expectation R(s, a). This cycle continues until each episode terminates. An agent seeks to maximize the cumulative discounted return
1611.01211#5
1611.01211#7
1611.01211
[ "1802.04412" ]
1611.01211#7
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Σ_t γ^t r_t. Temporal-difference methods [31] like Q-learning [33] model the Q-function, which gives the optimal discounted total reward of a state-action pair. Problems of practical interest tend to have large state spaces, thus the Q-function is typically approximated by parametric models such as neural networks. In Q-learning with function approximation, an agent collects experiences by acting greedily with respect to Q(s, a; θ_Q) and updates its parameters θ_Q.
1611.01211#6
1611.01211#8
1611.01211
[ "1802.04412" ]
1611.01211#8
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Updates proceed as follows. For a given experience (s_t, a_t, r_t, s_{t+1}), we minimize the squared Bellman error:

L = (Q(s_t, a_t; θ_Q) − y_t)²   (1)

for y_t = r_t + γ · max_{a'} Q(s_{t+1}, a'; θ_Q). Traditionally, the parameterised Q(s, a; θ) is trained by stochastic approximation, estimating the loss on each experience as it is encountered, yielding the update:

θ_{t+1} ← θ_t + α (y_t − Q(s_t, a_t; θ_t)) ∇Q(s_t, a_t; θ_t).   (2)
1611.01211#7
1611.01211#9
1611.01211
[ "1802.04412" ]
1611.01211#9
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Q-learning methods also require an exploration strategy for action selection. For simplicity, we consider only the ϵ-greedy heuristic. A few tricks help to stabilize Q-learning with function approximation. Notably, with experience replay [18], the RL agent maintains a buffer of experiences and samples mini-batches of experience from it to update the Q-function. We propose a new formulation: Suppose there exists a subset C ⊂ S of known catastrophe states. And assume that for a given environment, the optimal policy rarely enters states from which catastrophe states are reachable in a short number of steps. We define the distance d(s_i, s_j) to be the length N of the smallest sequence of transitions {(s_τ, a_τ, r_τ, s_{τ+1})}_{τ=1}^N that traverses state space from s_i to s_j.¹

Definition 2.1. Suppose a priori knowledge that acting according to the optimal policy π*, an agent rarely encounters states s ∈ S that lie within distance d(s, c) < k_r for any catastrophe state c ∈ C. Then each state s for which ∃c ∈ C s.t. d(s, c) < k_r is a danger state.

In Algorithm 1, the agent maintains both a DQN and a separate, supervised fear model F : S → [0, 1]. F provides an auxiliary source of reward, penalizing the Q-learner for entering likely danger states. In our case, we use a neural network of the same architecture as the DQN (but for the output layer). While one could share weights between the two networks, such tricks are not relevant to this paper's
1611.01211#8
1611.01211#10
1611.01211
[ "1802.04412" ]
1611.01211#10
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
contribution. We train the fear model to predict the probability that any state will lead to catastrophe within k_r moves. Over the course of training, our agent adds each experience (s, a, r, s′) to its experience replay buffer. Whenever a catastrophe is reached at, say, the n-th turn of an episode, we add the preceding k_r (fear radius) states to a danger buffer. We add the first n − k_r states of that episode to a safe buffer. When n < k_r, all states for that episode are added to the list of danger states. Then after each turn, in addition to updating the Q-network, we update the fear model, sampling 50% of states from the danger buffer, assigning them label 1, and the remaining 50% from the safe buffer, assigning them label 0.

¹In the stochastic dynamics setting, the distance is the minimum mean passing time between the states.
1611.01211#9
1611.01211#11
1611.01211
[ "1802.04412" ]
1611.01211#11
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Algorithm 1 Training DQN with Intrinsic Fear
1: Input: Q (DQN), F (fear model), fear factor λ, fear phase-in length k_λ, fear radius k_r
2: Output: Learned parameters θ_Q and θ_F
3: Initialize parameters θ_Q and θ_F randomly
4: Initialize replay buffer D, danger state buffer D_D, and safe state buffer D_S
5: Start per-episode turn counter n_e
6: for t in 1:T do
7:   With probability ϵ select random action a_t
8:   Otherwise, select a greedy action a_t = arg max_a Q(s_t, a; θ_Q)
9:   Execute action a_t in environment, observing reward r_t and successor state s_{t+1}
10:  Store transition (s_t, a_t, r_t, s_{t+1}) in D
11:  if s_{t+1} is a catastrophe state then
12:    Add states s_{t−k_r} through s_t to D_D
13:  else
14:    Add states s_{t−n_e} through s_{t−k_r−1} to D_S
15:  Sample a random mini-batch of transitions (s_τ, a_τ, r_τ, s_{τ+1}) from D
16:  λ_τ ← min(λ, λ·t / k_λ)
17:  y_τ ← r_τ − λ_τ for terminal s_{τ+1};
1611.01211#10
1611.01211#12
1611.01211
[ "1802.04412" ]
1611.01211#12
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
y_τ ← r_τ + max_{a'} Q(s_{τ+1}, a'; θ_Q) − λ_τ · F(s_{τ+1}; θ_F) for non-terminal s_{τ+1}
18:  θ_Q ← θ_Q − η · ∇_{θ_Q} (y_τ − Q(s_τ, a_τ; θ_Q))²
19:  Sample random mini-batch s_j with 50% of examples from D_D and 50% from D_S
20:  y_j ← 1 for s_j ∈ D_D; y_j ← 0 for s_j ∈ D_S
21:  θ_F ← θ_F − η · ∇_{θ_F} loss_F(y_j, F(s_j; θ_F))

For each update to the DQN, we perturb the TD target y_t. Instead of updating Q(s_t, a_t; θ_Q) towards r_t + max_{a'} Q(s_{t+1}, a'; θ_Q), we modify the target by subtracting the intrinsic fear:

y_IF = r_t + max_{a'} Q(s_{t+1}, a'; θ_Q) − λ · F(s_{t+1}; θ_F)   (3)

where F(s; θ_F) is the fear model and λ is a fear factor determining the scale of the impact of intrinsic fear on the Q-function update.

# 3 Analysis

Note that IF perturbs the objective function. Thus, one might be concerned that the perturbed reward might lead to a sub-optimal policy. Fortunately, as we will show formally, if the labeled catastrophe states and danger zone do not violate our assumptions, and if the fear model reaches arbitrarily high accuracy, then this will not happen. For an MDP, M = (S, A, T, R, γ), with 0 ≤ γ ≤ 1, the average reward return is as follows:
1611.01211#11
1611.01211#13
1611.01211
[ "1802.04412" ]
1611.01211#13
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
η(π) = lim_{T→∞} (1/T) E_M [ Σ_{t=1}^T r_t | π ]  if γ = 1,
η(π) = (1 − γ) E_M [ Σ_t γ^t r_t | π ]  if 0 ≤ γ < 1.

The optimal policy π* of the model M is the policy which maximizes the average reward return, π* = arg max_{π∈P} η(π), where P is a set of stationary policies.

Theorem 1. For a given MDP, M, with γ ∈ [0, 1] and a catastrophe detector f, let π* denote any optimal policy of M, and π̃ denote an optimal policy of M equipped with fear model F and λ, i.e. the environment (M, F). If the probability that π* visits the states in the danger zone is at most ϵ, and 0 ≤ R(s, a) ≤ 1, then

η_M(π*) ≥ η_M(π̃) ≥ η_{M,F}(π̃) ≥ η_M(π*) − λϵ.   (4)

In other words, π̃ is λϵ-optimal in the original MDP.

Proof. The policy π* visits the fear zone with probability at most ϵ. Therefore, applying π* on the environment with intrinsic fear (M, F) provides an expected return of at least η_M(π*) − ϵλ. Since there exists a policy with this expected return on (M, F), the optimal policy of (M, F) must result in an expected return of at least η_M(π*) − ϵλ on (M, F), i.e. η_{M,F}(π̃) ≥ η_M(π*) − ϵλ. The expected return η_{M,F}(π̃) decomposes into two parts: (i) the expected return from the original environment M, η_M(π̃), (ii) the expected return from the fear model. If π̃ visits the fear zone with probability at most ϵ̃, then η_{M,F}(π̃)
1611.01211#12
1611.01211#14
1611.01211
[ "1802.04412" ]
1611.01211#14
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
≥ η_M(π̃) − λϵ̃. Therefore, applying π̃ on M promises an expected return of at least η_M(π*) − ϵλ + ϵ̃λ, lower bounded by η_M(π*) − ϵλ. It is worth noting that the theorem holds for any optimal policy of M. If one of them does not visit the fear zone at all (i.e., ϵ = 0), then η_M(π*)
1611.01211#13
1611.01211#15
1611.01211
[ "1802.04412" ]
1611.01211#15
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
= η_{M,F}(π̃) and the fear signal can boost up the process of learning the optimal policy. Since we empirically learn the fear model F using collected data of some finite sample size N, our RL agent has access to an imperfect fear model F̂, and therefore computes the optimal policy based on F̂. In this case, the RL agent trains with intrinsic fear generated by F̂, learning a different value function than the RL agent with perfect F. To show the robustness against errors in F̂, we are interested in the average deviation in the value functions of the two agents. Our second main theoretical result, given in Theorem 2, allows the RL agent to use a smaller discount factor, denoted γ_plan, than the actual one (γ_plan ≤ γ), to reduce the planning horizon and computation cost. Moreover, when an estimated model of the environment is used, Jiang et al. [2015] show that using a smaller discount factor for planning may prevent over-fitting to the estimated model. Our result demonstrates that using a smaller discount factor for planning can reduce the reduction in expected return when an estimated fear model is used.
1611.01211#14
1611.01211#16
1611.01211
[ "1802.04412" ]
1611.01211#16
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Specifically, for a given environment with fear model F1 and discount factor γ1, let V^{π*_{F2,γ2}}_{F1,γ1}(s), s ∈ S, denote the state value function under the optimal policy of an environment with fear model F2 and discount factor γ2. In the same environment, let ω_π(s) denote the visitation distribution over states under policy π. We are interested in the average reduction on expected return caused by an imperfect classifier; this reduction, denoted L(F, F̂, γ, γ_plan), is defined as

L(F, F̂, γ, γ_plan) := (1 − γ) ∫_{s∈S} ω_{π*}(s) ( V^{π*_{F,γ}}_{F,γ}(s) − V^{π*_{F̂,γ_plan}}_{F,γ}(s) ) ds.

Theorem 2. Suppose γ_plan ≤ γ, and δ ∈ (0, 1). Let F̂ be the fear model in F with minimum empirical risk on N samples. For a given MDP model, the average reduction on expected return, L(F, F̂, γ, γ_plan), vanishes as N increases: with probability at least 1 −
1611.01211#15
1611.01211#17
1611.01211
[ "1802.04412" ]
1611.01211#17
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
δ,

L = O( λ (1 − γ)/(1 − γ_plan) · (VC(F) + log(1/δ))/N + (γ − γ_plan)/(1 − γ_plan) ),   (5)

where VC(F) is the VC dimension of the hypothesis class F.

Proof. In order to analyze V^{π*_{F,γ}}_{F,γ}(s) − V^{π*_{F̂,γ_plan}}_{F,γ}(s), which is always non-negative, we decompose it as follows:

( V^{π*_{F,γ}}_{F,γ}(s) − V^{π*_{F,γ}}_{F,γ_plan}(s) ) + ( V^{π*_{F,γ}}_{F,γ_plan}(s) − V^{π*_{F̂,γ_plan}}_{F,γ}(s) ).   (6)

The first term is the difference in the expected returns of π*_{F,γ}
1611.01211#16
1611.01211#18
1611.01211
[ "1802.04412" ]
1611.01211#18
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
â from s: F,γ under two different discount factors, starting ELD! = Yptan)Â¥els0 = sy Fl . (7) t=0 γ â γpl an (1â γpl an )(1â γ ) . Since rt â ¤ 1, â t, using the geometric series, Eq. 7 is upper bounded by 1 = # 1 1â γpl an 1â γ â 1 Y-Yplan ce Since r; < 1, Vt, using the geometric series, Eq. 7 is upper bounded by ry - is an optimal policy of an PF, . The second term is upper bounded by V, F, Pte (s) â V,. yee (s) since 7, Yol , sYÂ¥plan environment equipped with (F, Yplan)- Furthermore, as Ypian S y andr; > 0, we have Ve. vee (s) = F.Yplan F.Ypla the deviation of the value function under two different close policies. Since F and F are close, we expect that this deviation to be small. With one more decomposition step a a V,, plan (s). Therefore, the second term of Eq. 6 is upper bounded by Vay ete (s )-V, â "nn ), which is sÂ¥plan + Yplan Poi _ plan F.Yptan Vea lee OW) = [Vet (= Vp ra") F.Ypian +(v. Prptan (gy v, Pptan(s)) 4 Yplan F.Yplan Yplan F.Yplan s[efinera- rf)
1611.01211#17
1611.01211#19
1611.01211
[ "1802.04412" ]
1611.01211#19
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
. Since the middle term in this equation is non-positive, we can ignore it for the purpose of upper-bounding the left-hand side. The upper bound is the sum of the remaining two terms, which is also upper bounded by 2 times the maximum of them:

2 max_{π ∈ {π*_{F,γ_plan}, π*_{F̂,γ_plan}}} { V^π_{F,γ_plan}(s) − V^π_{F̂,γ_plan}(s) },

which is the deviation in values of different domains. The value functions satisfy the Bellman equation for any π:

V^π_{F,γ_plan}(s) = R(s, π(s)) + λF(s) + γ_plan ∫_{s'∈S} T(s'|s, π(s)) V^π_{F,γ_plan}(s') ds',
V^π_{F̂,γ_plan}(s) = R(s, π(s)) + λF̂(s) + γ_plan ∫_{s'∈S} T(s'|s, π(s)) V^π_{F̂,γ_plan}(s') ds',   (8)
1611.01211#18
1611.01211#20
1611.01211
[ "1802.04412" ]
1611.01211#20
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
which can be solved using iterative updates of dynamic programming. Let V^π_i(s) and V̂^π_i(s) respectively denote the i-th iteration of the dynamic programs corresponding to the first and second equalities in Eq. 8. Therefore, for any state,

V^π_i(s) − V̂^π_i(s) = λF(s) − λF̂(s) + γ_plan ∫ T(s'|s, π(s)) ( V^π_{i−1}(s') − V̂^π_{i−1}(s') ) ds' = Σ_{j=0}^{i−1} γ_plan^j (T^π)^j ( λ(F − F̂) )(s),   (10)

where (T^π)^j is a kernel and denotes the transition operator applied j times to itself. The classification error |F(s) − F̂(s)| is the zero-one loss of the binary classifier; therefore, its expectation ∫_{s∈S} ω^{π*_{F,γ_plan}}(s) |F(s) − F̂(s)| ds is bounded by 3200 (VC(F) + log(1/δ))/N with probability at least 1 − δ [32, 12]. As long as the operator (T^π)^j is a linear operator,

∫_{s∈S} ω^{π*_{F,γ_plan}}(s) | V^π(s) − V̂^π(s) | ds ≤ λ/(1 − γ_plan) · 3200 (VC(F) + log(1/δ))/N.   (11)
1611.01211#19
1611.01211#21
1611.01211
[ "1802.04412" ]
1611.01211#21
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Therefore, L(F, F̂, γ, γ_plan) is bounded by (1 − γ) times the sum of Eq. 11 and (γ − γ_plan)/((1 − γ_plan)(1 − γ)), with probability at least 1 − δ.

Theorem 2 holds for both finite and continuous state-action MDPs. Over the course of our experiments, we discovered the following pattern: Intrinsic fear models are more effective when the fear radius k_r is large enough that the model can experience danger states at a safe distance and correct the policy, without experiencing many catastrophes. When the fear radius is too small, the danger probability is only nonzero at states from which catastrophes are inevitable anyway and intrinsic fear seems not to help. We also found that wider fear factors train more stably when phased in over the course of many episodes. So, in all of our experiments we gradually phase in the fear factor from 0 to λ, reaching full strength at a predetermined time step k_λ.
1611.01211#20
1611.01211#22
1611.01211
[ "1802.04412" ]
1611.01211#22
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# 4 Environments

We demonstrate our algorithms on the following environments: (i) Adventure Seeker, a toy pathological environment that we designed to demonstrate catastrophic forgetting; (ii) Cartpole, a classic RL environment; and (iii) the Atari games Seaquest, Asteroids, and Freeway [3].

Adventure Seeker We imagine a player placed on a hill, sloping upward to the right (Figure 1(a)). At each turn, the player can move to the right (up the hill) or left (down the hill). The environment adjusts the player's position accordingly, adding some random noise. Between the left and right edges of the hill, the player gets more reward for spending time higher on the hill. But if the player goes too far to the right, she will fall off, terminating the episode (catastrophe).
1611.01211#21
1611.01211#23
1611.01211
[ "1802.04412" ]
1611.01211#23
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Formally, the state is a single continuous variable s ∈ [0, 1.0], denoting the player's position. The starting position for each episode is chosen uniformly at random in the interval [.25, .75]. The available actions consist only of {−1, +1} (left and right). Given an action a_t in state s_t, the successor state is produced according to T(s_{t+1}|s_t, a_t): s_{t+1} ← s_t + .01·a_t + η where η ∼ N(0, .01²). The reward at each turn is s_t (proportional to height). The player falls off the hill, entering the catastrophic terminating state, whenever s_{t+1} > 1.0 or s_{t+1} < 0.0. This game should be easy to solve. There exists a threshold above which the agent should always choose to go left and below which it should always go right. And yet a DQN agent will periodically die. Initially, the DQN quickly learns a good policy and avoids the catastrophe, but over the course of continued training, the agent, owing to the shape of the reward function, collapses to a policy which always moves right, regardless of the state. We might critically ask in what real-world scenario we could depend upon a system that cannot solve Adventure Seeker.
1611.01211#22
1611.01211#24
1611.01211
[ "1802.04412" ]
1611.01211#24
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Formally, at each time, the agent observes a four-dimensional state vector (x, v, θ, Ï ) consisting respectively of the cart position, cart velocity, pole angle, and the poleâ s angular velocity. At each time step, the agent chooses an action, applying a force of either â 1 or +1. For every time step that the pole remains upright and the cart remains on the screen, the agent receives a reward of 1. If the pole falls, the episode terminates, giving a return of 0 from the penultimate state. In experiments, we use the implementation CartPole-v0 contained in the openAI gym [6]. Like Adventure Seeker, this problem admits an analytic solution. A perfect policy should never drop the pole. But, as with Adventure Seeker, a DQN converges to a constant rate of catastrophes per turn.
1611.01211#23
1611.01211#25
1611.01211
[ "1802.04412" ]
1611.01211#25
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Atari games In addition to these pathological cases, we address Freeway, Asteroids, and Seaquest, games from the Atari Learning Environment. In Freeway, the agent controls a chicken with a goal of crossing the road while dodging traffic. The chicken loses a life and starts from the original location if hit by a car. Points are only rewarded for successfully crossing the road. In Asteroids, the agent pilots a ship and gains points from shooting the asteroids. She must avoid colliding with asteroids which cost it lives. In Seaquest, a player swims under water. Periodically, as the oxygen gets low, she must rise to the surface for oxygen. Additionally, fishes swim across the screen. The player gains points each time she shoots a fish.
1611.01211#24
1611.01211#26
1611.01211
[ "1802.04412" ]
1611.01211#26
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Colliding 8 (a) Adventure Seeker # (b) Cart-Pole (c) Seaquest (d) Asteroids (e) Freeway Figure 1: In experiments, we consider two toy environments (a,b) and the Atari games Seaquest (c), Asteroids (d), and Freeway (e) with a fish or running out of oxygen result in death. In all three games, the agent has 3 lives, and the final death is a terminal state. We label each loss of a life as a catastrophe state.
1611.01211#25
1611.01211#27
1611.01211
[ "1802.04412" ]
1611.01211#27
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# 5 Experiments

First, on the toy examples, we evaluate standard DQNs and intrinsic fear DQNs using multilayer perceptrons (MLPs) with a single hidden layer and 128 hidden nodes. We train all MLPs by stochastic gradient descent using the Adam optimizer [16]. In Adventure Seeker, an agent can escape from danger with only a few time steps of notice, so we set the fear radius k_r to 5. We phase in the fear factor quickly, reaching full strength in just 1000 steps. On this
1611.01211#26
1611.01211#28
1611.01211
[ "1802.04412" ]
1611.01211#28
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
On this 9 (a) Seaquest (b) Asteroids (c) Freeway (d) Seaquest (e) Asteroids (f) Freeway Figure 2: Catastrophes (first row) and reward/episode (second row) for DQNs and Intrinsic Fear. On Adventure Seeker, all Intrinsic Fear models cease to â dieâ within 14 runs, giving unbounded (unplottable) reward thereafter. On Seaquest, the IF model achieves a similar catastrophe rate but significantly higher total reward. On Asteroids, the IF model outperforms DQN. For Freeway, a randomly exploring DQN (under our time limit) never gets reward but IF model learns successfully. problem we set the fear factor λ to 40. For Cart-Pole, we set a wider fear radius of kr = 20. We initially tried training this model with a short fear radius but made the following observation: One some runs, IF-DQN would surviving for millions of experiences, while on other runs, it might experience many catastrophes. Manually examining fear model output on successful vs unsuccessful runs, we noticed that on the bad runs, the fear model outputs non-zero probability of danger for precisely the 5 moves before a catastrophe.
1611.01211#27
1611.01211#29
1611.01211
[ "1802.04412" ]
1611.01211#29
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
In Cart-Pole, by that time, it is too late to correct course. On the more successful runs, the fear model often outputs predictions in the range .1 to .5. We suspect that the gradation between mildly dangerous states and those with certain danger provides a richer reward signal to the DQN. On both the Adventure Seeker and Cart-Pole environments, DQNs augmented by intrinsic fear far outperform their otherwise identical counterparts. We also compared IF to some traditional approaches for mitigating catastrophic forgetting. For example, we tried a memory-based method in which we preferentially sample the catastrophic states for updating the model, but they did not improve over the DQN. It seems that the notion of a danger zone is necessary here. For Seaquest, Asteroids, and Freeway, we use a fear radius of 5 and a fear factor of .5. For all Atari games, the IF models outperform their DQN counterparts. Interestingly, while for all games the IF models achieve higher reward, on Seaquest IF-DQNs have similar catastrophe rates (Figure 2). Perhaps the IF-DQN enters a region of policy space with strong incentives to exchange catastrophes for higher reward. This result suggests an interplay between the various reward signals that warrants further exploration. For Asteroids and Freeway, the improvements are more dramatic. Over just a few thousand episodes of Freeway, a randomly exploring DQN achieves zero reward. However, the reward shaping of intrinsic fear leads to rapid improvement.
1611.01211#28
1611.01211#30
1611.01211
[ "1802.04412" ]
1611.01211#30
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
For Asteroids and Freeway, the improvements are more dramatic. Over just a few thousand episodes of Freeway, a randomly exploring DQN achieves zero reward. However, the reward shaping of intrinsic fear leads to rapid improvement. 10 # 6 Related work The paper studies safety in RL, intrinsically motivated RL, and the stability of Q-learning with function approximation under distributional shift. Our work also has some connection to reward shaping. We attempt to highlight the most relevant papers here. Several papers address safety in RL. Garcıa and Fernández [2015] provide a thorough review on the topic, identifying two main classes of methods: those that perturb the objective function and those that use external knowledge to improve the safety of exploration. While a typical reinforcement learner optimizes expected return, some papers suggest that a safely acting agent should also minimize risk. Hans et al. [2008] defines a fatality as any return below some threshold Ï
1611.01211#29
1611.01211#31
1611.01211
[ "1802.04412" ]
1611.01211#31
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
. They propose a solution comprised of a safety function, which identifies unsafe states, and a backup model, which navigates away from those states. Their work, which only addresses the tabular setting, suggests that an agent should minimize the probability of fatality instead of maximizing the expected return. Heger [1994] suggests an alternative Q-learning objective concerned with the minimum (vs. expected) return. Other papers suggest modifying the objective to penalize policies with high-variance returns [10, 8]. Maximizing expected returns while minimizing their variance is a classic problem in finance, where a common objective is the ratio of expected return to its standard deviation [28]. Moreover, Azizzadenesheli et al. [2018] suggests to learn the variance over the returns in order to make safe decisions at each decision step. Moldovan and Abbeel [2012] give a definition of safety based on ergodicity. They consider a fatality to be a state from which one cannot return to the start state. Shalev-Shwartz et al. [2016] theoretically analyzes how strong a penalty should be to discourage accidents. They also consider hard constraints to ensure safety. None of the above works address the case where distributional shift dooms an agent to perpetually revisit known catastrophic failure modes. Other papers incorporate external knowledge into the exploration process. Typically, this requires access to an oracle or extensive prior knowledge of the environment. In the extreme case, some papers suggest confining the policy search to a known subset of safe policies. For reasonably complex environments or classes of policies, this seems infeasible. The potential oscillatory or divergent behavior of Q-learners with function approximation has been previously identified [5, 2, 11]. Outside of RL, the problem of covariate shift has been extensively studied [30]. Murata and Ozawa [2005] addresses the problem of catastrophic forgetting owing to distributional shift in RL with function approximation, proposing a memory-based solution. Many papers address intrinsic rewards, which are internally assigned, vs the standard (extrinsic) reward. Typically, intrinsic rewards are used to encourage exploration [26, 4] and to acquire a modular set of skills [7]. Some papers refer to the intrinsic reward for discovery as curiosity. Like classic work on intrinsic motivation, our methods perturb the reward function.
1611.01211#30
1611.01211#32
1611.01211
[ "1802.04412" ]
1611.01211#32
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
But instead of assigning bonuses to encourage discovery of novel transitions, we assign penalties to discourage catastrophic transitions.

Key differences In this paper, we undertake a novel treatment of safe reinforcement learning. While the literature offers several notions of safety in reinforcement learning, we see the following problem: existing safety research that perturbs the reward function requires little foreknowledge, but fundamentally changes the objective globally. On the other hand, processes relying on expert knowledge may presume an unreasonable level of foreknowledge. Moreover, little of the prior work on safe reinforcement learning, to the best of our knowledge, specifically addresses the problem of catastrophic forgetting. This paper proposes a new class of algorithms for avoiding catastrophic states and a theoretical analysis supporting its robustness.
1611.01211#31
1611.01211#33
1611.01211
[ "1802.04412" ]
1611.01211#33
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# 7 Conclusions

Our experiments demonstrate that DQNs are susceptible to periodically repeating mistakes, however bad, raising questions about their real-world utility when harm can come of actions. While it is easy to visualize these problems on toy examples, similar dynamics are embedded in more complex domains. Consider a domestic robot acting as a barber. The robot might receive positive feedback for giving a closer shave. This reward encourages closer contact at a steeper angle. Of course, the shape of this reward function belies the catastrophe lurking just past the optimal shave. Similar dynamics might be imagined in a vehicle that is rewarded for traveling faster but could risk an accident with excessive speed. Our results with the intrinsic fear model suggest that with only a small amount of prior knowledge (the ability to recognize catastrophe states after the fact), we can simultaneously accelerate learning and avoid catastrophic states. This work is a step towards combating DRL's tendency to revisit catastrophic states due to catastrophic forgetting.
1611.01211#32
1611.01211#34
1611.01211
[ "1802.04412" ]
1611.01211#34
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# References [1] Kamyar Azizzadenesheli, Emma Brunskill, and Animashree Anandkumar. Efficient exploration through bayesian deep q-networks. arXiv preprint arXiv:1802.04412, 2018. [2] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. 1995. [3] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment:
1611.01211#33
1611.01211#35
1611.01211
[ "1802.04412" ]
1611.01211#35
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 2013. [4] Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In NIPS, 2016. [5] Justin Boyan and Andrew W Moore. Generalization in reinforcement learning: Safely approximating the value function. In NIPS, 1995.
1611.01211#34
1611.01211#36
1611.01211
[ "1802.04412" ]
1611.01211#36
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI gym, 2016. arxiv.org/abs/1606.01540. [7] Nuttapong Chentanez, Andrew G Barto, and Satinder P Singh. Intrinsically motivated reinforcement learning. In NIPS, 2004. [8] Yinlam Chow, Aviv Tamar, Shie Mannor, and Marco Pavone. Risk-sensitive and robust decision-making: A CVaR optimization approach. In NIPS, 2015. [9] Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. Policy networks with two-stage training for dialogue systems. In SIGDIAL, 2016. [10] Javier García and Fernando Fernández.
1611.01211#35
1611.01211#37
1611.01211
[ "1802.04412" ]
1611.01211#37
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
A comprehensive survey on safe reinforcement learning. JMLR, 2015. [11] Geoffrey J Gordon. Chattering in SARSA(λ). Technical report, CMU, 1996. [12] Steve Hanneke. The optimal sample complexity of PAC learning. JMLR, 2016. [13] Alexander Hans, Daniel Schneegaß, Anton Maximilian Schäfer, and Steffen Udluft. Safe exploration for reinforcement learning. In ESANN, 2008.
1611.01211#36
1611.01211#38
1611.01211
[ "1802.04412" ]
1611.01211#38
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
[14] Matthias Heger. Consideration of risk in reinforcement learning. In Machine Learning, 1994. [15] Nan Jiang, Alex Kulesza, Satinder Singh, and Richard Lewis. The dependence of effective planning horizon on model accuracy. In International Conference on Autonomous Agents and Multiagent Systems, 2015. [16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. [17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. JMLR, 2016. [18] Long-Ji Lin.
1611.01211#37
1611.01211#39
1611.01211
[ "1802.04412" ]
1611.01211#39
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 1992. [19] Zachary C Lipton, Jianfeng Gao, Lihong Li, Xiujun Li, Faisal Ahmed, and Li Deng. Efficient exploration for dialogue policy learning with bbq networks & replay buffer spiking. In AAAI, 2018. [20] James L McClelland, Bruce L McNaughton, and Randall C O'Reilly.
1611.01211#38
1611.01211#40
1611.01211
[ "1802.04412" ]
1611.01211#40
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 1995. [21] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of learning and motivation, 1989. [22] Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 2015. [23] Teodor Mihai Moldovan and Pieter Abbeel. Safe exploration in Markov decision processes. In ICML, 2012.
1611.01211#39
1611.01211#41
1611.01211
[ "1802.04412" ]
1611.01211#41
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
[24] Makoto Murata and Seiichi Ozawa. A memory-based reinforcement learning model utilizing macro-actions. In Adaptive and Natural Computing Algorithms. 2005. [25] Will Knight. The AI that cut Google's energy bill could soon help you. MIT Tech Review, 2016. [26] Jurgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In From animals to animats: SAB90, 1991.
1611.01211#40
1611.01211#42
1611.01211
[ "1802.04412" ]
1611.01211#42
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
[27] Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. 2016. [28] William F Sharpe. Mutual fund performance. The Journal of Business, 1966. [29] David Silver et al. Mastering the game of go with deep neural networks and tree search. Nature, 2016. [30] Masashi Sugiyama and Motoaki Kawanabe.
1611.01211#41
1611.01211#43
1611.01211
[ "1802.04412" ]
1611.01211#43
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
Machine learning in non-stationary environments: Introduction to covariate shift adaptation. MIT Press, 2012. [31] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 1988. [32] Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media, 2013. [33] Christopher J.C.H. Watkins and Peter Dayan. Q-learning. Machine Learning, 1992.
1611.01211#42
1611.01211#44
1611.01211
[ "1802.04412" ]
1611.01211#44
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
# An extension to Theorem 2

In practice, we gradually learn and improve F̂, where the difference between the learned F̂ after two consecutive updates, F̂_τ and F̂_{τ+1}, and consequently the difference between ω^{π*_{F̂_τ,γ_plan}} and ω^{π*_{F̂_{τ+1},γ_plan}}, decreases. While F̂_{τ+1} is learned using the samples drawn from ω^{π*_{F̂_τ,γ_plan}}, with high probability

∫_{s∈S} ω^{π*_{F̂_τ,γ_plan}}(s) |F(s) − F̂_{τ+1}(s)| ds ≤ 3200 (VC(F) + log(1/δ))/N.

But in the final bound in Theorem 2, we are interested in ∫_{s∈S} ω^{π*_{F̂_{τ+1},γ_plan}}(s) |F(s) − F̂_{τ+1}(s)| ds. Via decomposing it into two terms,

∫_{s∈S} ω^{π*_{F̂_τ,γ_plan}}(s) |F(s) − F̂_{τ+1}(s)| ds + ∫_{s∈S} |ω^{π*_{F̂_{τ+1},γ_plan}}(s) − ω^{π*_{F̂_τ,γ_plan}}(s)| ds,

an extra term of λ/(1 − γ_plan) ∫_{s∈S} |ω^{π*_{F̂_{τ+1},γ_plan}}(s) − ω^{π*_{F̂_τ,γ_plan}}(s)| ds appears in the final bound of Theorem 2.

Regarding the choice of γ_plan: if λ (VC(F) + log(1/δ))/N is less than one, then the best choice of γ_plan is γ. Otherwise, if (VC(F) + log(1/δ))/N is equal to the exact error in the model estimation and is greater than 1, then the best γ_plan is 0. Since (VC(F) + log(1/δ))/N is an upper bound, not an exact error, on the model estimation, the choice of zero for γ_plan is not recommended, and a choice of γ_plan ≤ γ is preferred.
1611.01211#43
1611.01211#45
1611.01211
[ "1802.04412" ]
1611.01211#45
Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear
14
1611.01211#44
1611.01211
[ "1802.04412" ]
1611.01144#0
Categorical Reparameterization with Gumbel-Softmax
Published as a conference paper at ICLR 2017

# CATEGORICAL REPARAMETERIZATION WITH GUMBEL-SOFTMAX

Eric Jang, Google Brain, [email protected]
Shixiang Gu∗, University of Cambridge, MPI Tübingen, [email protected]
Ben Poole∗, Stanford University, [email protected]

# ABSTRACT
1611.01144#1
1611.01144
[ "1602.06725" ]
1611.01144#1
Categorical Reparameterization with Gumbel-Softmax
Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.
1611.01144#0
1611.01144#2
1611.01144
[ "1602.06725" ]
1611.01144#2
Categorical Reparameterization with Gumbel-Softmax
# 1 INTRODUCTION Stochastic neural networks with discrete random variables are a powerful technique for representing distributions encountered in unsupervised learning, language modeling, attention mechanisms, and reinforcement learning domains. For example, discrete variables have been used to learn probabilistic latent representations that correspond to distinct semantic classes (Kingma et al., 2014), image regions (Xu et al., 2015), and memory locations (Graves et al., 2014; Graves et al., 2016). Discrete representations are often more interpretable (Chen et al., 2016) and more computationally efficient (Rae et al., 2016) than their continuous analogues. However, stochastic networks with discrete variables are difficult to train because the backpropagation algorithm, while permitting efficient computation of parameter gradients, cannot be applied to non-differentiable layers. Prior work on stochastic gradient estimation has traditionally focused on either score function estimators augmented with Monte Carlo variance reduction techniques (Paisley et al., 2012; Mnih & Gregor, 2014; Gu et al., 2016; Gregor et al., 2013), or biased path derivative estimators for Bernoulli variables (Bengio et al., 2013). However, no existing gradient estimator has been formulated specifically for categorical variables.
1611.01144#1
1611.01144#3
1611.01144
[ "1602.06725" ]
1611.01144#3
Categorical Reparameterization with Gumbel-Softmax
The contributions of this work are threefold:

1. We introduce Gumbel-Softmax, a continuous distribution on the simplex that can approximate categorical samples, and whose parameter gradients can be easily computed via the reparameterization trick.
2. We show experimentally that Gumbel-Softmax outperforms all single-sample gradient estimators on both Bernoulli variables and categorical variables.
3. We show that this estimator can be used to efficiently train semi-supervised models (e.g. Kingma et al. (2014)) without costly marginalization over unobserved categorical latent variables.

The practical outcome of this paper is a simple, differentiable approximate sampling mechanism for categorical variables that can be integrated into neural networks and trained using standard backpropagation.
1611.01144#2
1611.01144#4
1611.01144
[ "1602.06725" ]
1611.01144#4
Categorical Reparameterization with Gumbel-Softmax
∗Work done during an internship at Google Brain. # 2 THE GUMBEL-SOFTMAX DISTRIBUTION We begin by defining the Gumbel-Softmax distribution, a continuous distribution over the simplex that can approximate samples from a categorical distribution. Let z be a categorical variable with class probabilities π_1, π_2, ..., π_k. For the remainder of this paper we assume categorical samples are encoded as k-dimensional one-hot vectors lying on the corners of the (k − 1)-dimensional simplex, Δ^{k−1}. This allows us to define quantities such as the element-wise mean E_p[z] = [π_1, ..., π_k] of these vectors. The Gumbel-Max trick (Gumbel, 1954; Maddison et al., 2014) provides a simple and efficient way to draw samples z from a categorical distribution with class probabilities π: z = one_hot(arg max_i [g_i + log π_i]) (1) where g_1, ..., g_k are i.i.d. samples drawn from Gumbel(0, 1). We use the softmax function as a continuous, differentiable approximation to arg max, and generate k-dimensional sample vectors y ∈ Δ^{k−1}, where
1611.01144#3
1611.01144#5
1611.01144
[ "1602.06725" ]
1611.01144#5
Categorical Reparameterization with Gumbel-Softmax
y_i = exp((log(π_i) + g_i)/τ) / Σ_{j=1}^{k} exp((log(π_j) + g_j)/τ), for i = 1, ..., k. (2)

The density of the Gumbel-Softmax distribution (derived in Appendix B) is:

p_{π,τ}(y_1, ..., y_k) = Γ(k) τ^{k−1} (Σ_{i=1}^{k} π_i / y_i^{τ})^{−k} Π_{i=1}^{k} (π_i / y_i^{τ+1}) (3)

This distribution was independently discovered by Maddison et al. (2016), where it is referred to as the concrete distribution. As the softmax temperature τ approaches 0, samples from the Gumbel-Softmax distribution become one-hot and the Gumbel-Softmax distribution becomes identical to the categorical distribution p(z).

Figure 1: The Gumbel-Softmax distribution interpolates between discrete one-hot-encoded categorical distributions and continuous categorical densities. (a) For low temperatures (τ = 0.1, τ = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a categorical random variable with the same logits. As the temperature increases (τ = 1.0, τ = 10.0), the expected value converges to a uniform distribution over the categories. (b) Samples from Gumbel-Softmax distributions are identical to samples from a categorical distribution as τ → 0.
1611.01144#4
1611.01144#6
1611.01144
[ "1602.06725" ]
1611.01144#6
Categorical Reparameterization with Gumbel-Softmax
At higher temperatures, Gumbel-Softmax samples are no longer one-hot, and become uniform as τ → ∞. 2.1 GUMBEL-SOFTMAX ESTIMATOR The Gumbel-Softmax distribution is smooth for τ > 0, and therefore has a well-defined gradient ∂y/∂π with respect to the parameters π. Thus, by replacing categorical samples with Gumbel-Softmax samples we can use backpropagation to compute gradients (see Section 3.1). [Footnote 1: The Gumbel(0, 1) distribution can be sampled using inverse transform sampling by drawing u ∼ Uniform(0, 1) and computing g = −log(−log(u)).] We denote
1611.01144#5
1611.01144#7
1611.01144
[ "1602.06725" ]
1611.01144#7
Categorical Reparameterization with Gumbel-Softmax
this procedure of replacing non-differentiable categorical samples with a differentiable approximation during training as the Gumbel-Softmax estimator. While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corresponding categorical distribution for non-zero temperature. For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large, and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1). In practice, we start at a high temperature and anneal to a small but non-zero temperature. In our experiments, we find that the softmax temperature τ
1611.01144#6
1611.01144#8
1611.01144
[ "1602.06725" ]
1611.01144#8
Categorical Reparameterization with Gumbel-Softmax
can be annealed according to a variety of schedules and still perform well. If τ is a learned parameter (rather than annealed via a fixed schedule), this scheme can be interpreted as entropy regularization (Szegedy et al., 2015; Pereyra et al., 2016), where the Gumbel-Softmax distribution can adaptively adjust the "confidence" of proposed samples during the training process. 2.2 STRAIGHT-THROUGH GUMBEL-SOFTMAX ESTIMATOR Continuous relaxations of one-hot vectors are suitable for problems such as learning hidden representations and sequence modeling. For scenarios in which we are constrained to sampling discrete values (e.g. from a discrete action space for reinforcement learning, or quantized compression), we discretize y using arg max but use our continuous approximation in the backward pass by approximating ∇θ z ≈ ∇θ y.
1611.01144#7
1611.01144#9
1611.01144
[ "1602.06725" ]
1611.01144#9
Categorical Reparameterization with Gumbel-Softmax
We call this the Straight-Through (ST) Gumbel Estimator, as it is reminiscent of the biased path derivative estimator described in Bengio et al. (2013). ST Gumbel-Softmax allows samples to be sparse even when the temperature τ is high. # 3 RELATED WORK In this section we review existing stochastic gradient estimation techniques for discrete variables (illustrated in Figure 2). Consider a stochastic computation graph (Schulman et al., 2015) with discrete random variable z whose distribution depends on parameter θ, and cost function f(z). The objective is to minimize the expected cost L(θ) = E_{z∼pθ(z)}[f(z)] via gradient descent, which requires us to estimate ∇θ E_{z∼pθ(z)}[f(z)].
1611.01144#8
1611.01144#10
1611.01144
[ "1602.06725" ]
1611.01144#10
Categorical Reparameterization with Gumbel-Softmax
3.1 PATH DERIVATIVE GRADIENT ESTIMATORS For distributions that are reparameterizable, we can compute the sample z as a deterministic function g of the parameters θ and an independent random variable ε, so that z = g(θ, ε). The path-wise gradients from f to θ can then be computed without encountering any stochastic nodes:

∂/∂θ E_{z∼pθ}[f(z)] = ∂/∂θ E_ε[f(g(θ, ε))] = E_{ε∼p_ε}[(∂f/∂g)(∂g/∂θ)] (4)

For example, the normal distribution z ∼ N(µ, σ) can be re-written as µ + σ · N(0, 1), making it trivial to compute ∂z/∂µ and ∂z/∂σ.
1611.01144#9
1611.01144#11
1611.01144
[ "1602.06725" ]
1611.01144#11
Categorical Reparameterization with Gumbel-Softmax
This reparameterization trick is commonly applied to training variational autoencoders with continuous latent variables using backpropagation (Kingma & Welling, 2013; Rezende et al., 2014b). As shown in Figure 2, we exploit such a trick in the construction of the Gumbel-Softmax estimator. Biased path derivative estimators can be utilized even when z is not reparameterizable. In general, we can approximate ∇θ z ≈ ∇θ m(θ), where m is a differentiable proxy for the stochastic sample. For Bernoulli variables with mean parameter θ, the Straight-Through (ST) estimator (Bengio et al., 2013) approximates m = µθ(z), implying ∇θ m = 1. For k = 2 (Bernoulli), ST Gumbel-Softmax is similar to the slope-annealed Straight-Through estimator proposed by Chung et al. (2016), but uses a softmax instead of a hard sigmoid to determine the slope. Rolfe (2016) considers an alternative approach where each binary latent variable parameterizes a continuous mixture model. Reparameterization gradients are obtained by backpropagating through the continuous variables and marginalizing out the binary variables. One limitation of the ST estimator is that backpropagating with respect to the sample-independent mean may cause discrepancies between the forward and backward pass, leading to higher variance.
1611.01144#10
1611.01144#12
1611.01144
[ "1602.06725" ]
1611.01144#12
Categorical Reparameterization with Gumbel-Softmax
Figure 2: Gradient estimation in stochastic computation graphs. (1) ∇θ f(x) can be computed via backpropagation if x(θ) is deterministic and differentiable. (2) The presence of stochastic node z precludes backpropagation as the sampler function does not have a well-defined gradient. (3) The score function estimator and its variants (NVIL, DARN, MuProp, VIMCO) obtain an unbiased estimate of ∇θ f(x) by backpropagating along a surrogate loss f̂ log pθ(z), where f̂ = f(x) − b and b is a baseline for variance reduction. (4) The Straight-Through estimator, developed primarily for Bernoulli variables, approximates ∇θ z ≈ 1. (5) Gumbel-Softmax is a path derivative estimator for a continuous distribution y that approximates z. Reparameterization allows gradients to flow from f(y) to θ. y can be annealed to one-hot categorical variables over the course of training. Gumbel-Softmax avoids this problem because each sample y is a differentiable proxy of the corresponding discrete sample z.
1611.01144#11
1611.01144#13
1611.01144
[ "1602.06725" ]
1611.01144#13
Categorical Reparameterization with Gumbel-Softmax
θpθ(z) = pθ(z)â θ log pθ(z) to derive the follow- ing unbiased estimator: â θEz [f (z)] = Ez [f (z)â θ log pθ(z)] (5) SF only requires that pθ(z) is continuous in θ, and does not require backpropagating through f or the sample z. However, SF suffers from high variance and is consequently slow to converge. In particular, the variance of SF scales linearly with the number of dimensions of the sample vector (Rezende et al., 2014a), making it especially challenging to use for categorical distributions. The variance of a score function estimator can be reduced by subtracting a control variate b(z) from the learning signal f , and adding back its analytical expectation µb = Ez [b(z)â θ log pθ(z)] to keep the estimator unbiased:
1611.01144#12
1611.01144#14
1611.01144
[ "1602.06725" ]
1611.01144#14
Categorical Reparameterization with Gumbel-Softmax
∇θ E_z[f(z)] = E_z[f(z) ∇θ log pθ(z) + (b(z) ∇θ log pθ(z) − b(z) ∇θ log pθ(z))] (6)
= E_z[(f(z) − b(z)) ∇θ log pθ(z)] + µ_b (7)
1611.01144#13
1611.01144#15
1611.01144
[ "1602.06725" ]
1611.01144#15
Categorical Reparameterization with Gumbel-Softmax
We briefly summarize recent stochastic gradient estimators that utilize control variates. We direct the reader to Gu et al. (2016) for further detail on these techniques.

• NVIL (Mnih & Gregor, 2014) uses two baselines: (1) a moving average f̄ of f to center the learning signal, and (2) an input-dependent baseline computed by a 1-layer neural network
1611.01144#14
1611.01144#16
1611.01144
[ "1602.06725" ]
1611.01144#16
Categorical Reparameterization with Gumbel-Softmax
cation, Kingma et al. (2014) propose a variational autoencoder (VAE) whose latent state is the joint distribution over a Gaussian â styleâ variable z and a categorical â semantic classâ variable y (Figure 6, Appendix). The VAE objective trains a discriminative network qÏ (y|x), inference network qÏ (z|x, y), and generative network pθ(x|y, z) end-to-end by maximizing a variational lower bound on the log-likelihood of the observation under the generative model. For labeled data, the class y is observed, so inference is only done on z â ¼ q(z|x, y). The variational lower bound on labeled data is given by: log pθ(x, y) â ¥ â L(x, y) = Ezâ ¼qÏ (z|x,y) [log pθ(x|y, z)] â KL[q(z|x, y)||pθ(y)p(z)] For unlabeled data, difï¬ culties arise because the categorical distribution is not reparameterizable. Kingma et al. (2014) approach this by marginalizing out y over all classes, so that for unlabeled data, inference is still on qÏ (z|x, y) for each y. The lower bound on unlabeled data is: log po() 2 â U(x) = Eznqg(y,z|x) [log pa(zly, z) + log po(y) + log p(z) â ga(y,2|z)] (9) = YE aolyle)(-L(w,y) + H(ag(ula))) (10) y The full maximization objective is: J = E(x,y)â ¼DL [â L(x, y)] + Exâ ¼DU [â U(x)] + α · E(x,y)â ¼DL[log qÏ (y|x)] (11) where α is the scalar trade-off between the generative and discriminative objectives. One limitation of this approach is that marginalization over all k class values becomes prohibitively expensive for models with a large number of classes. If D, I, G are the computational cost of sam- pling from qÏ (y|x), qÏ
1611.01144#15
1611.01144#17
1611.01144
[ "1602.06725" ]
1611.01144#17
Categorical Reparameterization with Gumbel-Softmax
(z|x, y), and pθ(x|y, z) respectively, then training the unsupervised objective requires O(D + k(I + G)) for each forward/backward step. In contrast, Gumbel-Softmax allows us to backpropagate through y â ¼ qÏ (y|x) for single sample gradient estimation, and achieves a cost of O(D + I + G) per training step. Experimental comparisons in training speed are shown in Figure 5. # 4 EXPERIMENTAL RESULTS
1611.01144#16
1611.01144#18
1611.01144
[ "1602.06725" ]
1611.01144#18
Categorical Reparameterization with Gumbel-Softmax
In our ï¬ rst set of experiments, we compare Gumbel-Softmax and ST Gumbel-Softmax to other stochastic gradient estimators: Score-Function (SF), DARN, MuProp, Straight-Through (ST), and 5 (8) Published as a conference paper at ICLR 2017 Slope-Annealed ST. Each estimator is evaluated on two tasks: (1) structured output prediction and (2) variational training of generative models. We use the MNIST dataset with ï¬ xed binarization for training and evaluation, which is common practice for evaluating stochastic gradient estimators (Salakhutdinov & Murray, 2008; Larochelle & Murray, 2011). Learning rates are chosen from {3eâ 5, 1eâ 5, 3eâ 4, 1eâ 4, 3eâ 3, 1eâ
1611.01144#17
1611.01144#19
1611.01144
[ "1602.06725" ]
1611.01144#19
Categorical Reparameterization with Gumbel-Softmax
3}; we select the best learn- ing rate for each estimator using the MNIST validation set, and report performance on the test set. Samples drawn from the Gumbel-Softmax distribution are continuous during training, but are discretized to one-hot vectors during evaluation. We also found that variance normalization was nec- essary to obtain competitive performance for SF, DARN, and MuProp. We used sigmoid activation functions for binary (Bernoulli) neural networks and softmax activations for categorical variables. Models were trained using stochastic gradient descent with momentum 0.9. 4.1 STRUCTURED OUTPUT PREDICTION WITH STOCHASTIC BINARY NETWORKS The objective of structured output prediction is to predict the lower half of a 28 x 28 MNIST digit given the top half of the image (14 x 28). This is acommon benchmark for training stochastic binary networks (SBN) (Raiko et al.| 2014} Gu et al.| 2016} Mnih & Rezende| 2016). The minimization objective for this conditional generative model is an importance-sampled estimate of the likelihood objective, Ej, po (is|:rper) [2 2, log po (aiower|ti)], where m = 1 is used for training and m = 1000 is used for evaluation. We trained a SBN with two hidden layers of 200 units each. This corresponds to either 200 Bernoulli variables (denoted as 392-200-200-392) or 20 categorical variables (each with 10 classes) with bi- narized activations (denoted as 392-(20 Ã 10)-(20 Ã 10)-392).
1611.01144#18
1611.01144#20
1611.01144
[ "1602.06725" ]
1611.01144#20
Categorical Reparameterization with Gumbel-Softmax
As shown in Figure 3, ST Gumbel-Softmax is on par with the other estimators for Bernoulli vari- ables and outperforms on categorical variables. Meanwhile, Gumbel-Softmax outperforms other estimators on both Bernoulli and Categorical variables. We found that it was not necessary to anneal the softmax temperature for this task, and used a ï¬ xed Ï = 1. (a) (b) Figure 3: Test loss (negative log-likelihood) on the structured output prediction task with binarized MNIST using a stochastic binary network with (a) Bernoulli latent variables (392-200-200-392) and (b) categorical latent variables (392-(20 à 10)-(20 à 10)-392).
1611.01144#19
1611.01144#21
1611.01144
[ "1602.06725" ]
1611.01144#21
Categorical Reparameterization with Gumbel-Softmax
4.2 GENERATIVE MODELING WITH VARIATIONAL AUTOENCODERS We train variational autoencoders (Kingma & Welling, 2013), where the objective is to learn a gener- ative model of binary MNIST images. In our experiments, we modeled the latent variable as a single hidden layer with 200 Bernoulli variables or 20 categorical variables (20Ã 10). We use a learned cat- egorical prior rather than a Gumbel-Softmax prior in the training objective. Thus, the minimization objective during training is no longer a variational bound if the samples are not discrete. In practice,
1611.01144#20
1611.01144#22
1611.01144
[ "1602.06725" ]
1611.01144#22
Categorical Reparameterization with Gumbel-Softmax
6 Published as a conference paper at ICLR 2017 we ï¬ nd that optimizing this objective in combination with temperature annealing still minimizes actual variational bounds on validation and test sets. Like the structured output prediction task, we use a multi-sample bound for evaluation with m = 1000. The temperature is annealed using the schedule Ï = max(0.5, exp(â rt)) of the global training step t, where Ï is updated every N steps. N â {500, 1000} and r â {1eâ 5, 1eâ
1611.01144#21
1611.01144#23
1611.01144
[ "1602.06725" ]
1611.01144#23
Categorical Reparameterization with Gumbel-Softmax
4} are hyperparameters for which we select the best-performing estimator on the validation set and report test performance. As shown in Figure 4, ST Gumbel-Softmax outperforms other estimators for Categorical variables, and Gumbel-Softmax drastically outperforms other estimators in both Bernoulli and Categorical variables. # Bound (nats) (a) (b) Figure 4: Test loss (negative variational lower bound) on binarized MNIST VAE with (a) Bernoulli latent variables (784 â 200 â 784) and (b) categorical latent variables (784 â (20 Ã 10) â 200). Table 1:
1611.01144#22
1611.01144#24
1611.01144
[ "1602.06725" ]
1611.01144#24
Categorical Reparameterization with Gumbel-Softmax
The Gumbel-Softmax estimator outperforms other estimators on Bernoulli and Categorical latent variables. For the structured output prediction (SBN) task, numbers correspond to negative log-likelihoods (nats) of input images (lower is better). For the VAE task, numbers correspond to negative variational lower bounds (nats) on the log-likelihood (lower is better). SBN (Bern.) SBN (Cat.) VAE (Bern.) VAE (Cat.) SF 72.0 73.1 112.2 110.6 DARN MuProp 59.7 67.9 110.9 128.8 58.9 63.0 109.7 107.0 ST 58.9 61.8 116.0 110.9 Annealed ST Gumbel-S. 58.7 61.1 111.5 107.8 58.5 59.0 105.0 101.5 4.3 GENERATIVE SEMI-SUPERVISED CLASSIFICATION We apply the Gumbel-Softmax estimator to semi-supervised classiï¬ cation on the binary MNIST dataset. We compare the original marginalization-based inference approach (Kingma et al., 2014) to single-sample inference with Gumbel-Softmax and ST Gumbel-Softmax. We trained on a dataset consisting of 100 labeled examples (distributed evenly among each of the 10 classes) and 50,000 unlabeled examples, with dynamic binarization of the unlabeled examples for each minibatch. The discriminative model qÏ (y|x) and inference model qÏ (z|x, y) are each im- plemented as 3-layer convolutional neural networks with ReLU activation functions. The generative model pθ(x|y, z) is a 4-layer convolutional-transpose network with ReLU activations.
1611.01144#23
1611.01144#25
1611.01144
[ "1602.06725" ]
1611.01144#25
Categorical Reparameterization with Gumbel-Softmax
Experimental details are provided in Appendix A. Estimators were trained and evaluated against several values of α = {0.1, 0.2, 0.3, 0.8, 1.0} and the best unlabeled classiï¬ cation results for test sets were selected for each estimator and reported 7 Published as a conference paper at ICLR 2017 in Table 2. We used an annealing schedule of Ï = max(0.5, exp(â 3eâ 5 · t)), updated every 2000 steps.
1611.01144#24
1611.01144#26
1611.01144
[ "1602.06725" ]
1611.01144#26
Categorical Reparameterization with Gumbel-Softmax
In Kingma et al. (2014), inference over the latent state is done by marginalizing out y and using the reparameterization trick for sampling from qÏ (z|x, y). However, this approach has a computational cost that scales linearly with the number of classes. Gumbel-Softmax allows us to backpropagate directly through single samples from the joint qÏ (y, z|x), achieving drastic speedups in training without compromising generative or classiï¬
1611.01144#25
1611.01144#27
1611.01144
[ "1602.06725" ]
1611.01144#27
Categorical Reparameterization with Gumbel-Softmax
cation performance. (Table 2, Figure 5). Table 2: Marginalizing over y and single-sample variational inference perform equally well when applied to image classiï¬ cation on the binarized MNIST dataset (Larochelle & Murray, 2011). We report variational lower bounds and image classiï¬ cation accuracy for unlabeled data in the test set. Marginalization Gumbel ST Gumbel-Softmax 92.6% 92.4% 93.6% In Figure 5, we show how Gumbel-Softmax versus marginalization scales with the number of cat- egorical classes. For these experiments, we use MNIST images with randomly generated labels. Training the model with the Gumbel-Softmax estimator is 2à as fast for 10 classes and 9.9à as fast for 100 classes. (Oth 3% BGS AS O/J2AZÂ¥-SbIIP OTZBYS e729 "OLF EF E97 BF OLIPI3RBYEODEFG Ol23 4567989 y ~ (a) (b) Figure 5:
1611.01144#26
1611.01144#28
1611.01144
[ "1602.06725" ]
1611.01144#28
Categorical Reparameterization with Gumbel-Softmax
Gumbel-Softmax allows us to backpropagate through samples from the posterior g4(y|), providing a scalable method for semi-supervised learning for tasks with a large number of classes. (a) Comparison of training speed (steps/sec) between Gumbel-Softmax and marginaliza- tion on a semi-supervised VAE. Evaluations were performed on a GTX Titan X® GPU. (6) Visualization of MNIST analogies generated by varying style variable z across each row and class variable y across each column.
1611.01144#27
1611.01144#29
1611.01144
[ "1602.06725" ]
1611.01144#29
Categorical Reparameterization with Gumbel-Softmax
# 5 DISCUSSION The primary contribution of this work is the reparameterizable Gumbel-Softmax distribution, whose corresponding estimator affords low-variance path derivative gradients for the categorical distri- bution. We show that Gumbel-Softmax and Straight-Through Gumbel-Softmax are effective on structured output prediction and variational autoencoder tasks, outperforming existing stochastic gradient estimators for both Bernoulli and categorical latent variables. Finally, Gumbel-Softmax enables dramatic speedups in inference over discrete latent variables.
1611.01144#28
1611.01144#30
1611.01144
[ "1602.06725" ]
1611.01144#30
Categorical Reparameterization with Gumbel-Softmax
# ACKNOWLEDGMENTS We sincerely thank Luke Vilnis, Vincent Vanhoucke, Luke Metz, David Ha, Laurent Dinh, George Tucker, and Subhaneil Lahiri for helpful discussions and feedback. 8 Published as a conference paper at ICLR 2017 # REFERENCES Y. Bengio, N. L´eonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
1611.01144#29
1611.01144#31
1611.01144
[ "1602.06725" ]
1611.01144#31
Categorical Reparameterization with Gumbel-Softmax
Info- gan: Interpretable representation learning by information maximizing generative adversarial nets. CoRR, abs/1606.03657, 2016. J. Chung, S. Ahn, and Y. Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016. P. W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75â 84, 1990. A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwi´nska, S. G. Col- menarejo, E. Grefenstette, T. Ramalho, J. Agapiou, et al.
1611.01144#30
1611.01144#32
1611.01144
[ "1602.06725" ]
1611.01144#32
Categorical Reparameterization with Gumbel-Softmax
Hybrid computing using a neural net- work with dynamic external memory. Nature, 538(7626):471â 476, 2016. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.
1611.01144#31
1611.01144#33
1611.01144
[ "1602.06725" ]
1611.01144#33
Categorical Reparameterization with Gumbel-Softmax
S. Gu, S. Levine, I. Sutskever, and A Mnih. MuProp: Unbiased Backpropagation for Stochastic Neural Networks. ICLR, 2016. E. J. Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Ofï¬ ce, 1954. D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
1611.01144#32
1611.01144#34
1611.01144
[ "1602.06725" ]
1611.01144#34
Categorical Reparameterization with Gumbel-Softmax
D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581â 3589, 2014. H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011. C. J. Maddison, D. Tarlow, and T. Minka. A* sampling.
1611.01144#33
1611.01144#35
1611.01144
[ "1602.06725" ]
1611.01144#35
Categorical Reparameterization with Gumbel-Softmax
In Advances in Neural Information Pro- cessing Systems, pp. 3086â 3094, 2014. C. J. Maddison, A. Mnih, and Y. Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. ArXiv e-prints, November 2016. A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. ICML, 31, 2014.
1611.01144#34
1611.01144#36
1611.01144
[ "1602.06725" ]
1611.01144#36
Categorical Reparameterization with Gumbel-Softmax
A. Mnih and D. J. Rezende. Variational inference for monte carlo objectives. arXiv preprint arXiv:1602.06725, 2016. J. Paisley, D. Blei, and M. Jordan. Variational Bayesian Inference with Stochastic Search. ArXiv e-prints, June 2012. Gabriel Pereyra, Geoffrey Hinton, George Tucker, and Lukasz Kaiser. Regularizing neural networks by penalizing conï¬ dent output distributions. 2016. J. W Rae, J. J Hunt, T. Harley, I. Danihelka, A. Senior, G. Wayne, A. Graves, and T. P Lillicrap.
1611.01144#35
1611.01144#37
1611.01144
[ "1602.06725" ]
1611.01144#37
Categorical Reparameterization with Gumbel-Softmax
Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes. ArXiv e-prints, October 2016. T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014. 9 Published as a conference paper at ICLR 2017 D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate infer- ence in deep generative models. arXiv preprint arXiv:1401.4082, 2014a. D. J. Rezende, S. Mohamed, and D. Wierstra.
1611.01144#36
1611.01144#38
1611.01144
[ "1602.06725" ]
1611.01144#38
Categorical Reparameterization with Gumbel-Softmax
Stochastic backpropagation and approximate infer- ence in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278â 1286, 2014b. J. T. Rolfe. Discrete Variational Autoencoders. ArXiv e-prints, September 2016. R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pp. 872â
1611.01144#37
1611.01144#39
1611.01144
[ "1602.06725" ]
1611.01144#39
Categorical Reparameterization with Gumbel-Softmax
879. ACM, 2008. J. Schulman, N. Heess, T. Weber, and P. Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3528â 3536, 2015. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
1611.01144#38
1611.01144#40
1611.01144
[ "1602.06725" ]
1611.01144#40
Categorical Reparameterization with Gumbel-Softmax
R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992. K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044, 2015. A SEMI-SUPERVISED CLASSIFICATION MODEL Figures 6 and 7 describe the architecture used in our experiments for semi-supervised classification (Section 4.3). Figure 6: Semi-supervised generative model proposed by Kingma et al. (2014). (a) Generative model pθ(x|y, z) synthesizes images from latent Gaussian
1611.01144#39
1611.01144#41
1611.01144
[ "1602.06725" ]