| column | type | min | max |
|---|---|---|---|
| doi | stringlengths | 10 | 10 |
| chunk-id | int64 | 0 | 936 |
| chunk | stringlengths | 401 | 2.02k |
| id | stringlengths | 12 | 14 |
| title | stringlengths | 8 | 162 |
| summary | stringlengths | 228 | 1.92k |
| source | stringlengths | 31 | 31 |
| authors | stringlengths | 7 | 6.97k |
| categories | stringlengths | 5 | 107 |
| comment | stringlengths | 4 | 398 |
| journal_ref | stringlengths | 8 | 194 |
| primary_category | stringlengths | 5 | 17 |
| published | stringlengths | 8 | 8 |
| updated | stringlengths | 8 | 8 |
| references | list | | |
1707.01891
4
∗Also at the Department of Computing Science, University of Alberta, [email protected]. 1An implementation of Trust-PCL is available at https://github.com/tensorflow/models/tree/master/research/pcl_rl
optimal policy and value function satisfy a set of pathwise consistency properties along any sampled path (Nachum et al., 2017), which allows both on- and off-policy data to be incorporated in an actor-critic algorithm, PCL. The original PCL algorithm optimized an entropy regularized maximum reward objective and was evaluated on relatively simple tasks. Here we extend the ideas of PCL to achieve strong results on standard, challenging continuous control benchmarks. The main observation is that by alternatively augmenting the maximum reward objective with a relative entropy regularizer, the optimal policy and values still satisfy a certain set of pathwise consistencies along any sampled trajectory. The resulting objective is equivalent to maximizing expected reward subject to a penalty-based constraint on divergence from a reference (i.e., previous) policy.
1707.01891#4
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
5
We exploit this observation to propose a new off-policy trust region algorithm, Trust-PCL, that is able to exploit off-policy data to train policy and value estimates. Moreover, we present a simple method for determining the coefficient on the relative entropy regularizer to remain agnostic to reward scale, hence ameliorating the task of hyperparameter tuning. We find that the incorporation of a relative entropy regularizer is crucial for good and stable performance. We evaluate Trust-PCL against TRPO, and observe that Trust-PCL is able to solve difficult continuous control tasks, while improving the performance of TRPO both in terms of the final reward achieved as well as sample efficiency.
# 2 RELATED WORK
Trust Region Methods. Gradient descent is the predominant optimization method for neural networks. A gradient descent step is equivalent to solving a trust region constrained optimization,
minimize_dθ ℓ(θ + dθ) ≈ ℓ(θ) + ∇ℓ(θ)⊤dθ s.t. dθ⊤dθ ≤ ε, (1)
which yields the locally optimal update dθ = −η∇ℓ(θ) with η = √ε/‖∇ℓ(θ)‖; hence, by considering a Euclidean ball, gradient descent assumes the parameters lie in a Euclidean space.
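To make (1) concrete, here is a minimal sketch (our own toy example, not taken from the paper's released code) that computes the trust-region-constrained gradient step dθ = −η∇ℓ(θ) with η = √ε/‖∇ℓ(θ)‖:

```python
import numpy as np

def trust_region_gd_step(grad, epsilon):
    """Solve min_dtheta grad^T dtheta s.t. dtheta^T dtheta <= epsilon.

    The optimizer is dtheta = -eta * grad with eta = sqrt(epsilon) / ||grad||,
    i.e., an ordinary gradient step whose length is set by the trust region."""
    eta = np.sqrt(epsilon) / (np.linalg.norm(grad) + 1e-12)
    return -eta * grad

# Toy quadratic loss l(theta) = 0.5 * ||theta||^2, so grad = theta.
theta = np.array([1.0, -2.0])
d_theta = trust_region_gd_step(grad=theta, epsilon=0.01)
print(theta + d_theta)  # a small step toward the minimum at the origin
```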
1707.01891#5
1707.01891
6
However, in machine learning, particularly in the context of multi-layer neural network training, Euclidean geometry is not necessarily the best way to characterize proximity in parameter space. It is often more effective to define an appropriate Riemannian metric that respects the loss surface (Amari, 2012), which allows much steeper descent directions to be identified within a local neighborhood (e.g., Amari (1998); Martens & Grosse (2015)). Whenever the loss is defined in terms of a Bregman divergence between an (unknown) optimal parameter θ* and model parameter θ, i.e., ℓ(θ) = D_F(θ*, θ), it is natural to use the same divergence to form the trust region:
minimize_dθ D_F(θ*, θ + dθ) s.t. D_F(θ, θ + dθ) ≤ ε. (2)
1707.01891#6
1707.01891
7
minimize_dθ D_F(θ*, θ + dθ) s.t. D_F(θ, θ + dθ) ≤ ε. (2)
The natural gradient (Amari, 1998) is a generalization of gradient descent where the Fisher information matrix F(θ) is used to define the local geometry of the parameter space around θ. If a parameter update is constrained by dθ⊤F(θ)dθ ≤ ε, a descent direction of dθ = −ηF(θ)⁻¹∇ℓ(θ) is obtained. This geometry is especially effective for optimizing the log-likelihood of a conditional probabilistic model, where the objective is in fact the KL divergence D_KL(θ*, θ). The local optimization is,
minimize_dθ D_KL(θ*, θ + dθ) s.t. D_KL(θ, θ + dθ) ≤ ε. (3)
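As an illustration of the natural gradient update just described, the following sketch (hypothetical helper names, not the released implementation) computes dθ = −ηF(θ)⁻¹∇ℓ(θ) with η chosen so that the constraint dθ⊤F(θ)dθ = ε holds with equality:

```python
import numpy as np

def natural_gradient_step(grad, fisher, epsilon):
    """Compute d_theta = -eta * F^{-1} grad, with eta chosen so that
    d_theta^T F d_theta = epsilon (the KL trust region, to second order)."""
    nat_grad = np.linalg.solve(fisher, grad)           # F^{-1} grad
    # d^T F d = eta^2 * grad^T F^{-1} grad  =>  eta = sqrt(eps / grad^T F^{-1} grad)
    eta = np.sqrt(epsilon / (grad @ nat_grad + 1e-12))
    return -eta * nat_grad

# Toy example with an anisotropic Fisher matrix.
F = np.array([[4.0, 0.0], [0.0, 0.25]])
g = np.array([1.0, 1.0])
d = natural_gradient_step(g, F, epsilon=0.01)
print(d, d @ F @ d)  # the second value is ~0.01, i.e., on the trust-region boundary
```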
1707.01891#7
1707.01891
8
minimize_dθ D_KL(θ*, θ + dθ) s.t. D_KL(θ, θ + dθ) ≤ ε. (3)
The natural gradient approximates the trust region by D_KL(a, b) ≈ (a − b)⊤F(a)(a − b), which is accurate up to a second order Taylor approximation. Previous work (Kakade, 2002; Bagnell & Schneider, 2003; Peters & Schaal, 2008; Schulman et al., 2015) has applied natural gradient to policy optimization, locally improving expected reward subject to variants of dθ⊤F(θ)dθ ≤ ε. Recently, TRPO (Schulman et al., 2015; 2016) has achieved state-of-the-art results in continuous control by adding several approximations to the natural gradient to make nonlinear policy optimization feasible. Another approach to trust region optimization is given by proximal gradient methods (Parikh et al., 2014). The class of proximal gradient methods most similar to our work are those that replace the hard constraint in (2) with a penalty added to the objective. These techniques have recently become popular in RL (Wang et al., 2016; Heess et al., 2017; Schulman et al., 2017b), although in terms of final reward performance on continuous control benchmarks, TRPO is still considered to be the state-of-the-art.
1707.01891#8
1707.01891
9
Norouzi et al. (2016) make the observation that entropy regularized expected reward may be expressed as a reversed KL divergence D_KL(θ, θ*), which suggests that an alternative to the constraint in (3) should be used when such regularization is present:
minimize_dθ D_KL(θ + dθ, θ*) s.t. D_KL(θ + dθ, θ) = dθ⊤F(θ + dθ)dθ ≤ ε. (4)
Unfortunately, this update requires computing the Fisher matrix at the endpoint of the update. The use of F(θ) in previous work can be considered to be an approximation when entropy regularization is present, but it is not ideal, particularly if dθ is large. In this paper, by contrast, we demonstrate that the optimal dθ under the reverse KL constraint D_KL(θ + dθ, θ) ≤ ε can indeed be characterized. Defining the constraint in this way appears to be more natural and effective than that of TRPO.
1707.01891#9
1707.01891
10
Softmax Consistency. To comply with the information geometry over policy parameters, previous work has used the relative entropy (i.e., KL divergence) to regularize policy optimization, resulting in a softmax relationship between the optimal policy and state values (Peters et al., 2010; Azar et al., 2012; 2011; Fox et al., 2016; Rawlik et al., 2013) under single-step rollouts. Our work is unique in that we leverage consistencies over multi-step rollouts. The existence of multi-step softmax consistencies has been noted by prior work, first by Nachum et al. (2017) in the presence of entropy regularization. The existence of the same consistencies with relative entropy has been noted by Schulman et al. (2017a). Our work presents multi-step consistency relations for a hybrid relative entropy plus entropy regularized expected reward objective, interpreting relative entropy regularization as a trust region constraint. This work is also distinct from prior work in that the coefficient of relative entropy can be automatically determined, which we have found to be especially crucial in cases where the reward distribution changes dramatically during training.
1707.01891#10
1707.01891
11
Most previous work on softmax consistency (e.g., Fox et al. (2016); Azar et al. (2012); Nachum et al. (2017)) has only been evaluated on relatively simple tasks, including grid-world and discrete algorithmic environments. Rawlik et al. (2013) conducted evaluations on simple variants of the CartPole and Pendulum continuous control tasks. More recently, Haarnoja et al. (2017) showed that soft Q-learning (a single-step special case of PCL) can succeed on more challenging environments, such as a variant of the Swimmer task we consider below. By contrast, this paper presents a successful application of the softmax consistency concept to difficult and standard continuous-control benchmarks, resulting in performance that is competitive with and in some cases beats the state-of-the-art.
# 3 NOTATION & BACKGROUND
1707.01891#11
1707.01891
12
# 3 NOTATION & BACKGROUND
We model an agent’s behavior by a policy distribution π(a | s) over a set of actions (possibly discrete or continuous). At time step t, the agent encounters a state st and performs an action at sampled from π(a | st). The environment then returns a scalar reward rt ∼ r(st, at) and transitions to the next state st+1 ∼ ρ(st, at). When formulating expectations over actions, rewards, and state transitions we will often omit the sampling distributions, π, r, and ρ, respectively.
Maximizing Expected Reward. The standard objective in RL is to maximize expected future discounted reward. We formulate this objective on a per-state basis recursively as
OER(s, π) = E_{a,r,s'}[r + γ OER(s', π)]. (5)
The overall, state-agnostic objective is the expected per-state objective when states are sampled from interactions with the environment:
OER(π) = Es[OER(s, π)]. (6)
Most policy-based algorithms, including actor-critic methods (Konda & Tsitsiklis, 2000), aim to optimize OER given a parameterized policy.
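A sample-based reading of the recursion in (5): the sketch below (an illustrative helper, assuming a `sample_episode` callable that runs the policy once and returns that rollout's rewards) estimates OER by averaging discounted returns over rollouts.

```python
def discounted_return(rewards, gamma):
    """Unrolled form of the recursion O_ER(s, pi) = E[r + gamma * O_ER(s', pi)]
    for one sampled trajectory: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def estimate_oer(sample_episode, num_episodes=100, gamma=0.99):
    """Monte Carlo estimate of O_ER: average discounted return over rollouts.
    `sample_episode` is a placeholder callable, not a real API."""
    returns = [discounted_return(sample_episode(), gamma) for _ in range(num_episodes)]
    return sum(returns) / len(returns)
```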
1707.01891#12
1707.01891
13
(6) Most policy-based algorithms, including actor-critic methods (Konda & Tsitsiklis, 2000), aim to optimize OER given a parameterized policy.
Path Consistency Learning (PCL). Inspired by Williams & Peng (1991), Nachum et al. (2017) augment the objective OER in (5) with a discounted entropy regularizer to derive an objective,
OENT(s, π) = OER(s, π) + τ H(s, π), (7)
where τ ≥ 0 is a user-specified temperature parameter that controls the degree of entropy regularization, and the discounted entropy H(s, π) is recursively defined as
H(s, π) = E_{a,s'}[−log π(a | s) + γ H(s', π)]. (8)
Note that the objective OENT(s, π) can then be re-expressed recursively as,
OENT(s, π) = E_{a,r,s'}[r − τ log π(a | s) + γ OENT(s', π)]. (9)
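The entropy-regularized objective (9) can be estimated along a sampled trajectory by folding the −τ log π(a|s) bonus into the discounted sum; a minimal sketch, assuming per-step rewards and action log-probabilities have already been recorded, is below.

```python
def entropy_regularized_return(rewards, log_probs, tau, gamma):
    """Sample-based estimate of O_ENT(s_0, pi) from (9):
    sum_t gamma^t * (r_t - tau * log pi(a_t | s_t))."""
    return sum((gamma ** t) * (r - tau * lp)
               for t, (r, lp) in enumerate(zip(rewards, log_probs)))

# Example: a 3-step trajectory with recorded rewards and action log-probabilities.
print(entropy_regularized_return(rewards=[1.0, 0.5, 2.0],
                                 log_probs=[-0.1, -0.7, -0.3],
                                 tau=0.1, gamma=0.99))
```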
1707.01891#13
1707.01891
14
Nachum et al. (2017) show that the optimal policy π* for OENT and V*(s) = OENT(s, π*) mutually satisfy a softmax temporal consistency constraint along any sequence of states s0, . . . , sd starting at s0 and a corresponding sequence of actions a0, . . . , ad−1:
V*(s0) = E[ γ^d V*(sd) + Σ_{i=0}^{d−1} γ^i (ri − τ log π*(ai|si)) ]. (10)
This observation led to the development of the PCL algorithm, which attempts to minimize squared error between the LHS and RHS of (10) to simultaneously optimize parameterized πθ and Vφ. Importantly, PCL is applicable to both on-policy and off-policy trajectories.
1707.01891#14
1707.01891
15
Trust Region Policy Optimization (TRPO). As noted, standard policy-based algorithms for maximizing OER can be unstable and require small learning rates for training. To alleviate this issue, Schulman et al. (2015) proposed to perform an iterative trust region optimization to maximize OER. At each step, a prior policy π̃ is used to sample a large batch of trajectories, then π is subsequently optimized to maximize OER while remaining within a constraint defined by the average per-state KL-divergence with π̃. That is, at each iteration TRPO solves the constrained optimization problem,
maximize_π OER(π) s.t. E_{s∼π̃}[KL(π̃(−|s) ‖ π(−|s))] ≤ ε. (11)
The prior policy is then replaced with the new policy π, and the process is repeated.
# 4 METHOD
To enable more stable training and better exploit the natural information geometry of the parameter space, we propose to augment the entropy regularized expected reward objective OENT in (7) with a discounted relative entropy trust region around a prior policy π̃,
maximize_π Es[OENT(s, π)] s.t. Es[G(s, π, π̃)] ≤ ε, (12)
where the discounted relative entropy is recursively defined as
1707.01891#15
1707.01891
16
where the discounted relative entropy is recursively defined as
G(s, π, π̃) = E_{a,s'}[log π(a|s) − log π̃(a|s) + γ G(s', π, π̃)]. (13)
This objective attempts to maximize entropy regularized expected reward while maintaining natural proximity to the previous policy. Although previous work has separately proposed to use relative entropy and entropy regularization, we find that the two components serve different purposes, each of which is beneficial: entropy regularization helps improve exploration, while the relative entropy improves stability and allows for a faster learning rate. This combination is a key novelty. Using the method of Lagrange multipliers, we cast the constrained optimization problem in (12) into maximization of the following objective,
ORELENT(s, π) = OENT(s, π) − λG(s, π, π̃). (14)
Again, the environment-wide objective is the expected per-state objective when states are sampled from interactions with the environment,
ORELENT(π) = Es[ORELENT(s, π)]. (15)
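As a sample-based companion to (13) and (14), the sketch below (illustrative code, not from the paper's implementation) accumulates the discounted log π − log π̃ gap along one trajectory and subtracts λ times it from the entropy-regularized return.

```python
def discounted_relative_entropy(log_probs, prior_log_probs, gamma):
    """Sample estimate of G(s_0, pi, pi_tilde) in (13):
    sum_t gamma^t * (log pi(a_t|s_t) - log pi_tilde(a_t|s_t))."""
    return sum((gamma ** t) * (lp - lp_prior)
               for t, (lp, lp_prior) in enumerate(zip(log_probs, prior_log_probs)))

def relent_return(rewards, log_probs, prior_log_probs, tau, lam, gamma):
    """Sample estimate of O_RELENT(s_0, pi) = O_ENT(s_0, pi) - lam * G(s_0, pi, pi_tilde)."""
    o_ent = sum((gamma ** t) * (r - tau * lp)
                for t, (r, lp) in enumerate(zip(rewards, log_probs)))
    return o_ent - lam * discounted_relative_entropy(log_probs, prior_log_probs, gamma)
```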
1707.01891#16
1707.01891
17
ORELENT(π) = Es[ORELENT(s, π)]. (15)
4.1 PATH CONSISTENCY WITH RELATIVE ENTROPY
A key technical observation is that the ORELENT objective has a similar decomposition structure to OENT, and one can cast ORELENT as an entropy regularized expected reward objective with a set of transformed rewards, i.e.,
ORELENT(s, π) = ÕER(s, π) + (τ + λ)H(s, π), (16)
where ÕER(s, π) is an expected reward objective on a transformed reward function ˜r(s, a) = r(s, a) + λ log π̃(a|s). Thus, in what follows, we derive a corresponding form of the multi-step path consistency in (10). Let π* denote the optimal policy, defined as π* = argmax_π ORELENT(π). As in PCL (Nachum et al., 2017), this optimal policy may be expressed as
1707.01891#17
1707.01891
18
π*(a_t|s_t) = exp{ (E_{˜r_t∼˜r(s_t,a_t), s_{t+1}}[˜r_t + γ V*(s_{t+1})] − V*(s_t)) / (τ + λ) }, (17)
where V* are the softmax state values defined recursively as
V*(s_t) = (τ + λ) log ∫_A exp{ E_{˜r_t∼˜r(s_t,a), s_{t+1}}[˜r_t + γ V*(s_{t+1})] / (τ + λ) } da. (18)
We may re-arrange (17) to yield
V*(s_t) = E_{˜r_t∼˜r(s_t,a_t), s_{t+1}}[˜r_t − (τ + λ) log π*(a_t|s_t) + γ V*(s_{t+1})] (19)
= E_{r_t, s_{t+1}}[r_t − (τ + λ) log π*(a_t|s_t) + λ log π̃(a_t|s_t) + γ V*(s_{t+1})]. (20)
1707.01891#18
1707.01891
19
This is a single-step temporal consistency which may be extended to multiple steps by further expanding V*(s_{t+1}) on the RHS using the same identity. Thus, in general we have the following softmax temporal consistency constraint along any sequence of states defined by a starting state s_t and a sequence of actions a_t, . . . , a_{t+d−1}:
V*(s_t) = E_{r_{t+i}, s_{t+i}}[ γ^d V*(s_{t+d}) + Σ_{i=0}^{d−1} γ^i ( r_{t+i} − (τ + λ) log π*(a_{t+i}|s_{t+i}) + λ log π̃(a_{t+i}|s_{t+i}) ) ]. (21)
4.2 TRUST-PCL
We propose to train a parameterized policy πθ and value estimate Vφ to satisfy the multi-step consistencies in (21). Thus, we define a consistency error for a sequence of states, actions, and rewards s_{t:t+d} ≡ (s_t, a_t, r_t, . . . , s_{t+d−1}, a_{t+d−1}, r_{t+d−1}, s_{t+d}) sampled from the environment as
C(s_{t:t+d}, θ, φ) = −Vφ(s_t) + γ^d Vφ(s_{t+d}) +
1707.01891#19
1707.01891
20
C(s_{t:t+d}, θ, φ) = −Vφ(s_t) + γ^d Vφ(s_{t+d}) + Σ_{i=0}^{d−1} γ^i ( r_{t+i} − (τ + λ) log πθ(a_{t+i}|s_{t+i}) + λ log π˜θ(a_{t+i}|s_{t+i}) ). (22)
We aim to minimize the squared consistency error on every sub-trajectory of length d. That is, the loss for a given batch of episodes (or sub-episodes) S = {s^(k)_{0:T_k}}_{k=1}^{B} is
L(S, θ, φ) = Σ_{k=1}^{B} Σ_{t=0}^{T_k−1} C(s^(k)_{t:t+d}, θ, φ)². (23)
We perform gradient descent on θ and φ to minimize this loss. In practice, we have found that it is beneficial to learn the parameter φ at least as fast as θ, and accordingly, given a mini-batch of episodes we perform a single gradient update on θ and possibly multiple gradient updates on φ (see Appendix for details).
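A minimal NumPy rendering of (22) and (23) is given below; the value estimates and log-probabilities are passed in as plain arrays rather than produced by actual networks, so this is a sketch of the loss computation only, not the paper's training code.

```python
import numpy as np

def consistency_error(v_start, v_end, rewards, log_pi, log_pi_prior, tau, lam, gamma):
    """C(s_{t:t+d}, theta, phi) from (22): -V_phi(s_t) + gamma^d * V_phi(s_{t+d})
    + sum_{i<d} gamma^i * (r_{t+i} - (tau+lam)*log pi_theta(a_{t+i}|s_{t+i})
                                   + lam*log pi_prior(a_{t+i}|s_{t+i})),
    with the rollout length d taken to be len(rewards)."""
    rewards = np.asarray(rewards, dtype=float)
    log_pi = np.asarray(log_pi, dtype=float)
    log_pi_prior = np.asarray(log_pi_prior, dtype=float)
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    inner = rewards - (tau + lam) * log_pi + lam * log_pi_prior
    return -v_start + (gamma ** d) * v_end + np.sum(discounts * inner)

def trust_pcl_loss(batch, tau, lam, gamma):
    """Loss (23): sum of squared consistency errors over a batch, where each entry
    is a tuple (v_start, v_end, rewards, log_pi, log_pi_prior)."""
    return sum(consistency_error(*entry, tau=tau, lam=lam, gamma=gamma) ** 2
               for entry in batch)
```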
1707.01891#20
1707.01891
21
In principle, the mini-batch S may be taken from either on-policy or off-policy trajectories. In our implementation, we utilized a replay buffer prioritized by recency. As episodes (or sub-episodes) are sampled from the environment they are placed in a replay buffer, and a trajectory s_{0:T} is given a priority p(s_{0:T}) equal to the current training step. Then, to sample a batch for training, B episodes are sampled from the replay buffer proportional to exponentiated priority exp{βp(s_{0:T})} for some hyperparameter β ≥ 0. For the prior policy π˜θ, we use a lagged geometric mean of the parameters. At each training step, we update ˜θ ← α˜θ + (1 − α)θ. Thus on average our training scheme attempts to maximize entropy regularized expected reward while penalizing divergence from a policy roughly 1/(1 − α) training steps in the past.
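Two details in this paragraph are easy to gloss over, so here is a short sketch (class and variable names are ours, not the released code's) of recency-prioritized sampling with p(episode) proportional to exp{β · priority}, and of the lagged prior update ˜θ ← α˜θ + (1 − α)θ.

```python
import numpy as np

class RecencyReplayBuffer:
    """Replay buffer whose sampling favors recent experience:
    p(episode) proportional to exp(beta * priority), with priority = training
    step at which the episode was inserted."""

    def __init__(self, beta=0.001):
        self.beta = beta
        self.episodes = []
        self.priorities = []

    def add(self, episode, training_step):
        self.episodes.append(episode)
        self.priorities.append(float(training_step))

    def sample(self, batch_size):
        logits = self.beta * np.asarray(self.priorities)
        probs = np.exp(logits - logits.max())   # shift for numerical stability
        probs /= probs.sum()
        idx = np.random.choice(len(self.episodes), size=batch_size, p=probs)
        return [self.episodes[i] for i in idx]

def update_prior_params(theta_prior, theta, alpha=0.99):
    """Lagged prior update used for pi_theta_tilde:
    theta_tilde <- alpha * theta_tilde + (1 - alpha) * theta,
    i.e., an exponential moving average of the online parameters."""
    return alpha * theta_prior + (1 - alpha) * theta
```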
1707.01891#21
1707.01891
22
4.3 AUTOMATIC TUNING OF THE LAGRANGE MULTIPLIER λ
The use of a relative entropy regularizer as a penalty rather than a constraint introduces several difficulties. The hyperparameter λ must necessarily adapt to the distribution of rewards. Thus, λ must be tuned not only to each environment but also during training on a single environment, since the observed reward distribution changes as the agent’s behavior policy improves. Using a constraint form of the regularizer is more desirable, and others have advocated its use in practice (Schulman et al., 2015) specifically to robustly allow larger updates during training. To this end, we propose to redirect the hyperparameter tuning from λ to ε. Specifically, we present a method which, given a desired hard constraint on the relative entropy defined by ε, approximates the equivalent penalty coefficient λ(ε). This is a key novelty of our work and is distinct from previous attempts at automatically tuning a regularizing coefficient, which iteratively increase and decrease the coefficient based on observed training behavior (Schulman et al., 2017b; Heess et al., 2017).
1707.01891#22
1707.01891
23
We restrict our analysis to the undiscounted setting γ = 1 with entropy regularizer τ = 0. Additionally, we assume deterministic, finite-horizon environment dynamics. An additional assumption we make is that the expected KL-divergence over states is well-approximated by the KL-divergence starting from the unique initial state s0. Although in our experiments these restrictive assumptions are not met, we still found our method to perform well for adapting λ during training. In this setting the optimal policy of (14) is proportional to exponentiated scaled reward. Specifically, for a full episode s_{0:T} = (s0, a0, r0, . . . , sT−1, aT−1, rT−1, sT), we have
π*(s_{0:T}) ∝ π̃(s_{0:T}) exp{R(s_{0:T})/λ}, (24)
where π̃(s_{0:T}) = Π_{t=0}^{T−1} π̃(a_t|s_t) and R(s_{0:T}) = Σ_{t=0}^{T−1} r_t. The normalization factor of π* is
Z = E_{s_{0:T}∼π̃}[exp{R(s_{0:T})/λ}]. (25)
1707.01891#23
1707.01891
24
Z = E_{s_{0:T}∼π̃}[exp{R(s_{0:T})/λ}]. (25)
We would like to approximate the trajectory-wide KL-divergence between π* and π̃. We may express the KL-divergence analytically:
KL(π* ‖ π̃) = E_{s_{0:T}∼π*}[log (π*(s_{0:T}) / π̃(s_{0:T}))] (26)
= E_{s_{0:T}∼π*}[R(s_{0:T})/λ − log Z] (27)
= −log Z + E_{s_{0:T}∼π̃}[ (π*(s_{0:T}) / π̃(s_{0:T})) · R(s_{0:T})/λ ] (28)
= −log Z + E_{s_{0:T}∼π̃}[ (R(s_{0:T})/λ) exp{R(s_{0:T})/λ − log Z} ]. (29)
Since all expectations are with respect to π̃, this quantity is tractable to approximate given episodes sampled from π̃.
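Equation (29) reduces to a few lines of code: given episode returns sampled under π̃ and a candidate λ, the sketch below (illustrative only, with a log-sum-exp shift for numerical stability) estimates log Z and KL(π*‖π̃).

```python
import numpy as np

def estimate_kl(returns, lam):
    """Estimate KL(pi* || pi_tilde) via (29) from episode returns R(s_{0:T})
    sampled under the prior policy pi_tilde, for a candidate penalty lam."""
    x = np.asarray(returns, dtype=float) / lam       # R / lambda
    m = x.max()                                      # log-sum-exp shift
    log_z = m + np.log(np.mean(np.exp(x - m)))       # log E[exp(R / lambda)]
    # KL = -log Z + E[(R / lambda) * exp(R / lambda - log Z)]
    return -log_z + np.mean(x * np.exp(x - log_z))

# Hypothetical returns from four sampled episodes.
print(estimate_kl(returns=[10.0, 12.0, 8.0, 11.0], lam=5.0))
```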
1707.01891#24
1707.01891
25
Since all expectations are with respect to π̃, this quantity is tractable to approximate given episodes sampled from π̃. Therefore, in Trust-PCL, given a set of episodes sampled from the prior policy π˜θ and a desired maximum divergence ε, we can perform a simple line search to find a suitable λ(ε) which yields KL(π*‖π˜θ) as close as possible to ε.
The preceding analysis provided a method to determine λ(ε) given a desired maximum divergence ε. However, there is still a question of whether ε should change during training. Indeed, as episodes may possibly increase in length, KL(π*‖π̃) naturally increases when compared to the average per-state KL(π*(−|s)‖π̃(−|s)), and vice versa for decreasing length. Thus, in practice, given an ε and a set of sampled episodes S = {s^(k)_{0:T_k}}_{k=1}^{N}, we approximate the best λ which yields a maximum divergence of (ε/N) Σ_{k=1}^{N} T_k. This makes it so that ε corresponds more to a constraint on the length-averaged KL-divergence.
To avoid incurring a prohibitively large number of interactions with the environment for each parameter update, in practice we use the last 100 episodes as the set of sampled episodes S.
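Because KL(π*‖π̃) decreases monotonically as λ grows (a larger penalty keeps π* closer to π̃), the line search can be a simple bisection. The sketch below is our own illustrative helper, reusing estimate_kl from the previous snippet, and targets ε times the average sampled episode length as described above.

```python
def find_lambda(returns, episode_lengths, epsilon, lam_low=1e-4, lam_high=1e4, iters=50):
    """Bisection search for lambda(epsilon). The target divergence is epsilon times
    the average sampled episode length, so epsilon acts as a per-step KL budget.
    KL(pi*||pi_tilde) is monotonically decreasing in lambda, so bisection is valid.
    Reuses estimate_kl() from the previous snippet."""
    target = epsilon * sum(episode_lengths) / len(episode_lengths)
    for _ in range(iters):
        lam = (lam_low * lam_high) ** 0.5        # geometric midpoint: lambda spans decades
        if estimate_kl(returns, lam) > target:
            lam_low = lam                        # divergence too large -> strengthen penalty
        else:
            lam_high = lam
    return (lam_low * lam_high) ** 0.5
```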
1707.01891#25
1707.01891
26
While this is not exactly the same as sampling episodes from π˜θ, it is not too far off since π˜θ is a lagged version of the online policy πθ. Moreover, we observed this protocol to work well in practice. A more sophisticated and accurate protocol may be derived by weighting the episodes according to the importance weights corresponding to their true sampling distribution.
# 5 EXPERIMENTS
We evaluate Trust-PCL against TRPO on a number of benchmark tasks. We choose TRPO as a baseline since it is a standard algorithm known to achieve state-of-the-art performance on the continuous control tasks we consider (see e.g., leaderboard results on the OpenAI Gym website (Brockman et al., 2016)). We find that Trust-PCL can match or improve upon TRPO’s performance in terms of both average reward and sample efficiency.
5.1 SETUP
1707.01891#26
1707.01891
27
5.1 SETUP
We chose a number of control tasks available from OpenAI Gym (Brockman et al., 2016). The first task, Acrobot, is a discrete-control task, while the remaining tasks (HalfCheetah, Swimmer, Hopper, Walker2d, and Ant) are well-known continuous-control tasks utilizing the MuJoCo environment (Todorov et al., 2012). For TRPO we trained using batches of Q = 25,000 steps (12,500 for Acrobot), which is the approximate batch size used by other implementations (Duan et al., 2016; Schulman, 2017). Thus, at each training iteration, TRPO samples 25,000 steps using the policy π˜θ and then takes a single step within a KL-ball to yield a new πθ.
1707.01891#27
1707.01891
28
Trust-PCL is off-policy, so to evaluate its performance we alternate between collecting experience and training on batches of experience sampled from the replay buffer. Specifically, we alternate between collecting P = 10 steps from the environment and performing a single gradient step based on a batch of size Q = 64 sub-episodes of length P from the replay buffer, with a recency weight of β = 0.001 on the sampling distribution of the replay buffer. To maintain stability we use α = 0.99 and we modified the loss from squared loss to Huber loss on the consistency error. Since our policy is parameterized by a unimodal Gaussian, it is impossible for it to satisfy all path consistencies, and so we found this crucial for stability. For each of the variants and for each environment, we performed a hyperparameter search to find the best hyperparameters. The plots presented here show the reward achieved during training on the best hyperparameters averaged over the best 4 seeds of 5 randomly seeded training runs. Note that this reward is based on greedy actions (rather than random sampling).
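To make the alternation concrete, here is a schematic of the loop described above; collect_steps, compute_consistency_errors, apply_gradients, and update_prior are placeholder names standing in for environment interaction and the policy/value network updates, not APIs from the released code. The Huber loss applied to the consistency errors is shown explicitly.

```python
import numpy as np

def huber(x, delta=1.0):
    """Huber loss applied elementwise to the consistency errors: quadratic near zero,
    linear in the tails, which tolerates consistencies that a unimodal Gaussian
    policy cannot satisfy exactly."""
    x = np.abs(x)
    return np.where(x <= delta, 0.5 * x ** 2, delta * (x - 0.5 * delta))

def train(env, agent, replay, num_iterations, P=10, Q=64, alpha=0.99):
    """Schematic Trust-PCL loop. `agent` and `replay` are assumed to expose the
    placeholder methods used below; they stand in for the networks, optimizer,
    and the recency-prioritized replay buffer."""
    for step in range(num_iterations):
        replay.add(agent.collect_steps(env, P), training_step=step)  # gather P env steps
        batch = replay.sample(Q)                                     # Q sub-episodes of length P
        errors = agent.compute_consistency_errors(batch)             # C(s_{t:t+d}) per sub-episode
        agent.apply_gradients(huber(np.asarray(errors)).sum())       # one update on theta (and phi)
        agent.update_prior(alpha)                                    # theta_tilde <- alpha*theta_tilde + (1-alpha)*theta
```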
1707.01891#28
1707.01891
29
Experiments were performed using TensorFlow (Abadi et al., 2016). Although each training step of Trust-PCL (a simple gradient step) is considerably faster than TRPO, we found that this does not have an overall effect on the run time of our implementation, due to a combination of the fact that each environment step is used in multiple training steps of Trust-PCL and that a majority of the run time is spent interacting with the environment. A detailed description of our implementation and hyperparameter search is available in the Appendix.
5.2 RESULTS
We present the reward over training of Trust-PCL and TRPO in Figure 1. We find that Trust-PCL can match or beat the performance of TRPO across all environments in terms of both final reward and sample efficiency. These results are especially significant on the harder tasks (Walker2d and Ant). We additionally present our results compared to other published results in Table 1. We find that even when comparing across different implementations, Trust-PCL can match or beat the state-of-the-art.
5.2.1 HYPERPARAMETER ANALYSIS
1707.01891#29
1707.01891
30
5.2.1 HYPERPARAMETER ANALYSIS The most important hyperparameter in our method is ε, which determines the size of the trust region and thus has a critical role in the stability of the algorithm. To showcase this effect, we present the reward during training for several different values of ε in Figure 2. As ε increases, instability increases as well, eventually having an adverse effect on the agent's ability to achieve optimal reward. Figure 1 (panels: Acrobot, HalfCheetah, Swimmer, Hopper, Walker2d, Ant): The results of Trust-PCL against a TRPO baseline. Each plot shows average greedy reward with single standard deviation error intervals capped at the min and max across the 4 best of 5 randomly seeded training runs after choosing the best hyperparameters. The x-axis shows millions of environment steps. We observe that Trust-PCL is consistently able to match and, in many cases, beat TRPO's performance both in terms of reward and sample efficiency.
1707.01891#30
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
31
Figure 2 (panels: Hopper, Walker2d): The results of Trust-PCL across several values of ε, defining the size of the trust region. Each plot shows average greedy reward across the 4 best of 5 randomly seeded training runs after choosing the best hyperparameters. The x-axis shows millions of environment steps. We observe that instability increases with ε. Note that standard PCL (Nachum et al., 2017) corresponds to ε → ∞ (that is, λ = 0). Therefore, standard PCL would fail in these environments, and the use of a trust region is crucial.
1707.01891#31
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
32
The main advantage of Trust-PCL over existing trust region methods for continuous control is its ability to learn in an off-policy manner. The degree to which Trust-PCL is off-policy is determined by a combination of the hyperparameters α, β, and P. To evaluate the importance of training off-policy, we evaluate Trust-PCL with a hyperparameter setting that is more on-policy. We set α = 0.95, β = 0.1, and P = 1,000. In this setting, we also use large batches of Q = 25 episodes of length P (a total of 25,000 environment steps per batch). Figure 3 shows the results of Trust-PCL with our original parameters and this new setting. We note a dramatic advantage in sample efficiency when using off-policy training. Although Trust-PCL (on-policy) can achieve state-of-the-art reward performance, it requires an exorbitant amount of experience. On the other hand, Trust-PCL (off-policy) can be competitive in terms of reward while providing a significant improvement in sample efficiency.
1707.01891#32
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
33
Figure 3 (panels: Hopper, Walker2d): The results of Trust-PCL varying the degree of on/off-policy training. We see that Trust-PCL (on-policy) behaves similarly to TRPO, achieving good final reward but requiring an exorbitant amount of experience collection. When collecting less experience per training step in Trust-PCL (off-policy), we are able to improve sample efficiency while still achieving a competitive final reward.

| Domain | HalfCheetah | Swimmer | Hopper | Walker2d | Ant |
|---|---|---|---|---|---|
| TRPO-GAE | 4871.36 | 137.25 | 3765.78 | 6028.73 | 2918.25 |
| TRPO (rllab) | 2889 | – | – | 1487 | 1520 |
| TRPO (ours) | 4343.6 | 288.1 | 3516.7 | 2838.4 | 4347.5 |
| Trust-PCL | 7057.1 | 297.0 | 3804.9 | 5027.2 | 6104.2 |
| IPG | 4767 | – | – | 3047 | 4415 |
1707.01891#33
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
34
Table 1: Results for best average reward in the first 10M steps of training for our implementations (TRPO (ours) and Trust-PCL) and external implementations. TRPO-GAE results are from Schulman (2017), available on the OpenAI Gym website. TRPO (rllab) and IPG are taken from Gu et al. (2017b). These results are each on different setups with different hyperparameter searches and in some cases different evaluation protocols (e.g., TRPO (rllab) and IPG were run with a simple linear value network instead of the two-hidden-layer network we use). Thus, it is not possible to make any definitive claims based on this data. However, we do conclude that our results are overall competitive with state-of-the-art external implementations. Trust-PCL (off-policy) can be competitive in terms of reward while providing a significant improvement in sample efficiency.
1707.01891#34
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
35
Trust-PCL (off-policy) can be competitive in terms of reward while providing a significant improvement in sample efficiency. One last hyperparameter is τ, determining the degree of exploration. Anecdotally, we found τ not to be of high importance for the tasks we evaluated. Indeed, many of our best results use τ = 0. Including τ > 0 had a marginal effect at best. The reason for this is likely the tasks themselves. Indeed, other works which focus on exploration in continuous control have found the need to propose exploration-advantageous variants of these standard benchmarks (Haarnoja et al., 2017; Houthooft et al., 2016). # 6 CONCLUSION
1707.01891#35
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
36
# 6 CONCLUSION We have presented Trust-PCL, an off-policy algorithm employing a relative-entropy penalty to impose a trust region on a maximum reward objective. We found that Trust-PCL can perform well on a set of standard control tasks, improving upon TRPO both in terms of average reward and sample efficiency. Our best results with Trust-PCL are able to maintain the stability and solution quality of TRPO while approaching the sample efficiency of value-based methods (see e.g., Metz et al. (2017)). This gives hope that the goal of achieving both stability and sample efficiency without trading off one for the other is attainable in a single unifying RL algorithm. # 7 ACKNOWLEDGMENT We thank Matthew Johnson, Luke Metz, Shane Gu, and the Google Brain team for insightful comments and discussions. # REFERENCES Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. arXiv:1605.08695, 2016.
1707.01891#36
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
37
Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Comput., 10, 1998. Shun-Ichi Amari. Differential-geometrical methods in statistics, volume 28. Springer Science & Business Media, 2012. Mohammad Gheshlaghi Azar, Vicenç Gómez, and Hilbert J Kappen. Dynamic policy programming with function approximation. AISTATS, 2011. Mohammad Gheshlaghi Azar, Vicenç Gómez, and Hilbert J Kappen. Dynamic policy programming. JMLR, 13, 2012. J Andrew Bagnell and Jeff Schneider. Covariant policy search. 2003. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016. Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. 2016. Roy Fox, Ari Pakman, and Naftali Tishby. G-learning: Taming the noise in reinforcement learning via soft updates. Uncertainty in Artificial Intelligence, 2016. URL http://arxiv.org/abs/1512.08562.
1707.01891#37
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
38
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine. Q-prop: Sample-efficient policy gradient with an off-policy critic. ICLR, 2017a. Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, Bernhard Schölkopf, and Sergey Levine. Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning. arXiv preprint arXiv:1706.00387, 2017b. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017. Nicolas Heess, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, Ali Eslami, Martin Riedmiller, et al. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017.
1707.01891#38
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
39
Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109–1117, 2016. Sham M Kakade. A natural policy gradient. In NIPS, 2002. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015. Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms, 2000. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. James Martens and Roger Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In ICML, 2015. Luke Metz, Julian Ibarz, Navdeep Jaitly, and James Davidson. Discrete sequential prediction of continuous actions for deep RL. CoRR, abs/1705.05035, 2017. URL http://arxiv.org/abs/1705.05035.
1707.01891#39
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
40
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing atari with deep reinforcement learning. arXiv:1312.5602, 2013. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016. Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. CoRR, abs/1702.08892, 2017. URL http://arxiv.org/abs/1702.08892. Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. Reward augmented maximum likelihood for neural structured prediction. NIPS, 2016. Neal Parikh, Stephen Boyd, et al. Proximal algorithms. Foundations and Trends in Optimization, 1(3):127–239, 2014.
1707.01891#40
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
41
Neal Parikh, Stephen Boyd, et al. Proximal algorithms. Foundations and Trends in Optimization, 1(3):127–239, 2014. Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural networks, 21, 2008. Jan Peters, Katharina Mulling, and Yasemin Altun. Relative entropy policy search. In AAAI, 2010. Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. In Twenty-Third International Joint Conference on Artificial Intelligence, 2013. John Schulman. Modular rl. http://github.com/joschu/modular_rl, 2017. Accessed: 2017-06-01. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, 2015. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. ICLR, 2016. John Schulman, Pieter Abbeel, and Xi Chen. Equivalence between policy gradients and soft q-learning. arXiv preprint arXiv:1704.06440, 2017a.
1707.01891#41
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
42
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017b. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 387–395, 2014. Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026– 5033. IEEE, 2012. Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q- learning. AAAI, 2016. Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224, 2016. Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.
1707.01891#42
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
43
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989. Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 1991. # A IMPLEMENTATION BENEFITS OF TRUST-PCL We have already highlighted the ability of Trust-PCL to use off-policy data to stably train both a parameterized policy and value estimate, which sets it apart from previous methods. We have also noted the ease with which exploration can be incorporated through the entropy regularizer. We elaborate on several additional benefits of Trust-PCL. Compared to TRPO, Trust-PCL is much easier to implement. Standard TRPO implementations perform second-order gradient calculations on the KL-divergence to construct a Fisher information matrix (more specifically, a vector product with the inverse Fisher information matrix). This yields a vector direction for which a line search is subsequently employed to find the optimal step. Compare this to Trust-PCL, which employs simple gradient descent. This makes implementation much more straightforward and easily realizable within standard deep learning frameworks.
1707.01891#43
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
44
Even if one replaces the constraint on the average KL-divergence of TRPO with a simple regularization penalty (as in proximal policy gradient methods (Schulman et al., 2017b; Wang et al., 2016)), optimizing the resulting objective requires computing the gradient of the KL-divergence. In Trust-PCL, there is no such necessity. The per-state KL-divergence need not have an analytically computable gradient. In fact, the KL-divergence need not have a closed form at all. The only requirement of Trust-PCL is that the log-density be analytically computable. This opens up the possible policy parameterizations to a much wider class of functions. While continuous control has traditionally used policies parameterized by unimodal Gaussians, with Trust-PCL the policy can be replaced with something much more expressive, for example mixtures of Gaussians or autoregressive policies as in Metz et al. (2017). We have yet to fully explore these additional benefits in this work, but we hope that future investigations can exploit the flexibility and ease of implementation of Trust-PCL to further the progress of RL in continuous control environments. # B EXPERIMENTAL SETUP We describe in detail the experimental setup regarding implementation and hyperparameter search.
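To make the earlier point concrete, the sketch below shows a policy family whose only interface requirement is an analytic log-density. The mixture-of-Gaussians choice, the class name, and the method names are illustrative assumptions; the paper's experiments use a unimodal Gaussian.

```python
import numpy as np

class MixtureOfGaussiansPolicy:
    """Toy policy whose only requirement is an analytic log-density.

    A loss that only needs log pi(a|s), with no closed-form KL or KL gradient,
    admits richer families than a single Gaussian.
    """
    def __init__(self, weights, means, log_stds):
        self.weights = np.asarray(weights)        # (K,) mixture weights, sum to 1
        self.means = np.asarray(means)            # (K, action_dim)
        self.log_stds = np.asarray(log_stds)      # (K, action_dim)

    def log_prob(self, action):
        """log sum_k w_k N(action; mu_k, diag(sigma_k^2)), computed stably."""
        stds = np.exp(self.log_stds)
        z = ((action - self.means) / stds) ** 2 + 2.0 * self.log_stds + np.log(2.0 * np.pi)
        comp_log_probs = -0.5 * z.sum(axis=1)     # (K,) per-component log-densities
        logits = np.log(self.weights) + comp_log_probs
        m = logits.max()
        return float(m + np.log(np.exp(logits - m).sum()))

    def sample(self, rng):
        """Pick a component, then sample from its Gaussian."""
        k = rng.choice(len(self.weights), p=self.weights)
        return self.means[k] + np.exp(self.log_stds[k]) * rng.standard_normal(self.means.shape[1])
```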
1707.01891#44
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
45
# B EXPERIMENTAL SETUP We describe in detail the experimental setup regarding implementation and hyperparameter search. # B.1 ENVIRONMENTS In Acrobot, episodes were cut off at step 500. For the remaining environments, episodes were cut off at step 1,000. Acrobot, HalfCheetah, and Swimmer are all non-terminating environments. Thus, for these environments, each episode had equal length and each batch contained the same number of episodes. Hopper, Walker2d, and Ant are environments that can terminate the agent. Thus, for these environments, the batch size throughout training remained constant in terms of steps but not in terms of episodes. There exists an additional common MuJoCo task called Humanoid. We found that neither our implementation of TRPO nor Trust-PCL could make more than negligible headway on this task, and so we omit it from the results. We are aware that TRPO with the addition of GAE and enough fine-tuning can be made to achieve good results on Humanoid (Schulman et al., 2016). We decided not to pursue a GAE implementation to keep a fair comparison between variants. Trust-PCL can also be made to incorporate an analogue to GAE (by maintaining consistencies at varying time scales), but we leave this to future work. # B.2 IMPLEMENTATION DETAILS
1707.01891#45
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
46
# B.2 IMPLEMENTATION DETAILS We use fully-connected feed-forward neural networks to represent both policy and value. The policy πθ is represented by a neural network with two hidden layers of dimension 64 with tanh activations. At time step t, the network is given the observation s_t. It produces a vector µ_t, which is combined with a learnable (but t-agnostic) parameter ξ to parametrize a unimodal Gaussian with mean µ_t and standard deviation exp(ξ). The next action a_t is sampled randomly from this Gaussian. The value network V_φ is represented by a neural network with two hidden layers of dimension 64 with tanh activations. At time step t the network is given the observation s_t and the component-wise squared observation s_t ⊙ s_t. It produces a single scalar value. # B.2.1 TRPO LEARNING At each training iteration, both the policy and value parameters are updated. The policy is trained by performing a trust region step according to the procedure described in Schulman et al. (2015).
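For concreteness, a minimal numpy sketch of the forward passes just described: two hidden layers of 64 tanh units, a state-independent log-standard-deviation ξ for the policy, and a value network fed both s_t and its element-wise square. The helper names and the initialization scale are assumptions; training code is omitted.

```python
import numpy as np

def mlp(x, params, activation=np.tanh):
    """Feed-forward network: tanh hidden layers, linear output layer."""
    h = x
    for W, b in params[:-1]:
        h = activation(h @ W + b)
    W, b = params[-1]
    return h @ W + b

def init_params(rng, sizes, scale=0.1):
    """Build [(W, b), ...] for layer sizes, e.g. [obs_dim, 64, 64, out_dim]."""
    return [(scale * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def policy_forward(obs, policy_params, log_std_xi):
    """Gaussian policy: mean from the network, shared learnable log-std xi."""
    mu = mlp(obs, policy_params)
    return mu, np.exp(log_std_xi)

def value_forward(obs, value_params):
    """Value network sees both s_t and the component-wise square s_t * s_t."""
    features = np.concatenate([obs, obs * obs], axis=-1)
    return mlp(features, value_params)[..., 0]
```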
1707.01891#46
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
47
At each training iteration, both the policy and value parameters are updated. The policy is trained by performing a trust region step according to the procedure described in Schulman et al. (2015). The value parameters at each step are solved using an LBFGS optimizer. To avoid instability, the value parameters are solved to fit a mixture of the empirical values and the expected values. That is, we determine φ to minimize Σ_{s∈batch} (V_φ(s) − κ V̂(s) − (1 − κ) V_φ̃(s))², where V̂(s) is the empirical value of s and, again, φ̃ is the previous value parameterization. We use κ = 0.9. This method for training φ is according to that used in Schulman (2017). # B.2.2 TRUST-PCL LEARNING At each training iteration, both the policy and value parameters are updated. The specific updates are slightly different between Trust-PCL (on-policy) and Trust-PCL (off-policy).
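A minimal sketch of the mixed regression target described above; the LBFGS solve itself is omitted and the function names are illustrative.

```python
import numpy as np

def mixed_value_target(empirical_values, prev_values, kappa=0.9):
    """Target = kappa * empirical value + (1 - kappa) * previous network's value."""
    return kappa * np.asarray(empirical_values) + (1.0 - kappa) * np.asarray(prev_values)

def value_fit_objective(v_phi, empirical_values, prev_values, kappa=0.9):
    """Sum of squared errors of V_phi(s) against the mixed target over the batch."""
    target = mixed_value_target(empirical_values, prev_values, kappa)
    return float(np.sum((np.asarray(v_phi) - target) ** 2))
```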
1707.01891#47
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
48
For Trust-PCL (on-policy), the policy is trained by taking a single gradient step using the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.001. The value network update is inspired by that used in TRPO: we perform 5 gradient steps with learning rate 0.001, calculated with respect to a mix between the empirical values and the expected values according to the previous φ̃. We use κ = 0.95. For Trust-PCL (off-policy), both the policy and value parameters are updated in a single step using the Adam optimizer with learning rate 0.0001. For this variant, we also utilize a target value network (lagged at the same rate as the target policy network) to replace the value estimate at the final state for each path. We do not mix between empirical and expected values. B.3 HYPERPARAMETER SEARCH
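A minimal sketch, assuming parameters stored as lists of arrays, of the two mechanics mentioned here: the exponentially lagged (target) copy used for both policy and value networks, and the replacement of the value estimate at a path's final state by the target network's estimate in the off-policy variant. The consistency error itself and the Adam steps are omitted; names are illustrative.

```python
def soft_update(target_params, online_params, alpha=0.99):
    """Exponentially lagged copy: target <- alpha * target + (1 - alpha) * online."""
    return [alpha * t + (1.0 - alpha) * o for t, o in zip(target_params, online_params)]

def bootstrap_final_value(path_values, final_obs, target_value_fn):
    """Replace the value estimate at the final state of a path with the lagged
    (target) value network's estimate."""
    values = list(path_values)
    values[-1] = target_value_fn(final_obs)
    return values
```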
1707.01891#48
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
49
B.3 HYPERPARAMETER SEARCH We found the most crucial hyperparameters for effective learning in both TRPO and Trust-PCL to be ε (the constraint defining the size of the trust region) and d (the rollout determining how to evaluate the empirical value of a state). For TRPO we performed a grid search over ε ∈ {0.01, 0.02, 0.05, 0.1}, d ∈ {10, 50}. For Trust-PCL we performed a grid search over ε ∈ {0.001, 0.002, 0.005, 0.01}, d ∈ {10, 50}. For Trust-PCL we also experimented with the value of τ, either keeping it at a constant 0 (thus, no exploration) or decaying it from 0.1 to 0.0 by a smoothed exponential rate of 0.1 every 2,500 training iterations. We fix the discount to γ = 0.995 for all environments. # C PSEUDOCODE A simplified pseudocode for Trust-PCL is presented in Algorithm 1. # Algorithm 1 Trust-PCL
1707.01891#49
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01891
51
function Gradients({s^(k)_{t:t+P}}, k = 1, ..., B)
    // C is the consistency error defined in Equation 22
    Compute Δθ = Σ_{k=1}^{B} Σ_{p=0}^{P−1} C(s^(k)_{t+p:t+p+d}, θ, φ) ∇_θ C(s^(k)_{t+p:t+p+d}, θ, φ).
    Compute Δφ = Σ_{k=1}^{B} Σ_{p=0}^{P−1} C(s^(k)_{t+p:t+p+d}, θ, φ) ∇_φ C(s^(k)_{t+p:t+p+d}, θ, φ).
    Return Δθ, Δφ
end function

Initialize θ, φ, λ; set θ̃ = θ. Initialize empty replay buffer RB(β).
for i = 0 to N − 1 do
    // Collect
    Sample P steps s_{t:t+P} ~ π_θ on ENV.
    Insert s_{t:t+P} into RB.
    // Train
    Sample batch {s^(k)_{t:t+P}}, k = 1, ..., B, from RB to contain a total of Q transitions (B ≈ Q/P).
    Δθ, Δφ = Gradients({s^(k)_{t:t+P}}).
    Update θ ← θ − η_π Δθ.
    Update φ ← φ − η_v Δφ.
    // Update auxiliary variables
    Update θ̃ = αθ̃ + (1 − α)θ.
    Update λ in terms of ε according to Section 4.3.
end for
1707.01891#51
Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a prohibitively large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL. The algorithm is the result of observing that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. Thus, Trust-PCL is able to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL improves the solution quality and sample efficiency of TRPO.
http://arxiv.org/pdf/1707.01891
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans
cs.AI
ICLR 2018
null
cs.AI
20170706
20180222
[ { "id": "1605.08695" }, { "id": "1707.06347" }, { "id": "1702.08165" }, { "id": "1704.06440" }, { "id": "1509.02971" }, { "id": "1707.02286" }, { "id": "1611.01224" }, { "id": "1606.01540" }, { "id": "1706.00387" } ]
1707.01495
0
# Hindsight Experience Replay Marcin Andrychowicz*, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel†, Wojciech Zaremba† (* [email protected]; † equal advising) OpenAI # Abstract Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI. # 1 Introduction
1707.01495#0
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
1
# 1 Introduction Reinforcement learning (RL) combined with neural networks has recently led to a wide range of successes in learning policies for sequential decision-making problems. This includes simulated environments, such as playing Atari games (Mnih et al., 2015), and defeating the best human player at the game of Go (Silver et al., 2016), as well as robotic tasks such as helicopter control (Ng et al., 2006), hitting a baseball (Peters and Schaal, 2008), screwing a cap onto a bottle (Levine et al., 2015), or door opening (Chebotar et al., 2016). However, a common challenge, especially for robotics, is the need to engineer a reward function that not only reflects the task at hand but is also carefully shaped (Ng et al., 1999) to guide the policy optimization. For example, Popov et al. (2017) use a cost function consisting of five relatively complicated terms which need to be carefully weighted in order to train a policy for stacking a brick on top of another one. The necessity of cost engineering limits the applicability of RL in the real world because it requires both RL expertise and domain-specific knowledge. Moreover, it is not applicable in situations where we do not know what admissible behaviour may look like. It is therefore of great practical relevance to develop algorithms which can learn from unshaped reward signals, e.g. a binary signal indicating successful task completion.
1707.01495#1
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
2
One ability humans have, unlike the current generation of model-free RL algorithms, is to learn almost as much from achieving an undesired outcome as from the desired one. Imagine that you are learning how to play hockey and are trying to shoot a puck into a net. You hit the puck but it misses the net on the right side. The conclusion drawn by a standard RL algorithm in such a situation would be that the performed sequence of actions does not lead to a successful shot, and little (if anything) would be learned. It is however possible to draw another conclusion, namely that this sequence of actions would be successful if the net had been placed further to the right.
1707.01495#2
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
3
In this paper we introduce a technique called Hindsight Experience Replay (HER) which allows the algorithm to perform exactly this kind of reasoning and can be combined with any off-policy RL algorithm. It is applicable whenever there are multiple goals which can be achieved, e.g. achieving each state of the system may be treated as a separate goal. Not only does HER improve the sample efficiency in this setting, but more importantly, it makes learning possible even if the reward signal is sparse and binary. Our approach is based on training universal policies (Schaul et al., 2015a) which take as input not only the current state, but also a goal state. The pivotal idea behind HER is to replay each episode with a different goal than the one the agent was trying to achieve, e.g. one of the goals which was achieved in the episode. # 2 Background In this section we introduce the reinforcement learning formalism used in the paper as well as the RL algorithms we use in our experiments. # 2.1 Reinforcement Learning We consider the standard reinforcement learning formalism consisting of an agent interacting with an environment. To simplify the exposition we assume that the environment is fully observable. An environment is described by a set of states S, a set of actions A, a distribution of initial states p(s_0), a reward function r : S × A → R, transition probabilities p(s_{t+1} | s_t, a_t), and a discount factor γ ∈ [0, 1].
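As a rough, self-contained sketch of the replay-with-substituted-goals idea described above (the strategy for choosing the replayed goals, the number of extra goals k, and the reward convention below are assumptions for illustration, not the paper's exact procedure):

```python
import numpy as np

def her_relabel(episode, reward_fn, rng, k=4):
    """Build replay transitions from one episode, adding copies whose goal is
    replaced by a state actually achieved later in the same episode.

    episode:   list of (state, action, next_state, original_goal) tuples.
    reward_fn: reward_fn(next_state, goal) -> sparse/binary reward,
               e.g. 0.0 if the goal is reached and -1.0 otherwise.
    """
    replay = []
    T = len(episode)
    for t, (s, a, s_next, g) in enumerate(episode):
        # Standard transition with the goal the agent was actually pursuing.
        replay.append((s, a, reward_fn(s_next, g), s_next, g))
        # Extra transitions replayed with goals achieved later in this episode.
        for _ in range(k):
            future = int(rng.integers(t, T))
            g_new = episode[future][2]   # an achieved state, treated as the goal
            replay.append((s, a, reward_fn(s_next, g_new), s_next, g_new))
    return replay
```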
1707.01495#3
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
4
A deterministic policy is a mapping from states to actions: π : S → A. Every episode starts with sampling an initial state s_0. At every timestep t the agent produces an action based on the current state: a_t = π(s_t). Then it gets the reward r_t = r(s_t, a_t) and the environment's new state is sampled from the distribution p(·|s_t, a_t). A discounted sum of future rewards is called a return: R_t = Σ_{i=t}^∞ γ^{i−t} r_i. The agent's goal is to maximize its expected return E[R_0 | s_0]. The Q-function or action-value function is defined as Q^π(s_t, a_t) = E[R_t | s_t, a_t]. Let π* denote an optimal policy, i.e. any policy π* s.t. Q^{π*}(s, a) ≥ Q^π(s, a) for every s ∈ S, a ∈ A and any policy π. All optimal policies have the same Q-function which is called the optimal Q-function and denoted Q*. It is easy to show that it satisfies the following equation, called the Bellman equation: Q*(s, a) = E_{s' ∼ p(·|s,a)} [ r(s, a) + γ max_{a'} Q*(s', a') ]. # 2.2 Deep Q-Networks (DQN)
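To make the notation above concrete, here is a minimal tabular NumPy sketch of the discounted return and of one Bellman-optimality backup; the finite-horizon return and the array shapes are assumptions of this illustration, not something specified in the paper.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """R_0 = sum_i gamma^i * r_i for a finite list of rewards (illustrative)."""
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

def bellman_optimality_backup(Q, R, P, gamma):
    """One tabular backup of Q*(s, a) = E_{s'~p(.|s,a)}[ r(s, a) + gamma * max_a' Q*(s', a') ].
    Q and R have shape [S, A]; P has shape [S, A, S'] with transition probabilities."""
    return R + gamma * np.einsum("sap,p->sa", P, Q.max(axis=1))
```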
1707.01495#4
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
5
# 2.2 Deep Q-Networks (DQN) Deep Q-Networks (DQN) (Mnih et al., 2015) is a model-free RL algorithm for discrete action spaces. Here we sketch it only informally, see Mnih et al. (2015) for more details. In DQN we maintain a neural network Q which approximates Q*. A greedy policy w.r.t. Q is defined as π_Q(s) = argmax_{a ∈ A} Q(s, a). An ε-greedy policy w.r.t. Q is a policy which with probability ε takes a random action (sampled uniformly from A) and takes the action π_Q(s) with probability 1 − ε.
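A short sketch of the greedy and ε-greedy action selection just described; the assumption that `Q` is a callable returning a vector of action values for a state is ours, made only for illustration.

```python
import numpy as np

def greedy_action(Q, s):
    """pi_Q(s) = argmax_a Q(s, a); Q is assumed to map a state to a vector of action values."""
    return int(np.argmax(Q(s)))

def epsilon_greedy_action(Q, s, n_actions, epsilon):
    """With probability epsilon take a uniformly random action, otherwise act greedily."""
    if np.random.rand() < epsilon:
        return int(np.random.randint(n_actions))
    return greedy_action(Q, s)
```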
1707.01495#5
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
6
During training we generate episodes using the ε-greedy policy w.r.t. the current approximation of the action-value function Q. The transition tuples (s_t, a_t, r_t, s_{t+1}) encountered during training are stored in the so-called replay buffer. The generation of new episodes is interleaved with neural network training. The network is trained using mini-batch gradient descent on the loss L which encourages the approximated Q-function to satisfy the Bellman equation: L = E (Q(s_t, a_t) − y_t)², where y_t = r_t + γ max_{a' ∈ A} Q(s_{t+1}, a') and the tuples (s_t, a_t, r_t, s_{t+1}) are sampled from the replay buffer¹. In order to make this optimization procedure more stable the targets y_t are usually computed using a separate target network which changes at a slower pace than the main network. A common practice is to periodically set the weights of the target network to the current weights of the main network (e.g. Mnih et al. (2015)) or to use a polyak-averaged² (Polyak and Juditsky, 1992) version of the main network instead (Lillicrap et al., 2015). ¹The targets y_t depend on the network parameters but this dependency is ignored during backpropagation. # 2.3 Deep Deterministic Policy Gradients (DDPG)
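The target computation and the polyak-averaged target network mentioned above could look roughly as follows; the batch layout, function signatures and the `tau` value are assumptions of this sketch rather than the paper's implementation.

```python
import numpy as np

def dqn_targets(batch, Q_target, gamma):
    """y_t = r_t + gamma * max_a' Q_target(s_{t+1}, a') for a minibatch of transitions.
    `batch` is assumed to be a list of (s, a, r, s_next) tuples and Q_target to map a
    state to a vector of action values; no gradient flows through the targets."""
    return np.array([r + gamma * np.max(Q_target(s_next)) for (_, _, r, s_next) in batch])

def polyak_update(target_params, main_params, tau=0.001):
    """Polyak-averaged target: theta_target <- (1 - tau) * theta_target + tau * theta_main."""
    return [(1.0 - tau) * t + tau * m for t, m in zip(target_params, main_params)]
```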
1707.01495#6
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
7
# 2.3 Deep Deterministic Policy Gradients (DDPG) Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) is a model-free RL algorithm for continuous action spaces. Here we sketch it only informally, see Lillicrap et al. (2015) for more details. In DDPG we maintain two neural networks: a target policy (also called an actor) π : S → A and an action-value function approximator (called the critic) Q : S × A → R. The critic's job is to approximate the actor's action-value function Q^π. Episodes are generated using a behavioral policy which is a noisy version of the target policy, e.g. π_b(s) = π(s) + N(0, 1). The critic is trained in a similar way as the Q-function in DQN but the targets y_t are computed using actions outputted by the actor, i.e. y_t = r_t + γ Q(s_{t+1}, π(s_{t+1})). The actor is trained with mini-batch gradient descent on the loss L_a = −E_s Q(s, π(s)), where s is sampled from the replay buffer. The gradient of L_a w.r.t. actor parameters can be computed by backpropagation through the combined critic and actor networks.
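A rough sketch of the two DDPG ingredients described above, the critic targets computed with target networks and the noisy behavioral policy; the `actor_target`/`critic_target` interfaces and the batch layout are assumptions of this illustration.

```python
import numpy as np

def ddpg_critic_targets(batch, actor_target, critic_target, gamma):
    """y_t = r_t + gamma * Q(s_{t+1}, pi(s_{t+1})), computed with target networks.
    `batch` is assumed to be a list of (s, a, r, s_next) tuples."""
    return np.array([r + gamma * critic_target(s_next, actor_target(s_next))
                     for (_, _, r, s_next) in batch])

def behavioral_action(actor, s, noise_std=1.0):
    """Noisy version of the target policy, e.g. pi_b(s) = pi(s) + N(0, noise_std^2)."""
    a = np.asarray(actor(s), dtype=float)
    return a + noise_std * np.random.randn(*a.shape)
```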
1707.01495#7
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
8
# 2.4 Universal Value Function Approximators (UVFA) Universal Value Function Approximators (UVFA) (Schaul et al., 2015a) is an extension of DQN to the setup where there is more than one goal we may try to achieve. Let G be the space of possible goals. Every goal g ∈ G corresponds to some reward function r_g : S × A → R. Every episode starts with sampling a state-goal pair from some distribution p(s_0, g). The goal stays fixed for the whole episode. At every timestep the agent gets as input not only the current state but also the current goal, so the policy is a mapping π : S × G → A, and it gets the reward r_t = r_g(s_t, a_t). The Q-function now depends not only on a state-action pair but also on a goal: Q^π(s_t, a_t, g) = E[R_t | s_t, a_t, g]. Schaul et al. (2015a) show that in this setup it is possible to train an approximator to the Q-function using direct bootstrapping from the Bellman equation (just like in the case of DQN) and that a greedy policy derived from it can generalize to previously unseen state-action pairs. The extension of this approach to DDPG is straightforward. # 3 Hindsight Experience Replay # 3.1 A motivating example
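The goal-conditioning described here amounts to feeding the approximators a concatenation of state and goal; a minimal sketch (the helper name and the hypothetical `Q` and `pi` callables are ours):

```python
import numpy as np

def goal_conditioned_input(state, goal):
    """UVFA-style input: the policy and Q-function see the concatenation of state and goal."""
    return np.concatenate([np.asarray(state, dtype=float).ravel(),
                           np.asarray(goal, dtype=float).ravel()])

# Illustrative use with hypothetical approximators Q(x, a) and pi(x):
#   q_value = Q(goal_conditioned_input(s, g), a)
#   action  = pi(goal_conditioned_input(s, g))
```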
1707.01495#8
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
9
# 3 Hindsight Experience Replay # 3.1 A motivating example Consider a bit-flipping environment with the state space S = {0, 1}^n and the action space A = {0, 1, ..., n − 1} for some integer n, in which executing the i-th action flips the i-th bit of the state. For every episode we sample uniformly an initial state as well as a target state and the policy gets a reward of −1 as long as it is not in the target state, i.e. r_g(s, a) = −[s ≠ g].
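A minimal sketch of the bit-flipping environment described above; the `reset`/`step` interface is an assumption modeled on common Gym-style conventions, not the authors' code.

```python
import numpy as np

class BitFlipEnv:
    """State and goal are length-n binary vectors, action i flips bit i, and the
    reward is -1 until the state equals the goal."""

    def __init__(self, n):
        self.n = n

    def reset(self):
        self.state = np.random.randint(0, 2, size=self.n)
        self.goal = np.random.randint(0, 2, size=self.n)
        return self.state.copy(), self.goal.copy()

    def step(self, action):
        self.state[action] = 1 - self.state[action]
        done = bool(np.array_equal(self.state, self.goal))
        reward = 0.0 if done else -1.0
        return self.state.copy(), reward, done
```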
1707.01495#9
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
10
Standard RL algorithms are bound to fail in this environment for n > 40 because they will never experience any reward other than −1. [Figure 1: Bit-flipping experiment. Legend: DQN vs. DQN+HER; y-axis: success rate, x-axis: number of bits n.] Notice that using techniques for improving exploration (e.g. VIME (Houthooft et al., 2016), count-based exploration (Ostrovski et al., 2017) or bootstrapped DQN (Osband et al., 2016)) does not help here because the real problem is not a lack of diversity of states being visited; rather, it is simply impractical to explore such a large state space. The standard solution to this problem would be to use a shaped reward function which is more informative and guides the agent towards the goal, e.g. r_g(s, a) = −‖s − g‖². While using a shaped reward solves the problem in our toy environment, it may be difficult to apply to more complicated problems. We investigate the results of reward shaping experimentally in Sec. 4.4. Instead of shaping the reward we propose a different solution which does not require any domain knowledge. Consider an episode with
1707.01495#10
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
12
a state sequence s_1, ..., s_T and a goal g ≠ s_1, ..., s_T, which implies that the agent received a reward of −1 at every timestep. The pivotal idea behind our approach is to re-examine this trajectory with a different goal: while this trajectory may not help us learn how to achieve the state g, it definitely tells us something about how to achieve the state s_T. This information can be harvested by using an off-policy RL algorithm and experience replay where we replace g in the replay buffer by s_T. In addition we can still replay with the original goal g left intact in the replay buffer. With this modification at least half of the replayed trajectories contain rewards different from −1 and learning becomes much simpler. Fig. 1 compares the final performance of DQN with and without this additional replay technique which we call Hindsight Experience Replay (HER). DQN without HER can only solve the task for n < 13 while DQN with HER easily solves the task for n up to 50. See Appendix A for the details of the experimental setup. Note that this approach combined with powerful function approximators (e.g., deep neural networks) allows the agent to learn how to achieve the goal g even if it has never observed it during training. We more formally describe our approach in the following sections. # 3.2 Multi-goal RL
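The replay trick described in this paragraph, storing each transition once with the original goal and once with the goal achieved at the end of the episode, can be sketched as follows; the transition layout and the `reward_fn` signature are assumptions of this illustration.

```python
def her_final_relabel(transitions, goal, reward_fn):
    """Store each transition with the original goal and with the final state as a substitute goal.
    `transitions` is assumed to be a list of (s, a, s_next) tuples; `reward_fn(s_next, g)`
    returns -1 if the goal is not satisfied in s_next and 0 otherwise."""
    achieved = transitions[-1][2]                                             # s_T
    replay = []
    for (s, a, s_next) in transitions:
        replay.append((s, a, reward_fn(s_next, goal), s_next, goal))          # original goal
        replay.append((s, a, reward_fn(s_next, achieved), s_next, achieved))  # hindsight goal
    return replay
```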
1707.01495#12
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
13
We more formally describe our approach in the following sections. # 3.2 Multi-goal RL We are interested in training agents which learn to achieve multiple different goals. We follow the approach from Universal Value Function Approximators (Schaul et al., 2015a), i.e. we train policies and value functions which take as input not only a state s ∈ S but also a goal g ∈ G. Moreover, we show that training an agent to perform multiple tasks can be easier than training it to perform only one task (see Sec. 4.3 for details) and therefore our approach may be applicable even if there is only one task we would like the agent to perform (a similar situation was recently observed by Pinto and Gupta (2016)). We assume that every goal g ∈ G corresponds to some predicate f_g : S → {0, 1} and that the agent's goal is to achieve any state s that satisfies f_g(s) = 1. In the case when we want to exactly specify the desired state of the system we may use S = G and f_g(s) = [s = g]. The goals can also specify only some properties of the state, e.g. suppose that S = R² and we want to be able to achieve an arbitrary state with the given value of the x coordinate. In this case G = R and f_g((x, y)) = [x = g].
1707.01495#13
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
14
Moreover, we assume that given a state s we can easily find a goal g which is satisfied in this state. More formally, we assume that there is a given mapping m : S → G s.t. ∀ s ∈ S : f_{m(s)}(s) = 1. Notice that this assumption is not very restrictive and can usually be satisfied. In the case where each goal corresponds to a state we want to achieve, i.e. G = S and f_g(s) = [s = g], the mapping m is just the identity. For the case of 2-dimensional states and 1-dimensional goals from the previous paragraph this mapping is also very simple: m((x, y)) = x. A universal policy can be trained using an arbitrary RL algorithm by sampling goals and initial states from some distributions, running the agent for some number of timesteps and giving it a negative reward at every timestep when the goal is not achieved, i.e. r_g(s, a) = −[f_g(s) = 0]. This does not however work very well in practice because this reward function is sparse and not very informative. In order to solve this problem we introduce the technique of Hindsight Experience Replay which is the crux of our approach. # 3.3 Algorithm
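For illustration, the two example goal predicates and state-to-goal mappings from this paragraph written out as code (a trivial sketch, with tuple-based states assumed):

```python
def f_exact(s, g):
    """G = S: the goal is satisfied only in the exact state g, f_g(s) = [s = g]."""
    return 1 if s == g else 0

def f_x_coordinate(s, g):
    """S = R^2, G = R: only the x coordinate has to match the goal, f_g((x, y)) = [x = g]."""
    return 1 if s[0] == g else 0

def m_identity(s):
    """Mapping m when each goal is a full state."""
    return s

def m_x_coordinate(s):
    """Mapping m when goals are x coordinates: m((x, y)) = x."""
    return s[0]
```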
1707.01495#14
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
15
In order to solve this problem we introduce the technique of Hindsight Experience Replay which is the crux of our approach. # 3.3 Algorithm The idea behind Hindsight Experience Replay (HER) is very simple: after experiencing some episode s_0, s_1, ..., s_T we store in the replay buffer every transition s_t → s_{t+1} not only with the original goal used for this episode but also with a subset of other goals. Notice that the goal being pursued influences the agent's actions but not the environment dynamics and therefore we can replay each trajectory with an arbitrary goal assuming that we use an off-policy RL algorithm like DQN (Mnih et al., 2015), DDPG (Lillicrap et al., 2015), NAF (Gu et al., 2016) or SDQN (Metz et al., 2017). One choice which has to be made in order to use HER is the set of additional goals used for replay. In the simplest version of our algorithm we replay each trajectory with the goal m(s_T), i.e. the goal which is achieved in the final state of the episode. We experimentally compare different types and quantities of additional goals for replay in Sec. 4.5. In all cases we also replay each trajectory with the original goal pursued in the episode. See Alg. 1 for a more formal description of the algorithm. Algorithm 1: Hindsight Experience Replay (HER). Given:
1707.01495#15
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
17
• an off-policy RL algorithm A,                       ▷ e.g. DQN, DDPG, NAF, SDQN
• a strategy S for sampling goals for replay,         ▷ e.g. S(s_0, ..., s_T) = m(s_T)
• a reward function r : S × A × G → R.                ▷ e.g. r(s, a, g) = −[f_g(s) = 0]
Initialize A                                          ▷ e.g. initialize neural networks
Initialize replay buffer R
for episode = 1, M do
    Sample a goal g and an initial state s_0.
    for t = 0, T − 1 do
        Sample an action a_t using the behavioral policy from A: a_t ← π_b(s_t || g)    ▷ || denotes concatenation
        Execute the action a_t and observe a new state s_{t+1}
    end for
    for t = 0, T − 1 do
        r_t := r(s_t, a_t, g)
        Store the transition (s_t || g, a_t, r_t, s_{t+1} || g) in R                    ▷ standard experience replay
        Sample a set of additional goals for replay G := S(current episode)
        for g' ∈ G do
            r' := r(s_t, a_t, g')
            Store the transition (s_t || g', a_t, r', s_{t+1} || g') in R               ▷ HER
        end for
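A Python sketch of the data-collection part of Algorithm 1; the optimization steps on A are omitted, and `env`, `policy`, `replay_buffer` and `sample_goals` are assumed interfaces rather than the authors' implementation.

```python
def run_her_episode(env, policy, replay_buffer, reward_fn, sample_goals, T):
    """One data-collection iteration: roll out an episode, then store each transition
    with the original goal and with the additional goals returned by the strategy S."""
    s, g = env.reset()                        # sample a goal g and an initial state s_0
    trajectory = []
    for _ in range(T):
        a = policy(s, g)                      # behavioral policy acting on (s_t || g)
        s_next, _, _ = env.step(a)
        trajectory.append((s, a, s_next))
        s = s_next
    extra_goals = sample_goals(trajectory)    # the strategy S, e.g. [m(s_T)]
    for (s, a, s_next) in trajectory:
        replay_buffer.add(s, g, a, reward_fn(s, a, g), s_next)            # standard replay
        for g2 in extra_goals:
            replay_buffer.add(s, g2, a, reward_fn(s, a, g2), s_next)      # HER replay
    return trajectory
```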
1707.01495#17
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
19
HER may be seen as a form of implicit curriculum as the goals used for replay naturally shift from ones which are simple to achieve even by a random agent to more difficult ones. However, in contrast to explicit curriculum, HER does not require having any control over the distribution of initial environment states. Not only does HER learn with extremely sparse rewards, in our experiments it also performs better with sparse rewards than with shaped ones (see Sec. 4.4). These results are indicative of the practical challenges with reward shaping, and that shaped rewards would often constitute a compromise on the metric we truly care about (such as binary success/failure). # 4 Experiments The video presenting our experiments is available at https://goo.gl/SMrQnI. This section is organized as follows. In Sec. 4.1 we introduce the multi-goal RL environments we use for the experiments as well as our training procedure. In Sec. 4.2 we compare the performance of DDPG with and without HER. In Sec. 4.3 we check if HER improves performance in the single-goal setup. In Sec. 4.4 we analyze the effects of using shaped reward functions. In Sec. 4.5 we compare different strategies for sampling additional goals for HER. In Sec. 4.6 we show the results of the experiments on the physical robot. # 4.1 Environments
1707.01495#19
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
20
# 4.1 Environments There are no standard environments for multi-goal RL and therefore we created our own environments. We decided to use manipulation environments based on an existing hardware robot to ensure that the challenges we face correspond as closely as possible to the real world. In all experiments we use a 7-DOF Fetch Robotics arm which has a two-fingered parallel gripper. The robot is simulated using the MuJoCo (Todorov et al., 2012) physics engine. The whole training procedure is performed in the simulation but we show in Sec. 4.6 that the trained policies perform well on the physical robot without any finetuning.
1707.01495#20
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
21
Figure 2: Different tasks: pushing (top row), sliding (middle row) and pick-and-place (bottom row). The red ball denotes the goal position. Policies are represented as Multi-Layer Perceptrons (MLPs) with Rectified Linear Unit (ReLU) activation functions. Training is performed using the DDPG algorithm (Lillicrap et al., 2015) with Adam (Kingma and Ba, 2014) as the optimizer. For improved efficiency we use 8 workers which average the parameters after every update. See Appendix A for more details and the values of all hyperparameters. We consider 3 different tasks: 1. Pushing. In this task a box is placed on a table in front of the robot and the task is to move it to the target location on the table. The robot fingers are locked to prevent grasping. The learned behaviour is a mixture of pushing and rolling. 2. Sliding. In this task a puck is placed on a long slippery table and the target position is outside of the robot’s reach so that it has to hit the puck with such a force that it slides and then stops in the appropriate place due to friction.
1707.01495#21
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
22
3. Pick-and-place. This task is similar to pushing but the target position is in the air and the fingers are not locked. To make exploration in this task easier we recorded a single state in which the box is grasped and start half of the training episodes from this state³. States: The state of the system is represented in the MuJoCo physics engine and consists of angles and velocities of all robot joints as well as positions, rotations and velocities (linear and angular) of all objects. Goals: Goals describe the desired position of the object (a box or a puck depending on the task) with some fixed tolerance ε, i.e. G = R³ and f_g(s) = [|g − s_object| ≤ ε], where s_object is the position of the object in the state s. The mapping from states to goals used in HER is simply m(s) = s_object. Rewards: Unless stated otherwise we use binary and sparse rewards r(s, a, g) = −[f_g(s') = 0], where s' is the state after the execution of the action a in the state s. We compare sparse and shaped reward functions in Sec. 4.4. State-goal distributions: For all tasks the initial position of the gripper is fixed, while the initial position of the object and the target are randomized. See Appendix A for details.
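The goal predicate with tolerance and the sparse reward defined in this paragraph, written as a small sketch (the argument names are ours; the tolerance ε is passed in rather than fixed here):

```python
import numpy as np

def goal_satisfied(object_pos, g, eps):
    """f_g(s) = [ |g - s_object| <= eps ], with object_pos taken from the state s."""
    return np.linalg.norm(np.asarray(g) - np.asarray(object_pos)) <= eps

def sparse_reward(next_object_pos, g, eps):
    """r(s, a, g) = -[ f_g(s') = 0 ], evaluated on the object position in the next state s'."""
    return 0.0 if goal_satisfied(next_object_pos, g, eps) else -1.0
```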
1707.01495#22
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
23
State-goal distributions: For all tasks the initial position of the gripper is fixed, while the initial position of the object and the target are randomized. See Appendix A for details. ³This was necessary because we could not successfully train any policies for this task without using the demonstration state. We have later discovered that training is possible without this trick if only the goal position is sometimes on the table and sometimes in the air. Observations: In this paragraph relative means relative to the current gripper position. The policy is given as input the absolute position of the gripper, the relative position of the object and the target⁴, as well as the distance between the fingers. The Q-function is additionally given the linear velocity of the gripper and fingers as well as relative linear and angular velocity of the object. We decided to restrict the input to the policy in order to make deployment on the physical robot easier. Actions: None of the problems we consider require gripper rotation and therefore we keep it fixed. The action space is 4-dimensional. Three dimensions specify the desired relative gripper position at the next timestep. We use MuJoCo constraints to move the gripper towards the desired position but Jacobian-based control could be used instead⁵. The last dimension specifies the desired distance between the 2 fingers which are position controlled.
1707.01495#23
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
24
Strategy S for sampling goals for replay: Unless stated otherwise HER uses replay with the goal corresponding to the final state in each episode, i.e. S(s_0, ..., s_T) = m(s_T). We compare different strategies for choosing which goals to replay with in Sec. 4.5. # 4.2 Does HER improve performance? In order to verify if HER improves performance we evaluate DDPG with and without HER on all 3 tasks. Moreover, we compare against DDPG with count-based exploration⁶ (Strehl and Littman, 2005; Kolter and Ng, 2009; Tang et al., 2016; Bellemare et al., 2016; Ostrovski et al., 2017). For HER we store each transition in the replay buffer twice: once with the goal used for the generation of the episode and once with the goal corresponding to the final state from the episode (we call this strategy final). In Sec. 4.5 we perform ablation studies of different strategies S for choosing goals for replay; here we include the best version from Sec. 4.5 in the plot for comparison.
1707.01495#24
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
25
[Figure 3: Learning curves for the multi-goal setup on pushing, sliding and pick-and-place (legend: DDPG, DDPG+count-based exploration, DDPG+HER, DDPG+HER (version from Sec. 4.5); y-axis: success rate, x-axis: epoch number, where every epoch = 800 episodes = 800×50 timesteps). An episode is considered successful if the distance between the object and the goal at the end of the episode is less than 7cm for pushing and pick-and-place and less than 20cm for sliding. The results are averaged across 5 random seeds and shaded areas represent one standard deviation. The red curves correspond to the future strategy with k = 4 from Sec. 4.5 while the blue one corresponds to the final strategy.] From Fig. 3 it is clear that DDPG without HER is unable to solve any of the tasks⁷ and DDPG with count-based exploration is only able to make some progress on the sliding task. On the other hand, DDPG with HER solves all tasks almost perfectly. It confirms that HER is a crucial element which makes learning from sparse, binary rewards possible.
1707.01495#25
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
26
⁴The target position is relative to the current object position. ⁵The successful deployment on a physical robot (Sec. 4.6) confirms that our control model produces movements which are reproducible on the physical robot despite not being fully physically plausible. ⁶We discretize the state space and use an intrinsic reward of the form α/√N, where α is a hyperparameter and N is the number of times the given state was visited. The discretization works as follows. We take the relative position of the box and the target and then discretize every coordinate using a grid with a stepsize β which is a hyperparameter. We have performed a hyperparameter search over α ∈ {0.032, 0.064, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32}, β ∈ {1cm, 2cm, 4cm, 8cm}. The best results were obtained using α = 1 and β = 1cm and these are the results we report. ⁷We also evaluated DQN (without HER) on our tasks and it was not able to solve any of them.
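A sketch of the count-based exploration bonus described in footnote 6; the class interface is an assumption, while the default α = 1 and β = 1 cm match the values reported as best.

```python
import numpy as np
from collections import defaultdict

class CountBasedBonus:
    """Discretize the relative box/target position on a grid of stepsize beta and
    add alpha / sqrt(N), where N is the visit count of the discretized state."""

    def __init__(self, alpha=1.0, beta=0.01):   # alpha = 1, beta = 1 cm reported as best
        self.alpha, self.beta = alpha, beta
        self.counts = defaultdict(int)

    def bonus(self, box_pos, target_pos):
        rel = np.asarray(box_pos, dtype=float) - np.asarray(target_pos, dtype=float)
        key = tuple(np.floor(rel / self.beta).astype(int))
        self.counts[key] += 1
        return self.alpha / np.sqrt(self.counts[key])
```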
1707.01495#26
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
27
⁷We also evaluated DQN (without HER) on our tasks and it was not able to solve any of them. [Figure 4: Learning curves for the single-goal case on pushing, sliding and pick-and-place (legend: DDPG, DDPG+count-based exploration, DDPG+HER; y-axis: success rate, x-axis: epoch number, where every epoch = 800 episodes = 800×50 timesteps).] # 4.3 Does HER improve performance even if there is only one goal we care about? In this section we evaluate whether HER improves performance in the case where there is only one goal we care about. To this end, we repeat the experiments from the previous section but the goal state is identical in all episodes. From Fig. 4 it is clear that DDPG+HER performs much better than pure DDPG even if the goal state is identical in all episodes. More importantly, comparing Fig. 3 and Fig. 4 we can also notice that HER learns faster if training episodes contain multiple goals, so in practice it is advisable to train on multiple goals even if we care only about one of them. # 4.4 How does HER interact with reward shaping?
1707.01495#27
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
28
# 4.4 How does HER interact with reward shaping? So far we only considered binary rewards of the form r(s, a, g) = −[|g − s_object| > ε]. In this section we check how the performance of DDPG with and without HER changes if we replace this reward with one which is shaped. We considered reward functions of the form r(s, a, g) = λ|g − s_object|^p − |g − s'_object|^p, where s' is the state of the environment after the execution of the action a in the state s and λ ∈ {0, 1}, p ∈ {1, 2} are hyperparameters. Fig. 5 shows the results. Surprisingly, neither DDPG nor DDPG+HER was able to successfully solve any of the tasks with any of these reward functions⁸. Our results are consistent with the fact that successful applications of RL to difficult manipulation tasks which do not use demonstrations usually have more complicated reward functions than the ones we tried (e.g. Popov et al. (2017)).
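The family of shaped rewards considered in this section can be written compactly as follows; the function signature and argument names are assumptions of this sketch.

```python
import numpy as np

def shaped_reward(object_pos, next_object_pos, g, lam=1.0, p=2):
    """r(s, a, g) = lam * |g - s_object|^p - |g - s'_object|^p, the family from Sec. 4.4;
    lam in {0, 1} and p in {1, 2} are the hyperparameters the paper considers."""
    d_before = np.linalg.norm(np.asarray(g) - np.asarray(object_pos)) ** p
    d_after = np.linalg.norm(np.asarray(g) - np.asarray(next_object_pos)) ** p
    return lam * d_before - d_after
```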
1707.01495#28
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
29
The following two reasons can cause shaped rewards to perform so poorly: (1) There is a huge discrepancy between what we optimize (i.e. a shaped reward function) and the success condition (i.e. is the object within some radius from the goal at the end of the episode); (2) Shaped rewards penalize inappropriate behaviour (e.g. moving the box in a wrong direction) which may hinder exploration. It can cause the agent to learn not to touch the box at all if it cannot manipulate it precisely, and we noticed such behaviour in some of our experiments. Our results suggest that domain-agnostic reward shaping does not work well (at least in the simple forms we have tried). Of course for every problem there exists a reward which makes it easy (Ng et al., 1999) but designing such shaped rewards requires a lot of domain knowledge and may in some cases not be much easier than directly scripting the policy. This strengthens our belief that learning from sparse, binary rewards is an important problem. # 4.5 How many goals should we replay each trajectory with and how to choose them? In this section we experimentally evaluate different strategies (i.e. S in Alg. 1) for choosing goals to use with HER. So far the only additional goals we used for replay were the ones corresponding to
1707.01495#29
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
30
⁸We also tried to rescale the distances, so that the range of rewards is similar as in the case of binary rewards, clipping big distances and adding a simple (linear or quadratic) term encouraging the gripper to move towards the object, but none of these techniques have led to successful training. [Figure 5: Learning curves for the shaped reward r(s, a, g) = −|g − s'_object|² (it performed best among the shaped rewards we have tried) on pushing, sliding and pick-and-place. Both algorithms fail on all tasks.]
1707.01495#30
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
31
Figure 6: Ablation study of different strategies (no HER, final, random, episode, future) for choosing additional goals for replay on the pushing, sliding and pick-and-place tasks, plotted against the number of additional goals used to replay each transition with (k ∈ {1, 2, 4, 8, 16, all}). The top row shows the highest (across the training epochs) test performance and the bottom row shows the average test performance across all training epochs. On the top right plot the curves for final, episode and future coincide as all these strategies achieve perfect performance on this task.
1707.01495#31
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
32
the final state of the environment and we will call this strategy final. Apart from it we consider the following strategies:
• future — replay with k random states which come from the same episode as the transition being replayed and were observed after it,
• episode — replay with k random states coming from the same episode as the transition being replayed,
• random — replay with k random states encountered so far in the whole training procedure.
All of these strategies have a hyperparameter k which controls the ratio of HER data to data coming from normal experience replay in the replay buffer. The plots comparing different strategies and different values of k can be found in Fig. 6. We can see from the plots that all strategies apart from random solve pushing and pick-and-place almost perfectly regardless of the values of k. In all cases future with k equal to 4 or 8 performs best, and it is the only strategy which is able to solve the sliding task almost perfectly. The learning curves for
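A minimal sketch of how the future strategy above could be implemented; the episode representation and the name `achieved_goals` are illustrative assumptions, not the authors' code.

```python
import random

def future_goals(achieved_goals, t, k=4):
    """'future' strategy: for the transition at timestep t, sample k goals from
    the goals achieved later in the same episode (with replacement)."""
    later = achieved_goals[t + 1:]
    if not later:
        return []
    return [random.choice(later) for _ in range(k)]

# Each sampled goal g' yields an extra replayed transition whose reward is
# recomputed with respect to g' (binary: 0 if g' is satisfied, -1 otherwise).
```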
1707.01495#32
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
33
Figure 7: The pick-and-place policy deployed on the physical robot.
future with k = 4 can be found in Fig. 3. It confirms that the most valuable goals for replay are the ones which are going to be achieved in the near future⁹. Notice that increasing the values of k above 8 degrades performance because the fraction of normal replay data in the buffer becomes very low. # 4.6 Deployment on a physical robot We took a policy for the pick-and-place task trained in the simulator (version with the future strategy and k = 4 from Sec. 4.5) and deployed it on a physical Fetch robot without any finetuning. The box position was predicted using a separately trained CNN using raw Fetch head camera images. See Appendix B for details. Initially the policy succeeded in 2 out of 5 trials. It was not robust to small errors in the box position estimation because it was trained on perfect state coming from the simulation. After retraining the policy with Gaussian noise (std = 1cm) added to observations¹⁰ the success rate increased to 5/5. The video showing some of the trials is available at https://goo.gl/SMrQnI. # 5 Related work
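A small sketch of the retraining trick mentioned above (adding Gaussian noise to the observations so the policy tolerates errors in the estimated box position); the function and argument names are illustrative, not the authors' API.

```python
import numpy as np

def add_observation_noise(observation, std=0.01):
    """Perturb an observation (e.g. the box position, in metres) with zero-mean
    Gaussian noise of std = 1 cm before feeding it to the policy during training."""
    observation = np.asarray(observation, dtype=float)
    return observation + np.random.normal(0.0, std, size=observation.shape)
```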
1707.01495#33
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
34
# 5 Related work The technique of experience replay was introduced in Lin (1992) and became very popular after it was used in the DQN agent playing Atari (Mnih et al., 2015). Prioritized experience replay (Schaul et al., 2015b) is an improvement to experience replay which prioritizes transitions in the replay buffer in order to speed up training. It is orthogonal to our work and both approaches can be easily combined. Learning policies for multiple tasks simultaneously has been heavily explored in the context of policy search, e.g. Schmidhuber and Huber (1990); Caruana (1998); Da Silva et al. (2012); Kober et al. (2012); Devin et al. (2016); Pinto and Gupta (2016). Learning off-policy value functions for multiple tasks was investigated by Foster and Dayan (2002) and Sutton et al. (2011). Our work is most heavily based on Schaul et al. (2015a), who consider training a single neural network approximating multiple value functions. Learning to perform multiple tasks simultaneously has also been investigated for a long time in the context of Hierarchical Reinforcement Learning, e.g. Bakker and Schmidhuber (2004); Vezhnevets et al. (2017).
1707.01495#34
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
35
Our approach may be seen as a form of implicit curriculum learning (Elman, 1993; Bengio et al., 2009). While curriculum is now often used for training neural networks (e.g. Zaremba and Sutskever (2014); Graves et al. (2016)), the curriculum is almost always hand-crafted. The problem of automatic curriculum generation was approached by Schmidhuber (2004) who constructed an asymptotically optimal algorithm for this problem using program search. Another interesting approach is PowerPlay (Schmidhuber, 2013; Srivastava et al., 2013) which is a general framework for automatic task selection. Graves et al. (2017) consider a setup where there is a fixed discrete set of tasks and empirically evaluate different strategies for automatic curriculum generation in this setting. Another approach investigated by Sukhbaatar et al. (2017) and Held et al. (2017) uses self-play between the policy and a task-setter in order to automatically generate goal states which are on the border of what the current policy can achieve. Our approach is orthogonal to these techniques and can be combined with them. ⁹ We have also tried replaying the goals which are close to the ones achieved in the near future but it has not performed better than the future strategy.
1707.01495#35
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
36
⁹ We have also tried replaying the goals which are close to the ones achieved in the near future but it has not performed better than the future strategy. ¹⁰ The Q-function approximator was trained using exact observations. It does not have to be robust to noisy observations because it is not used during the deployment on the physical robot. # 6 Conclusions We introduced a novel technique called Hindsight Experience Replay which makes it possible to apply RL algorithms to problems with sparse and binary rewards. Our technique can be combined with an arbitrary off-policy RL algorithm and we experimentally demonstrated this with DQN and DDPG. We showed that HER allows training policies which push, slide and pick-and-place objects with a robotic arm to the specified positions while the vanilla RL algorithm fails to solve these tasks. We also showed that the policy for the pick-and-place task performs well on the physical robot without any finetuning. As far as we know, it is the first time such complicated behaviours were learned using only sparse, binary rewards. # Acknowledgments We would like to thank Ankur Handa, Jonathan Ho, John Schulman, Matthias Plappert, Tim Salimans, and Vikash Kumar for providing feedback on the previous versions of this manuscript. We would also like to thank Rein Houthooft and the whole OpenAI team for fruitful discussions as well as Bowen Baker for performing some additional experiments. # References
1707.01495#36
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
37
# References Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. Bakker, B. and Schmidhuber, J. (2004). Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In Proc. of the 8-th Conf. on Intelligent Autonomous Systems, pages 438-445. Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. (2016). Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages 1471-1479. Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48. ACM. Caruana, R. (1998). Multitask learning. In Learning to learn, pages 95-133. Springer.
1707.01495#37
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
38
Caruana, R. (1998). Multitask learning. In Learning to learn, pages 95-133. Springer. Chebotar, Y., Kalakrishnan, M., Yahya, A., Li, A., Schaal, S., and Levine, S. (2016). Path integral guided policy search. arXiv preprint arXiv: 1610.00529. Da Silva, B., Konidaris, G., and Barto, A. (2012). Learning parameterized skills. arXiv preprint arXiv: 1206.6398. Devin, C., Gupta, A., Darrell, T., Abbeel, P., and Levine, S. (2016). Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv: 1609.07088. Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71-99. Foster, D. and Dayan, P. (2002). Structure in the space of value functions. Machine Learning, 49(2):325-346.
1707.01495#38
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
39
Foster, D. and Dayan, P. (2002). Structure in the space of value functions. Machine Learning, 49(2):325-346. Graves, A., Bellemare, M. G., Menick, J., Munos, R., and Kavukcuoglu, K. (2017). Automated curriculum learning for neural networks. arXiv preprint arXiv:1704.03003. Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Colmenarejo, S. G., Grefenstette, E., Ramalho, T., Agapiou, J., et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476. Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. (2016). Continuous deep q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748. Held, D., Geng, X., Florensa, C., and Abbeel, P. (2017). Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366.
1707.01495#39
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
40
Houthooft, R., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pages 1109-1117. Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kober, J., Wilhelm, A., Oztop, E., and Peters, J. (2012). Reinforcement learning to adjust parametrized motor primitives to new situations. Autonomous Robots, 33(4):361-379. Kolter, J. Z. and Ng, A. Y. (2009). Near-bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 513-520. ACM. Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2015). End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702.
1707.01495#40
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
41
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Lin, L.-J. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3-4):293-321. Metz, L., Ibarz, J., Jaitly, N., and Davidson, J. (2017). Discrete sequential prediction of continuous actions for deep rl. arXiv preprint arXiv: 1705.05035. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.
1707.01495#41
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
42
Ng, A., Coates, A., Diel, M., Ganapathi, V., Schulte, J., Tse, B., Berger, E., and Liang, E. (2006). Autonomous inverted helicopter flight via reinforcement learning. Experimental Robotics IX, pages 363-372. Ng, A. Y., Harada, D., and Russell, S. (1999). Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278-287. Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. (2016). Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, pages 4026-4034. Ostrovski, G., Bellemare, M. G., Oord, A. v. d., and Munos, R. (2017). Count-based exploration with neural density models. arXiv preprint arXiv:1703.01310. Peters, J. and Schaal, S. (2008). Reinforcement learning of motor skills with policy gradients. Neural networks, 21(4):682-697. Pinto, L. and Gupta, A. (2016). Learning to push by grasping: Using multiple tasks for effective learning. arXiv preprint arXiv:1609.09025.
1707.01495#42
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
43
Polyak, B. T. and Juditsky, A. B. (1992). Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855. Popov, I., Heess, N., Lillicrap, T., Hafner, R., Barth-Maron, G., Vecerik, M., Lampe, T., Tassa, Y., Erez, T., and Riedmiller, M. (2017). Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv:1704.03073. Schaul, T., Horgan, D., Gregor, K., and Silver, D. (2015a). Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1312-1320. Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2015b). Prioritized experience replay. arXiv preprint arXiv:1511.05952. Schmidhuber, J. (2004). Optimal ordered problem solver. Machine Learning, 54(3):211-254.
1707.01495#43
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
44
Schmidhuber, J. (2004). Optimal ordered problem solver. Machine Learning, 54(3):211-254. Schmidhuber, J. (2013). Powerplay: Training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Frontiers in psychology, 4. Schmidhuber, J. and Huber, R. (1990). Learning to generate focus trajectories for attentive vision. Institut für Informatik. Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489. Srivastava, R. K., Steunebrink, B. R., and Schmidhuber, J. (2013). First experiments with powerplay. Neural Networks, 41:130-136. Strehl, A. L. and Littman, M. L. (2005). A theoretical analysis of model-based interval estimation. In Proceedings of the 22nd international conference on Machine learning, pages 856-863. ACM.
1707.01495#44
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
45
Sukhbaatar, S., Kostrikov, I., Szlam, A., and Fergus, R. (2017). Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv: 1703.05407. Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pages 761-768. International Foundation for Autonomous Agents and Multiagent Systems. Tang, H., Houthooft, R., Foote, D., Stooke, A., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). # exploration: A study of count-based exploration for deep reinforcement learning. arXiv preprint arXiv:1611.04717.
1707.01495#45
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
46
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. arXiv preprint arXiv:1703.06907. Todorov, E., Erez, T., and Tassa, Y. (2012). MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026-5033. IEEE. Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. (2017). Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161. Zaremba, W. and Sutskever, I. (2014). Learning to execute. arXiv preprint arXiv:1410.4615. # A Experiment details In this section we provide more details on our experimental setup and hyperparameters used.
1707.01495#46
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
47
# A Experiment details In this section we provide more details on our experimental setup and hyperparameters used. Bit-flipping experiment: We used a network with 1 hidden layer with 256 neurons. The length of each episode was equal to the number of bits and the episode was considered successful if the goal state was achieved at an arbitrary timestep during the episode. All other hyperparameters used were the same as in the case of the DDPG experiments. State-goal distributions: For all tasks the initial position of the gripper is fixed; for the pushing and sliding tasks it is located just above the table surface and for pick-and-place it is located 20cm above the table. The object is placed randomly on the table in a 30cm x 30cm (20cm x 20cm for sliding) square with the center directly under the gripper (both objects are 5cm wide). For pushing, the goal state is sampled uniformly from the same square as the box position. In the pick-and-place task the target is located in the air in order to force the robot to grasp (and not just push). The x and y coordinates of the goal position are sampled uniformly from the mentioned square and the height is sampled uniformly between 10cm and 45cm. For sliding the goal position is sampled from a 60cm x 60cm square centered 40cm away from the initial gripper position. For all tasks we discard initial state-goal pairs in which the goal is already satisfied.
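A sketch of the state-goal sampling described above, assuming positions are given in metres and the squares are axis-aligned and centred under the initial gripper position; all names are illustrative, not the authors' code.

```python
import numpy as np

def sample_pushing_goal(gripper_xy, half_extent=0.15):
    """Pushing: goal sampled uniformly from a 30cm x 30cm square on the table,
    centred under the initial gripper position."""
    return np.asarray(gripper_xy) + np.random.uniform(-half_extent, half_extent, size=2)

def sample_pick_and_place_goal(gripper_xy):
    """Pick-and-place: x, y from the same 30cm x 30cm square, height uniform in [10cm, 45cm]."""
    xy = sample_pushing_goal(gripper_xy)
    z = np.random.uniform(0.10, 0.45)
    return np.array([xy[0], xy[1], z])
```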
1707.01495#47
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
48
Network architecture: Both actor and critic networks have 3 hidden layers with 64 hidden units in each layer. Hidden layers use the ReLU activation function and the actor output layer uses tanh. The output of the tanh is then rescaled so that it lies in the range [−5cm, 5cm]. In order to prevent tanh saturation and vanishing gradients we add the square of their preactivations to the actor's cost function. Training procedure: We train for 200 epochs. Each epoch consists of 50 cycles where each cycle consists of running the policy for 16 episodes and then performing 40 optimization steps on minibatches of size 128 sampled uniformly from a replay buffer consisting of 10⁶ transitions. We update the target networks after every cycle using the decay coefficient of 0.95. Apart from using the target network for computing Q-targets for the critic, we also use it in testing episodes as it is more stable than the main network. The whole training procedure is distributed over 8 threads. For the Adam optimization algorithm we use the learning rate of 0.001 and the default values from the TensorFlow framework (Abadi et al., 2016) for the other hyperparameters. We use the discount factor of γ = 0.98 for all transitions including the ones ending an episode. Moreover, we clip the targets used to train the critic to the range of possible values, i.e. [−1/(1−γ), 0].
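Two of the details above written out as a sketch: the tanh actor output rescaled to the [−5cm, 5cm] action range (with a squared-preactivation penalty against saturation; the penalty coefficient is an assumption), and the critic target clipped to the range of possible returns [−1/(1−γ), 0]. This is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

GAMMA = 0.98
MAX_ACTION = 0.05  # 5 cm per action coordinate

def actor_output(preactivation, penalty_coef=1.0):
    """tanh output rescaled to [-5cm, 5cm]; the squared preactivation is added to
    the actor cost to prevent tanh saturation (penalty_coef is an assumed value)."""
    action = MAX_ACTION * np.tanh(preactivation)
    penalty = penalty_coef * np.sum(np.square(preactivation))
    return action, penalty

def critic_target(reward, next_q):
    """One-step Bellman target; gamma is applied to all transitions, including
    episode-ending ones. Clipped to [-1/(1-gamma), 0] since rewards lie in {-1, 0}."""
    return np.clip(reward + GAMMA * next_q, -1.0 / (1.0 - GAMMA), 0.0)
```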
1707.01495#48
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
49
Input scaling: Neural networks have problems dealing with inputs of different magnitudes and therefore it is crucial to scale them properly. To this end, we rescale inputs to neural networks so that they have mean zero and standard deviation equal to one and then clip them to the range [−5, 5]. Means and standard deviations used for rescaling are computed using all the observations encountered so far in the training. Exploration: The behavioral policy we use for exploration works as follows. With probability 20% we sample (uniformly) a random action from the hypercube of valid actions. Otherwise, we take the output of the policy network and add independently to every coordinate normal noise with standard deviation equal to 5% of the total range of allowed values on this coordinate. Simulation: Every episode consists of 50 environment timesteps, each of which consists of 10 MuJoCo steps with Δt = 0.002s. MuJoCo uses soft constraints for contacts and therefore object penetration is possible. It can be minimized by using a small timestep and more constraint solver epochs but it would slow down the simulation. We encountered some penetration in the pushing task (the agent learnt to push the box into the table in a way that it is pushed out by contact forces onto the target). In order to avoid this behaviour we added to the reward a term penalizing the squared depth of penetration for every contact pair.
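A sketch of the input scaling and exploration scheme described above; the running-statistics bookkeeping and the clipping of the noisy action back into the valid range are assumed implementation details.

```python
import numpy as np

class RunningNormalizer:
    """Rescale inputs to roughly zero mean / unit std using statistics of all
    observations seen so far, then clip to [-5, 5]."""
    def __init__(self, size, clip=5.0):
        self.total = np.zeros(size)
        self.total_sq = np.zeros(size)
        self.count = 1e-4
        self.clip = clip

    def update(self, batch):
        batch = np.atleast_2d(batch)
        self.total += batch.sum(axis=0)
        self.total_sq += np.square(batch).sum(axis=0)
        self.count += batch.shape[0]

    def normalize(self, x):
        mean = self.total / self.count
        std = np.sqrt(np.maximum(self.total_sq / self.count - mean ** 2, 1e-8))
        return np.clip((x - mean) / std, -self.clip, self.clip)

def behavioural_action(policy_action, low, high, random_prob=0.2, noise_frac=0.05):
    """With probability 20% take a uniformly random action; otherwise add independent
    Gaussian noise with std equal to 5% of the allowed range on each coordinate."""
    low, high = np.asarray(low), np.asarray(high)
    if np.random.rand() < random_prob:
        return np.random.uniform(low, high)
    noisy = policy_action + np.random.normal(0.0, noise_frac * (high - low))
    return np.clip(noisy, low, high)
```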
1707.01495#49
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01495
50
Training time: Training for 200 epochs took us approximately 2.5h for the pushing and pick-and-place tasks and 6h for sliding (because physics simulation was slower for this task) using 8 CPU cores. # B Deployment on the physical robot We have trained a convolutional neural network (CNN) which predicts the box position given the raw image from the Fetch head camera. The CNN was trained using only images coming from the MuJoCo renderer. Despite the fact that training images were not photorealistic, the trained network performs well on real-world data thanks to a high degree of randomization of textures, lighting and other visual parameters in training. This approach, called domain randomization, is described in more detail in Tobin et al. (2017). At the beginning of each episode we initialize a simulated environment using the box position predicted by the CNN and the robot state coming from the physical robot. From this point we run the policy in the simulator. After each timestep we send the simulated robot joint angles to the real one, which is position-controlled and uses the simulated data as targets.
1707.01495#50
Hindsight Experience Replay
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
http://arxiv.org/pdf/1707.01495
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba
cs.LG, cs.AI, cs.NE, cs.RO
null
null
cs.LG
20170705
20180223
[ { "id": "1511.05952" }, { "id": "1611.04717" }, { "id": "1703.06907" }, { "id": "1509.02971" } ]
1707.01067
1
In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frame-per-second (FPS) per core on a laptop. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on
1707.01067#1
ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like Arcade Learning Environment. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU and Batch Normalization coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than $70\%$ of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies. ELF, along with its RL platform, is open-sourced at https://github.com/facebookresearch/ELF.
http://arxiv.org/pdf/1707.01067
Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick
cs.AI
NIPS 2017 oral
null
cs.AI
20170704
20171110
[ { "id": "1605.02097" }, { "id": "1511.06410" }, { "id": "1609.05521" }, { "id": "1602.01783" } ]
1707.01083
1
# Megvii Inc (Face++) {zhangxiangyu,zxy,linmengxiao,sunjian}@megvii.com # Abstract We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ∼13× actual speedup over AlexNet while maintaining comparable accuracy.
1707.01083#1
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13x actual speedup over AlexNet while maintaining comparable accuracy.
http://arxiv.org/pdf/1707.01083
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
cs.CV
null
null
cs.CV
20170704
20171207
[ { "id": "1602.07360" }, { "id": "1611.06473" }, { "id": "1502.03167" }, { "id": "1503.02531" }, { "id": "1602.07261" }, { "id": "1608.04337" }, { "id": "1606.06160" }, { "id": "1702.03044" }, { "id": "1608.08021" }, { "id": "1710.05941" }, { "id": "1707.07012" }, { "id": "1611.05431" }, { "id": "1603.04467" }, { "id": "1704.04861" }, { "id": "1610.02357" }, { "id": "1709.01507" }, { "id": "1510.00149" } ]
1707.01083
2
tions to reduce computation complexity of 1 × 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flowing across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like [30, 9, 40], for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks. We evaluate our models on the challenging ImageNet classification [4, 29] and MS COCO object detection [23] tasks. A series of controlled experiments shows the effectiveness of our design principles and the better performance over other structures. Compared with the state-of-the-art architecture MobileNet [12], ShuffleNet achieves superior performance by a significant margin, e.g. absolute 7.8% lower ImageNet top-1 error at level of 40 MFLOPs. # 1. Introduction
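The channel shuffle operation described above can be written in a few lines; here is a minimal NumPy sketch for an (N, C, H, W) feature map (a deep-learning framework version would follow the same reshape-transpose-reshape pattern).

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave the channels produced by `groups` group convolutions so that the
    next group convolution receives inputs originating from every group.
    x: array of shape (N, C, H, W) with C divisible by `groups`."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the number of groups"
    x = x.reshape(n, groups, c // groups, h, w)  # split channels into groups
    x = x.transpose(0, 2, 1, 3, 4)               # swap the group and per-group axes
    return x.reshape(n, c, h, w)                 # flatten back: channels are interleaved
```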
1707.01083#2
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13x actual speedup over AlexNet while maintaining comparable accuracy.
http://arxiv.org/pdf/1707.01083
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
cs.CV
null
null
cs.CV
20170704
20171207
[ { "id": "1602.07360" }, { "id": "1611.06473" }, { "id": "1502.03167" }, { "id": "1503.02531" }, { "id": "1602.07261" }, { "id": "1608.04337" }, { "id": "1606.06160" }, { "id": "1702.03044" }, { "id": "1608.08021" }, { "id": "1710.05941" }, { "id": "1707.07012" }, { "id": "1611.05431" }, { "id": "1603.04467" }, { "id": "1704.04861" }, { "id": "1610.02357" }, { "id": "1709.01507" }, { "id": "1510.00149" } ]
1707.01067
3
# Introduction Game environments are commonly used for research in Reinforcement Learning (RL), i.e. how to train intelligent agents to behave properly from sparse rewards [4, 6, 5, 14, 29]. Compared to the real world, game environments offer an infinite amount of highly controllable, fully reproducible, and automatically labeled data. Ideally, a game environment for fundamental RL research is:
• Extensive: The environment should capture many diverse aspects of the real world, such as rich dynamics, partial information, delayed/long-term rewards, concurrent actions with different granularity, etc. Having an extensive set of features and properties increases the potential for trained agents to generalize to diverse real-world scenarios.
• Lightweight: A platform should be fast and capable of generating samples hundreds or thousands of times faster than real-time with minimal computational resources (e.g., a single machine). Lightweight and efficient platforms help accelerate academic research of RL algorithms, particularly for methods which are heavily data-dependent.
• Flexible: A platform that is easily customizable at different levels, including rich choices of environment content, easy manipulation of game parameters, accessibility of internal variables, and flexibility of training architectures. All are important for fast exploration of different algorithms. For example, changing environment parameters [35], as well as using internal data [15, 19] have been shown to substantially accelerate training.
1707.01067#3
ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like Arcade Learning Environment. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU and Batch Normalization coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than $70\%$ of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies. ELF, along with its RL platform, is open-sourced at https://github.com/facebookresearch/ELF.
http://arxiv.org/pdf/1707.01067
Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick
cs.AI
NIPS 2017 oral
null
cs.AI
20170704
20171110
[ { "id": "1605.02097" }, { "id": "1511.06410" }, { "id": "1609.05521" }, { "id": "1602.01783" } ]
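The ELF abstract in this record mentions that the winning Mini-RTS agent uses a network with Leaky ReLU and Batch Normalization, but gives no further architectural detail. The snippet below is only an illustrative PyTorch sketch of such a convolutional unit applied to a batch of observations; the channel counts, map size, kernel size, and negative slope are assumptions for the example, not values from the paper or from ELF's actual code.

```python
# Illustrative Conv -> BatchNorm -> LeakyReLU unit of the kind the ELF
# abstract refers to; all hyperparameters here are placeholders.
import torch
import torch.nn as nn


def conv_bn_lrelu(in_ch: int, out_ch: int, negative_slope: float = 0.1) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(negative_slope, inplace=True),
    )


# A batch of hypothetical game-state feature planes (20 channels on a 20x20
# map), e.g. gathered from many concurrent game instances, processed by two units.
net = nn.Sequential(conv_bn_lrelu(20, 64), conv_bn_lrelu(64, 64))
obs = torch.randn(128, 20, 20, 20)
features = net(obs)
print(features.shape)  # torch.Size([128, 64, 20, 20])
```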
1707.01083
3
# 1. Introduction Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks [21, 9, 33, 5, 28, 24]. The most accurate CNNs usually have hundreds of layers and thousands of channels [9, 34, 32, 40], thus requiring computation at billions of FLOPs. This report examines the opposite extreme: pursuing the best accuracy within very limited computational budgets of tens or hundreds of MFLOPs, focusing on common mobile platforms such as drones, robots, and smartphones. Note that many existing works [16, 22, 43, 42, 38, 27] focus on pruning, compressing, or low-bit representation of a "basic" network architecture. Here we aim to explore a highly efficient basic architecture specially designed for our desired computing ranges (see the FLOPs-counting sketch after this record).
1707.01083#3
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ~13x actual speedup over AlexNet while maintaining comparable accuracy.
http://arxiv.org/pdf/1707.01083
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun
cs.CV
null
null
cs.CV
20170704
20171207
[ { "id": "1602.07360" }, { "id": "1611.06473" }, { "id": "1502.03167" }, { "id": "1503.02531" }, { "id": "1602.07261" }, { "id": "1608.04337" }, { "id": "1606.06160" }, { "id": "1702.03044" }, { "id": "1608.08021" }, { "id": "1710.05941" }, { "id": "1707.07012" }, { "id": "1611.05431" }, { "id": "1603.04467" }, { "id": "1704.04861" }, { "id": "1610.02357" }, { "id": "1709.01507" }, { "id": "1510.00149" } ]
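The chunk above (1707.01083#3) frames the design problem in terms of MFLOPs budgets, and the earlier chunk describes grouping the 1 × 1 convolutions to cut their cost. One way to see the effect is to count multiply-adds directly: a convolution with g groups costs a factor of g less than its dense counterpart. The helper below is a generic back-of-the-envelope counter, not code from the paper; it assumes the common convention of reporting "FLOPs" as multiply-adds, and the layer sizes in the example are made up.

```python
# Back-of-the-envelope multiply-add counter for a (possibly grouped) convolution.
# Many mobile-CNN papers report "FLOPs" as multiply-adds, which is assumed here.

def conv_macs(h_out: int, w_out: int, c_in: int, c_out: int,
              kernel: int = 1, groups: int = 1) -> int:
    """Multiply-adds of a kernel x kernel convolution producing an h_out x w_out map."""
    assert c_in % groups == 0 and c_out % groups == 0
    return h_out * w_out * c_out * (c_in // groups) * kernel * kernel


# Illustrative 1x1 convolution on a 28x28 feature map with 240 in/out channels:
dense = conv_macs(28, 28, 240, 240, kernel=1, groups=1)
grouped = conv_macs(28, 28, 240, 240, kernel=1, groups=3)
print(f"dense 1x1:      {dense / 1e6:.1f} M multiply-adds")   # ~45.2 M
print(f"grouped (g=3):  {grouped / 1e6:.1f} M multiply-adds")  # ~15.1 M
```

Dividing the 1 × 1 cost by the group count is what frees budget for wider feature maps at a fixed MFLOPs target, which is the trade-off the ShuffleNet chunks describe.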