id: 1509.03005
title: Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
authors: David Balduzzi, Muhammad Ghifary
categories: cs.LG, cs.AI, cs.NE, stat.ML (primary: cs.LG)
comment: 27 pages
source: http://arxiv.org/pdf/1509.03005
published: 20150910 (updated: 20150910)
summary: This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.

Our main result is that the Deviator's value gradient is compatible with the policy gradient of each unit in the Actor-network, considered as an actor in its own right:

Theorem 6 (deep compatible function approximation) Suppose that all units are rectilinear or linear. Then for each Actor-unit in the Actor-network there exists a reparametrization of the value-gradient approximator, $G^W$, that satisfies the compatibility conditions in Theorem 2.

The Actor-network is thus a collection of interdependent agents that individually follow the correct policy gradients. The experiments below show that they also collectively converge on useful behaviors.

Overview of the proof. The next few subsections prove Theorem 6. We provide a brief overview before diving into the details. Guarantees for temporal difference learning and policy gradients are typically based on the assumption that the value-function approximation is a linear function of the learned parameters. However, we are interested in the case where Actor, Critic and Deviator are all neural networks, and are therefore highly nonlinear functions of their parameters. The goal is thus to relate the representations learned by neural networks to the prior work on linear function approximations.
To do so, we build on the following observation, implicit in (Srivastava et al., 2014):

Remark 3 (active submodels) A neural network of n linear and rectilinear units can be considered as a set of $2^n$ submodels, corresponding to different subsets of units. The active submodel at time t consists in the active units (that is, the linear units and the rectifiers that do not output 0). The active submodel has two important properties:

• it is a linear function from inputs to outputs, since rectifiers are linear when active, and

• at each time step, learning only occurs over the active submodel, since only active units update their weights.

The feedforward sweep of a rectifier network can thus be disentangled into two steps (Balduzzi, 2015). The first step, which is highly nonlinear, applies a gating operation that selects the active submodel by rendering various units inactive. The second step computes the output of the neural network via matrix multiplication. It is important to emphasize that although the active submodel is a linear function from inputs to outputs, it is not a linear function of the weights.
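The gating-then-linear decomposition can be checked numerically. The sketch below is our own illustration (not code from the paper): it builds a small rectifier network in numpy, reads off the gating pattern for a given input, and verifies that the masked weight matrices, multiplied out, reproduce the ordinary forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-hidden-layer rectifier network with a linear output layer.
W1 = rng.normal(size=(5, 8))   # input (5)  -> hidden (8)
W2 = rng.normal(size=(8, 6))   # hidden (8) -> hidden (6)
W3 = rng.normal(size=(6, 3))   # hidden (6) -> output (3), linear

def forward(x):
    h1 = np.maximum(0.0, x @ W1)
    h2 = np.maximum(0.0, h1 @ W2)
    return h2 @ W3

x = rng.normal(size=5)
y = forward(x)

# Step 1 (nonlinear): the gating operation selects the active submodel.
g1 = (x @ W1 > 0).astype(float)                          # active units, layer 1
g2 = (np.maximum(0.0, x @ W1) @ W2 > 0).astype(float)    # active units, layer 2

# Step 2 (linear): the active submodel is a single matrix acting on the input.
A = (W1 * g1) @ (W2 * g2) @ W3       # mask the columns belonging to inactive units
assert np.allclose(y, x @ A)         # same output as the full forward pass
```

Note that the matrix A depends on the input through the gates, which is exactly why the active submodel is linear in the inputs but not in the weights.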
The strategy of the proof is to decompose the Actor-network into an interacting collection of agents, referred to as Actor-units. That is, we model each unit in the Actor-network as an Actor in its own right. On each time step that an Actor-unit is active, it interacts with the Deviator-submodel corresponding to the current active submodel of the Deviator-network. The proof shows that each Actor-unit has compatible function approximation.

# 5.1 Error backpropagation on rectilinear neural networks

First, we recall some basic facts about backpropagation in the case of rectilinear units. Recent work has shown that replacing sigmoid functions with rectifiers $S(x) = \max(0, x)$ improves the performance of neural networks (Nair and Hinton, 2010; Glorot et al., 2011; Zeiler et al., 2013; Dahl et al., 2013).

Let us establish some notation. The output of a rectifier with weight vector $w$ is $S_w(x) := S(\langle w, x \rangle) := \max(0, \langle w, x \rangle)$.
The rectifier is active if $\langle w, x \rangle > 0$. We use rectifiers because they perform well in practice and have the nice property that units are linear when they are active. The rectifier subgradient is the indicator function

$\mathbb{1}(x) := \nabla S(x) = \begin{cases} 1 & x > 0 \\ 0 & \text{else.} \end{cases}$

Consider a neural network of $n$ units, each equipped with a weight vector $w^j \in \mathcal{H}_j \subset \mathbb{R}^{d_j}$. Hidden units are rectifiers; output units are linear. It is convenient to combine all the weight vectors into a single object; let $W \in \mathcal{H} = \prod_{j=1}^{n} \mathcal{H}_j \subset \mathbb{R}^N$ where $N = \sum_{j=1}^{n} d_j$. The network is a function $F^W : \mathbb{R}^m \to \mathbb{R}^d : x_{in} \mapsto F^W(x_{in}) =: x_{out}$. The network has error function $\mathcal{E}(x_{out}, y)$ with gradient $g = \nabla_{x_{out}} \mathcal{E}$. Let $x^j$ denote the output of unit $j$ and $\phi^j(x_{in}) = (x^i)_{\{i : i \to j\}}$ denote its input, so that $x^j = S(\langle w^j, \phi^j(x_{in}) \rangle)$. Note that $\phi^j$ depends on $W$ (specifically, the weights of lower units), but this is suppressed from the notation.
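To make the notation concrete, here is a minimal per-unit forward sweep in numpy for a hypothetical three-unit network (layout, weights and input are invented for illustration): each unit computes $x^j = S(\langle w^j, \phi^j(x_{in}) \rangle)$, and the indicator records which rectifiers are active.

```python
import numpy as np

rng = np.random.default_rng(1)

# Units 1 and 2 are rectifiers reading the 4-dimensional input;
# unit 3 is a linear output unit reading (x^1, x^2).
x_in = rng.normal(size=4)
w = {1: rng.normal(size=4),      # w^1 acts on the raw input
     2: rng.normal(size=4),      # w^2 acts on the raw input
     3: rng.normal(size=2)}      # w^3 acts on (x^1, x^2)

def S(a):                        # rectifier S(a) = max(0, a)
    return max(0.0, a)

phi = {1: x_in, 2: x_in}         # phi^j(x_in): input vector seen by unit j
x = {}
x[1] = S(w[1] @ phi[1])
x[2] = S(w[2] @ phi[2])
phi[3] = np.array([x[1], x[2]])  # phi^3 depends on W through the lower units
x[3] = w[3] @ phi[3]             # output unit is linear

active = {j: float(x[j] > 0) for j in (1, 2)}   # the indicator 1(.) per rectifier
print(x[3], active)
```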
Definition 3 (influence) The influence of unit $j$ on unit $k$ at time $t$ is $\pi^{j,k}_t = \frac{\partial x^k}{\partial x^j}$ (Balduzzi et al., 2015). The influence of unit $j$ on the output layer is the vector $\pi^j_t = \left(\pi^{j,k}_t\right)_{k \in \text{out}}$.

The following lemma summarizes an analysis of the feedforward and feedback sweep of neural nets.

Lemma 7 (structure of neural network gradients) The following properties hold

a. Influence. A path is active at time t if all units on the path are firing. The influence of $j$ on $k$ is the sum of products of weights over all active paths from $j$ to $k$:

$\pi^{j,k} = \sum_{\{\alpha \mid j \to \alpha\}} w^{j\alpha}\, \mathbb{1}^{\alpha} \sum_{\{\beta \mid \alpha \to \beta\}} w^{\alpha\beta}\, \mathbb{1}^{\beta} \cdots \sum_{\{\omega \mid \omega \to k\}} w^{\omega k}\, \mathbb{1}^{k}$

where $\alpha, \beta, \ldots, \omega$ refer to units along the path from $j$ to $k$.
b. Output decomposition. The output of a neural network decomposes, relative to the output of unit $j$, as

$F^W(x_{in}) = \pi^j \cdot x^j + \pi^{-j} \cdot x_{in},$

where $\pi^{-j}$ is the $(m \times d)$-matrix whose $(ik)^{th}$ entry is the sum over all active paths from input unit $i$ to output unit $k$ that do not intersect unit $j$.

c. Output gradient. Fix an input $x_{in} \in \mathbb{R}^m$ and consider the network as a function from parameters to outputs, $F^{\bullet}(x_{in}) : \mathcal{H} \to \mathbb{R}^d : W \mapsto F^W(x_{in})$, whose gradient is an $(N \times d)$-matrix. The $(ij)^{th}$ entry of the gradient is the input to the unit times its influence:

$\left(\nabla_W F^W(x_{in})\right)_{ij} = \begin{cases} \phi_{ij}(x_{in}) \cdot \pi^j & \text{if unit } j \text{ is active} \\ 0 & \text{else.} \end{cases}$

d. Backpropagated error. Fix $x_{in} \in \mathbb{R}^m$ and consider the function $\mathcal{E}(W) = \mathcal{E}(F^W(x_{in}), y) : \mathcal{H} \to \mathbb{R} : W \mapsto \mathcal{E}(F^W(x_{in}), y)$. Let $g = \nabla_{x_{out}} \mathcal{E}(x_{out}, y)$. The gradient of the error function is
$\left(\nabla_W \mathcal{E}\right)_j = \left\langle g, \left(\nabla_W F^W(x_{in})\right)_j \right\rangle = \delta^j \cdot \phi^j(x_{in}),$

where the backpropagated error signal $\delta^j$ received by unit $j$ decomposes as $\delta^j = \langle g, \pi^j \rangle$.

Proof Direct computation.

The lemma holds generically for networks of rectifier and linear units. We apply it to Actor, Critic and Deviator networks below.

# 5.2 A minimal DAC model

This subsection proves condition C1 of compatible function approximation for a minimal, linear Deviator-Actor-Critic model. The next subsection shows how the minimal model arises at the level of Actor-units.

Definition 4 (minimal model) The minimal model of a Deviator-Actor-Critic consists in an Actor with linear policy $\mu_\theta(s) = \langle \theta, \phi(s) \rangle + \epsilon$, where $\theta$ is an m-vector and $\epsilon$ is a noisy scalar. The Critic and Deviator together output:

$Q^{V,w}(s, \mu_\theta(s), \epsilon) = \underbrace{Q^V(s)}_{\text{Critic}} + \underbrace{G^w(\mu_\theta(s), \epsilon)}_{\text{Deviator}} = \langle \phi(s), v \rangle + \mu_\theta(s) \cdot \langle \epsilon, w \rangle,$
where $v$ is an m-vector, $w$ is a scalar, and $\langle \epsilon, w \rangle$ is simply scalar multiplication.

The Critic in the minimal model is standard. However, the Deviator has been reduced to almost nothing: it learns a single scalar parameter, $w$, that is used to train the Actor. The minimal model is thus too simple to be much use as a standalone algorithm.

Lemma 8 (compatible function approximation for the minimal model) There exists a reparametrization of the gradient estimate of the minimal model, $\tilde{G}^{\tilde{w}}(s, \epsilon) = G^w(\mu_\theta(s), \epsilon)$, such that compatibility condition C1 in Theorem 2 is satisfied:

$\nabla_\epsilon \tilde{G}^{\tilde{w}}(s, \epsilon) = \langle \nabla_\theta \mu_\theta(s), \tilde{w} \rangle.$

Proof Let $\tilde{w} := w \cdot \theta$ and construct $\tilde{G}^{\tilde{w}}(s, \epsilon) := \langle \tilde{w} \cdot \phi(s), \epsilon \rangle$. Clearly,

$\tilde{G}^{\tilde{w}}(s, \epsilon) = \langle w \cdot \theta^\intercal \phi(s), \epsilon \rangle = \mu_\theta(s) \cdot \langle w, \epsilon \rangle = G^w(\mu_\theta(s), \epsilon).$

Observe that $\nabla_\epsilon \tilde{G}^{\tilde{w}}(s, \epsilon) = w \cdot \mu_\theta(s)$ and that, similarly,

$\langle \nabla_\theta \mu_\theta(s), \tilde{w} \rangle = w \cdot \mu_\theta(s),$

as required.
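The reparametrization in Lemma 8 is easy to check numerically. The following sketch is our own, with arbitrary numbers, and treats $\mu_\theta(s)$ as its deterministic part $\langle \theta, \phi(s) \rangle$, as the proof does implicitly; it verifies that $\tilde{G}$ agrees with $G$ and that the two sides of condition C1 coincide.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
theta = rng.normal(size=m)       # Actor parameters (an m-vector)
w = 0.7                          # Deviator parameter (a single scalar)
phi_s = rng.normal(size=m)       # features phi(s) of some state s
eps = 0.3                        # scalar exploration noise

mu = theta @ phi_s               # deterministic part of mu_theta(s)

# Deviator output and its reparametrization w_tilde = w * theta.
G = mu * (w * eps)               # G^w(mu_theta(s), eps)
w_tilde = w * theta
G_tilde = (w_tilde @ phi_s) * eps
assert np.isclose(G, G_tilde)

# Condition C1: d/d(eps) G_tilde  ==  <grad_theta mu_theta(s), w_tilde>.
lhs = w_tilde @ phi_s            # derivative of G_tilde with respect to eps
rhs = phi_s @ w_tilde            # grad_theta mu_theta(s) = phi(s)
assert np.isclose(lhs, rhs)
print(G, lhs)
```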
# 5.3 Proof of Theorem 6

The proof proceeds by showing that the compatibility conditions in Theorem 2 hold for each Actor-unit. The key step is to relate the Actor-units to the minimal model introduced above.

Lemma 9 (reduction to minimal model) Actor-units in a DAC neural network are equivalent to minimal model Actors.

Proof Let $\pi^j_t$ denote the influence of Actor-unit $j$ at time $t$. When unit $j$ is active, Lemma 7ab implies we can write $\mu_{\Theta_t}(s_t) = \pi^j_t \cdot x^j_t + \mu_{\Theta_t^{-j}}(s_t)$, where $\mu_{\Theta_t^{-j}}$ collects the active paths through the Actor-network that do not intersect unit $j$. Following Remark 3, the active subnetwork of the Deviator-network at time $t$ is a linear transform which, by abuse of notation, we denote by $W_t$. Combine the last two points to obtain

$G^W(s_t) = W_t \cdot \left( \pi^j_t \cdot \langle \theta^j, \phi^j(s_t) \rangle + \mu_{\Theta_t^{-j}}(s_t) \right) = (W_t \cdot \pi^j_t) \cdot \langle \theta^j, \phi^j(s_t) \rangle + \text{terms that can be omitted}.$

Observe that $(W_t \cdot \pi^j_t)$ is a d-vector. We have therefore reduced Actor-unit $j$'s interaction with the Deviator-network to d copies of the minimal model.
Theorem 6 follows from combining the above Lemmas.

Proof Compatibility condition C1 follows from Lemmas 8 and 9. Compatibility condition C2 holds since the Critic and Deviator minimize the Bellman gradient error with respect to W and V, which also, implicitly, minimizes the Bellman gradient error with respect to the corresponding reparametrized $\tilde{w}$'s for each Actor-unit.

Theorem 6 shows that each Actor-unit satisfies the conditions for compatible function approximation and so follows the correct gradient when performing weight updates.

# 5.4 Structural credit assignment for multiagent learning

It is interesting to relate our approach to the literature on multiagent reinforcement learning (Guestrin et al., 2002; Agogino and Tumer, 2004, 2008). In particular, (HolmesParker et al., 2014) consider the structural credit assignment problem within populations of interacting agents: how should individual agents in a population be rewarded when rewards depend on their collective behavior? They propose to train agents within populations with a difference-based objective of the form

$D_j = Q(z) - Q(z_{-j}, c_j) \quad (5)$

where $Q$ is the objective function to be maximized; $z_j$ and $z_{-j}$ are the system variables that are and are not under the control of agent $j$ respectively, and $c_j$ is a fixed counterfactual action.
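As a concrete illustration of Eq. (5), the sketch below computes difference rewards for a toy collective objective (the quadratic Q and the counterfactual value are invented for this example). Because the term $Q(z_{-j}, c_j)$ does not depend on $z_j$, the finite-difference check confirms that $D_j$ and $Q$ have the same gradient with respect to agent $j$'s action, which is the property used below.

```python
import numpy as np

# Toy collective objective Q(z), where z is the joint action of three agents
# (this Q and the counterfactual c are hypothetical, for illustration only).
def Q(z):
    return -float(np.sum((z - np.array([1.0, 2.0, 3.0])) ** 2))

z = np.array([0.5, 2.5, 2.0])        # current joint action
c = 0.0                              # fixed counterfactual action c_j

for j in range(len(z)):
    z_cf = z.copy()
    z_cf[j] = c                      # replace agent j's action by the counterfactual
    D_j = Q(z) - Q(z_cf)             # difference-based objective of Eq. (5)

    # dD_j/dz_j equals dQ/dz_j, since the counterfactual term ignores z_j.
    h = 1e-5
    z_h = z.copy(); z_h[j] += h
    z_cf_h = z_h.copy(); z_cf_h[j] = c
    dQ = (Q(z_h) - Q(z)) / h
    dD = ((Q(z_h) - Q(z_cf_h)) - (Q(z) - Q(z_cf))) / h
    print(j, D_j, dQ, dD)
```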
In our setting, the gradient used by Actor-unit j to update its weights can be described explicitly:

Lemma 10 (local policy gradients) Actor-unit j follows the policy gradient

$\nabla J[\mu_{\theta^j}] = \nabla \mu_{\theta^j}(s) \cdot \langle \pi^j, G^W \rangle,$

where $\langle \pi^j, G^W(s) \rangle \approx D_{\pi^j} Q(s)$ is the Deviator's estimate of the directional derivative of the value function in the direction of Actor-unit j's influence.

Proof Follows from Lemma 7b.

Notice that $\nabla_{z_j} Q = \nabla_{z_j} D_j$ in Eq. (5). It follows that training the Actor-network via GProp causes the Actor-units to optimize the difference-based objective without explicitly computing the difference. Although the topic is beyond the scope of the current paper, it is worth exploring how suitably adapted variants of backpropagation can be applied to reinforcement learning problems in the multiagent setting.

# 5.5 Comparison with related work

Comparison with COPDAC-Q. Extending the standard value function approximation in Example 1 to the setting where the Actor is a neural network yields the following representation, which is used in (Silver et al., 2014) when applying COPDAC-Q to the octopus arm task:
Example 2 (extension of standard value approximation to neural networks) Let $\mu_\Theta : S \to A$ and $Q^V : S \to \mathbb{R}$ be an Actor and Critic neural network respectively. Suppose the Actor-network has N parameters (i.e. the total number of entries in $\Theta$). It follows that the Jacobian $\nabla_\Theta \mu_\Theta(s)$ is an $(N \times d)$-matrix. The value function approximation is then

$Q^{V,w}(s, a) = \underbrace{(a - \mu_\Theta(s))^\intercal \cdot \nabla_\Theta \mu_\Theta(s)^\intercal \cdot w}_{\text{advantage function}} + \underbrace{Q^V(s)}_{\text{Critic}},$

where $w$ is an N-vector.

Weight updates under COPDAC-Q, with the function approximation above, are therefore as described in Algorithm 2.
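The sketch below illustrates the shape bookkeeping in Example 2; all quantities are random stand-ins, and in particular J plays the role of the Jacobian $\nabla_\Theta \mu_\Theta(s)$, which a real implementation would obtain by differentiating the Actor-network. Algorithm 2 below then plugs this approximation into its updates.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 20, 4                     # N actor parameters, d action dimensions

# Stand-ins for the quantities in Example 2 (shapes only).
J = rng.normal(size=(N, d))      # Jacobian grad_Theta mu_Theta(s), an (N x d)-matrix
w = rng.normal(size=N)           # advantage-function weights (an N-vector)
mu = rng.normal(size=d)          # mu_Theta(s), the Actor's action at s
V_of_s = 1.5                     # Q^V(s), the Critic's value estimate

def Q_approx(a):
    advantage = (a - mu) @ J.T @ w          # (a - mu)^T . J^T . w, a scalar
    return advantage + V_of_s

a = mu + 0.1 * rng.normal(size=d)           # a perturbed action
print(Q_approx(a), Q_approx(mu))            # advantage vanishes at a = mu_Theta(s)
```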
Algorithm 2: Compatible Deterministic Actor-Critic (COPDAC-Q).

for rounds $t = 1, 2, \ldots, T$ do
    Network gets state $s_t$, responds $a_t = \mu_{\Theta_t}(s_t) + \epsilon$ where $\epsilon \sim N(0, \sigma^2 \cdot I_d)$, gets reward $r_t$
    $\delta_t \leftarrow r_t + \gamma Q^{V_t}(s_{t+1}) - Q^{V_t}(s_t) - \langle \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \epsilon, w_t \rangle$
    $\Theta_{t+1} \leftarrow \Theta_t + \eta_t^\Theta \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \nabla_\Theta \mu_{\Theta_t}(s_t)^\intercal \cdot w_t$
    $V_{t+1} \leftarrow V_t + \eta_t^V \cdot \delta_t \cdot \nabla_V Q^{V_t}(s_t)$
    $w_{t+1} \leftarrow w_t + \eta_t^w \cdot \delta_t \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \epsilon$

Let us compare GProp with COPDAC-Q, considering the three updates in turn:

• Actor updates. Under GProp, the Actor backpropagates the value-gradient estimate. In contrast, under COPDAC-Q the Actor performs a complicated update that combines the policy gradient $\nabla_\Theta \mu(s)$ with the advantage function's weights, and differs substantively from backprop.
• Deviator / advantage-function updates. Under GProp, the Deviator backpropagates the perturbed TDG-error. In contrast, COPDAC-Q uses the gradient of the Actor to update the weight vector w of the advantage function.

By Lemma 7d, backprop takes the form $g^\intercal \cdot \nabla_\Theta \mu_\Theta(s)$ where $g$ is a d-vector. In contrast, the advantage function requires computing $\nabla_\Theta \mu_\Theta(s)^\intercal \cdot w$, where $w$ is an N-vector. Although the two formulae appear superficially similar, they carry very different computational costs. The first consequence is that the parameters of $w$ must exactly line up with those of the policy. The second consequence is that, by Lemma 7c, the advantage function requires access to

$\left(\nabla_\Theta \mu_\Theta(s)\right)_{ij} = \begin{cases} \phi_{ij}(s) \cdot \pi^j & \text{if unit } j \text{ is active} \\ 0 & \text{else,} \end{cases}$

where $\phi_{ij}(s)$ is the input from unit $i$ to unit $j$. Thus, the advantage function requires access to the input $\phi^j(s)$ and the influence $\pi^j$ of every unit in the Actor-network.
• Critic updates. The Critic updates for the two algorithms are essentially identical, with the TD-error replaced with the TDG-error.

In short, the approximation in Example 2 that is used by COPDAC-Q is thus not well-adapted to deep learning. The main reason is that learning the advantage function requires coupling the vector w with the parameters Θ of the Actor.

Comparison with computing the gradient of the value-function approximation. Perhaps the most natural approach to estimating the gradient is to simply estimate the value function, and then use its gradient as an estimate of the derivative (Jordan and Jacobs, 1990; Prokhorov and Wunsch, 1997; Wang and Si, 2001; Hafner and Riedmiller, 2011; Fairbank and Alonso, 2012; Fairbank et al., 2013). The main problem with this approach is that, to date, it has not been shown that the resulting updates of the Critic and the Actor are compatible.
There are also no guarantees that the gradient of the Critic will be a good approximation to the gradient of the value function, although it is intuitively plausible. The problem becomes particularly severe when the value function is estimated via a neural network that uses activation functions that are not smooth, such as rectifiers. Rectifiers are becoming increasingly popular due to their superior empirical performance (Nair and Hinton, 2010; Glorot et al., 2011; Zeiler et al., 2013; Dahl et al., 2013).

# 6. Experiments

We evaluate GProp on three tasks: two highly nonlinear contextual bandit tasks constructed from benchmark datasets for nonparametric regression, and the octopus arm. We do not evaluate GProp on other standard reinforcement learning benchmarks such as Mountain Car, Pendulum or Puddle World, since these can already be handled by linear actor-critic algorithms. The contribution of GProp is the ability to learn representations suited to nonlinear problems.

Cloning and replay. Temporal difference learning can be unstable when run over a neural network. A recent innovation introduced in (Mnih et al., 2015) that stabilizes TD-learning is to clone a separate network $Q^{\tilde{V}}$ to compute the targets $r_t + \gamma Q^{\tilde{V}}(\tilde{s}_{t+1})$. The parameters of the cloned network are updated periodically.
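A minimal sketch of the two stabilizers just described, cloning and experience replay, under our own simplifying assumptions (a linear stand-in for $Q^V$ and dummy transitions); it is not the paper's Theano implementation, but shows where the cloned parameters enter the target $r_t + \gamma Q^{\tilde{V}}(\tilde{s}_{t+1})$ and how stored transitions are replayed.

```python
import random
from collections import deque
import numpy as np

class ReplayBuffer:
    """Stores transitions so that values and gradients can be re-fit offline."""
    def __init__(self, capacity=10000):
        self.data = deque(maxlen=capacity)

    def add(self, s, a, r, s_next):
        self.data.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(list(self.data), min(batch_size, len(self.data)))

def value_of(s, params):
    # Linear stand-in for the Critic Q^V(s); a real Critic is a neural network.
    return float(params["V"] @ s)

def clone(params):
    # Periodic copy of the live Critic parameters, used only to compute targets.
    return {k: v.copy() for k, v in params.items()}

params = {"V": np.zeros(4)}
target_params = clone(params)
buffer = ReplayBuffer()
gamma, CLONE_EVERY = 0.95, 500

for step in range(1, 1001):
    s, a, r, s_next = np.ones(4), 0.0, 1.0, np.ones(4)           # dummy transition
    buffer.add(s, a, r, s_next)
    for (bs, ba, br, bs_next) in buffer.sample(32):
        target = br + gamma * value_of(bs_next, target_params)   # clone computes targets
        # ... TD / TDG updates of params["V"] against `target` would go here ...
    if step % CLONE_EVERY == 0:
        target_params = clone(params)                            # refresh the clone
```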
We implement a similar modification of the TDG-error in Algorithm 1. We also use experience replay (Mnih et al., 2015). GProp is well-suited to replay, since the Critic and Deviator can learn values and gradients over the full range of previously observed state-action pairs offline. Cloning and replay were also applied to COPDAC-Q. Both algorithms were implemented in Theano (Bergstra et al., 2010; Bastien et al., 2012).

# 6.1 Contextual Bandit Tasks

The goal of the contextual bandit tasks is to probe the ability of reinforcement learning algorithms to accurately estimate gradients. The experimental setting may thus be of independent interest.
[Figure 1: two panels, "Contextual Bandit (SARCOS)" and "Contextual Bandit (Barrett)"; y-axis: reward, x-axis: epochs.]

Figure 1: Performance on contextual bandit tasks. The rewards (negative normalized test MSE) for 10 runs are shown and averaged (thick lines). Performance variation for GProp is barely visible. Epochs refer to multiples of the dataset; algorithms are ultimately trained on the same number of random samples for both datasets.

Description. We converted two robotics datasets, SARCOS³ and Barrett WAM⁴, into contextual bandit problems via the supervised-to-contextual-bandit transform in (Dudík et al., 2014). The datasets have 44,484 and 12,000 training points respectively, both with 21 features corresponding to the positions, velocities and accelerations of seven joints. Labels are 7-dimensional vectors corresponding to the torques of the 7 joints.
In the contextual bandit task, the agent samples 21-dimensional state vectors i.i.d. from either the SARCOS or Barrett training data and executes 7-dimensional actions. The reward $r(s, a) = -\|y(s) - a\|^2$ is the negative mean-square distance from the action to the label. Note that the reward is a scalar, whereas the correct label is a 7-dimensional vector. The gradient of the reward,

$\frac{1}{2} \nabla_a r(s, a) = y(s) - a,$

is the direction from the action to the correct label. In the supervised setting, the gradient can be computed. In the bandit setting, the reward is a zeroth-order black box. The agent thus receives far less information in the bandit setting than in the fully supervised setting.

Intuitively, the negative distance r(s, a) "tells" the algorithm that the correct label lies on the surface of a sphere in the 7-dimensional action space that is centred on the most recent action. By contrast, in the supervised setting, the algorithm is given the position of the label in the action space. In the bandit setting, the algorithm must estimate the position of the label on the surface of the sphere. Equivalently, the algorithm must estimate the label's direction relative to the center of the sphere, which is given by the gradient of the value function.
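A sketch of the supervised-to-contextual-bandit transform described above, with a synthetic linear dataset standing in for SARCOS/Barrett (dimensions 21 and 7 as in the text); the learner sees only the scalar reward, while the label direction $y(s) - a$, which a supervised learner would receive, stays hidden.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for a regression dataset: 21-dim states, 7-dim labels.
X = rng.normal(size=(1000, 21))
true_map = rng.normal(size=(21, 7))
Y = X @ true_map

def bandit_round(policy):
    """One round of the supervised-to-contextual-bandit transform."""
    i = rng.integers(len(X))
    s, y = X[i], Y[i]                 # state and hidden label
    a = policy(s)                     # 7-dimensional action
    r = -np.sum((y - a) ** 2)         # scalar reward; the label itself is never revealed
    return s, a, r

# In the supervised setting the learner would also see (1/2) * grad_a r(s, a) = y - a,
# i.e. the direction towards the label; in the bandit setting only r is observed.
policy = lambda s: np.zeros(7)
print(bandit_round(policy)[2])
```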
3. Taken from www.gaussianprocess.org/gpml/data/.
4. Taken from http://www.ausy.tu-darmstadt.de/Miscellaneous/Miscellaneous.

The goal of the contextual bandit task is thus to simultaneously solve seven nonparametric regression problems when observing distances-to-labels instead of directly observing labels. The value function is relatively easy to learn in the contextual bandit setting since the task is not sequential. However, both the value function and its gradient are highly nonlinear, and it is precisely the gradient that specifies where labels lie on the spheres.

Network architectures. GProp and COPDAC-Q were implemented with actor and deviator networks of two layers (300 and 100 rectifiers) each, and a critic with hidden layers of 100 and 10 rectifiers. Updates were computed via RMSProp with momentum. The variance of the Gaussian noise was set to decrease linearly from σ² = 1.0 until reaching σ² = 0.1, at which point it remained fixed.
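For concreteness, the exploration-noise schedule can be written as below; the length of the decay phase is not stated in the text, so `decay_steps` is a hypothetical parameter of this sketch.

```python
def noise_variance(step, decay_steps, start=1.0, end=0.1):
    """Linear decay of the exploration variance from `start` to `end`, then fixed."""
    frac = min(step / float(decay_steps), 1.0)
    return start + frac * (end - start)

# Example: variance at the start, halfway through, and after the decay phase.
print([noise_variance(s, 10000) for s in (0, 5000, 10000, 20000)])
```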
Performance. Figure 1 compares the test-set performance of policies learned by GProp against COPDAC-Q. The final policies trained by GProp achieved average mean-square test errors of 0.013 and 0.014 on the seven SARCOS and Barrett benchmarks respectively.

Remarkably, GProp is competitive with fully-supervised nonparametric regression algorithms on the SARCOS and Barrett datasets, see Figure 2bc in (Nguyen-Tuong et al., 2008) and the results in (Kpotufe and Boularias, 2013; Trivedi et al., 2014). It is important to note that the results reported in those papers are for algorithms that are given the labels and that solve one regression problem at a time. To the best of our knowledge, there are no prior examples of a bandit or reinforcement learning algorithm that is competitive with fully supervised methods on regression datasets.

For comparison, we implemented Backprop on the Actor-network under full supervision. Backprop converged to 0.006 and 0.005 on SARCOS and Barrett, compared to 0.013 and 0.014 for GProp. Note that Backprop is trained on 7-dimensional labels whereas GProp receives 1-dimensional rewards.
[Figure 2: two panels, "Contextual Bandit Gradients (SARCOS)" and "Contextual Bandit Gradients (Barrett)"; curves for COPDAC-Q and GradProp; y-axis: normalized MSE of the gradient estimate, x-axis: epochs.]

Figure 2: Gradient estimates for contextual bandit tasks. The normalized MSE of the gradient estimates compared against the true gradients is shown for 10 runs of COPDAC-Q and GProp, along with their averages (thick lines).
Accuracy of gradient-estimates. The true value-gradients can be computed and compared with the algorithm's estimates on the contextual bandit task. Fig. 2 shows the performance of the two algorithms. GProp's gradient-error converges to < 0.005 on both tasks. COPDAC-Q's gradient estimate, implicit in the advantage function, converges to 0.03 (SARCOS) and 0.07 (Barrett). This confirms that GProp yields significantly better gradient estimates.

COPDAC-Q's estimates are significantly worse for Barrett compared to SARCOS, in line with the worse performance of COPDAC-Q on Barrett in Fig. 1. It is unclear why COPDAC-Q's gradient estimate gets worse on Barrett for some period of time. On the other hand, since there are no guarantees on COPDAC-Q's estimates, its erratic behavior is perhaps not surprising.
Comparison with bandit task in (Silver et al., 2014). Note that although the contextual bandit problems investigated here are lower-dimensional (with 21-dimensional state spaces and 7-dimensional action spaces) than the bandit problem in (Silver et al., 2014) (which has no state space and 10, 25 and 50-dimensional action spaces), they are nevertheless much harder. The optimal action in that bandit problem, in all cases, is the constant vector [4, ..., 4] consisting of only 4s. In contrast, SARCOS and Barrett are nontrivial benchmarks even when fully supervised.
# 6.2 Octopus Arm

The octopus arm task is a challenging environment that is high-dimensional, sequential and highly nonlinear.

Description. The objective is to learn to hit a target with a simulated octopus arm (Engel et al., 2005).⁵ Settings are taken from (Silver et al., 2014). Importantly, the action-space is not simplified using "macro-actions". The arm has C = 6 compartments attached to a rotating base. There are 50 = 8C + 2 state variables (x, y position/velocity of nodes along the upper/lower side of the arm; angular position/velocity of the base) and 20 = 3C + 2 action variables controlling the clockwise and counter-clockwise rotation of the base and three muscles per compartment. After each step, the agent receives a reward of 10 · Δdist, where Δdist is the change in distance between the arm and the target. The final reward is +50 if the agent hits the target. An episode ends when the target is hit or after 300 steps.

The arm initializes at eight positions relative to the target: ±45°, ±75°, ±105°, ±135°. See Appendix B for more details.
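A sketch of the reward and termination logic just described; the distance between arm and target is taken here as the minimum over the arm's node positions, and the sign convention (positive reward for moving closer) is our assumption.

```python
import numpy as np

MAX_STEPS = 300       # an episode ends when the target is hit or after 300 steps
HIT_BONUS = 50.0

def step_reward(nodes_before, nodes_after, target, hit):
    """Reward of 10 * (change in distance between arm and target), plus +50 on a hit.
    nodes_* are (num_nodes, 2) arrays of node positions, target is a 2-vector."""
    d_before = np.min(np.linalg.norm(nodes_before - target, axis=1))
    d_after = np.min(np.linalg.norm(nodes_after - target, axis=1))
    r = 10.0 * (d_before - d_after)          # positive when the arm moves closer
    return r + (HIT_BONUS if hit else 0.0)

def episode_done(hit, t):
    return hit or t >= MAX_STEPS
```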
Network architectures. We applied GProp to an actor-network with 100 hidden rectifiers and linear output units clipped to lie in [0, 1]; and critic and deviator networks both with two hidden layers of 100 and 40 rectifiers, and linear output units. Updates were computed via RMSProp with a step rate of $10^{-4}$, a moving-average decay and a Nesterov momentum (Hinton et al., 2012) penalty of 0.9 and 0.9 respectively, and a discount rate γ of 0.95.

5. Simulator taken from http://reinforcementlearningproject.googlecode.com/svn/trunk/FoundationsOfAI/octopus-arm-simulator/octopus/
Figure 3: Performance on octopus arm task. Ten runs of GProp and COPDAC-Q on a 6-segment octopus arm with 20 action and 50 state dimensions. Thick lines depict average values. Left panel: number of steps/episode for the arm to reach the target. Right panel: corresponding average rewards/step.
1509.03005#55
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
56
The variance of the Gaussian noise was initialized to σ2 = 1.0. An explore/exploit tradeoff was implemented as follows: when the arm failed to hit the target within 300 steps, we set σ2 ← σ2 · 1.3; otherwise σ2 ← σ2/1.3. A hard lower bound was fixed at σ2 = 0.3. We implemented COPDAC-Q on a variety of architectures; the best results are shown (see also Figure 3 in (Silver et al., 2014)). They were obtained using a similar architecture to GProp, with sigmoidal hidden units and sigmoidal output units for the actor. Linear, rectilinear and clipped-linear output units were also tried. As for GProp, cloning and experience replay were used to increase stability.
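Read this way (widen the noise after a failed episode, shrink it after a successful one, never below the floor), the schedule is a three-line rule. The function name and the success flag below are illustrative.

```python
SIGMA_SQ_MIN = 0.3          # hard lower bound on the noise variance

def update_noise_variance(sigma_sq, hit_within_300_steps):
    """Explore/exploit schedule for the Gaussian exploration noise (sketch)."""
    sigma_sq = sigma_sq / 1.3 if hit_within_300_steps else sigma_sq * 1.3
    return max(sigma_sq, SIGMA_SQ_MIN)

sigma_sq = 1.0              # initial variance
```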
1509.03005#56
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
57
Performance. Figure 3 shows the steps-to-target and average-reward-per-step on ten training runs. GProp converges rapidly and reliably (within ±170,000 steps) to a stable policy that uses less than 50 steps to hit the target on average (see supplementary video for examples of the final policy in action). GProp converges quicker, and to a better solution, than COPDAC-Q. The reader is strongly encouraged to compare our results with those reported in (Silver et al., 2014). To the best of our knowledge, GProp achieves the best performance to date on the octopus arm task.

Stability. It is clear from the variability displayed in the figures that both the policy and the gradients learned by GProp are more stable than those learned by COPDAC-Q. Note that the higher variability exhibited by GProp in the right-hand panel of Fig. 3 (rewards-per-step) is misleading. It arises because dividing by the number of steps – which is lower for GProp since it hits the target more quickly after training – inflates GProp's apparent variability.

# 7. Conclusion

Value-Gradient Backpropagation (GProp) is the first deep reinforcement learning algorithm with compatible function approximation for continuous policies. It builds on the deterministic
1509.03005#57
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
58
actor-critic, COPDAC-Q, developed in (Silver et al., 2014) with two decisive modifications. First, we incorporate an explicit estimate of the value gradient into the algorithm. Second, we construct a model that decouples the internal structure of the actor, critic, and deviator – so that all three can be trained via backpropagation.

GProp achieves state-of-the-art performance on two contextual bandit problems where it simultaneously solves seven regression problems without observing labels. Note that GProp is competitive with recent fully supervised methods that solve a single regression problem at a time. Further, GProp outperforms the prior state-of-the-art on the octopus arm task, quickly converging onto policies that rapidly and fluidly hit the target.

Acknowledgements. We thank Nicolas Heess for sharing the settings of the octopus arm experiments in (Silver et al., 2014).

# References

Adrian K Agogino and Kagan Tumer. Unifying Temporal and Structural Credit Assignment Problems. In AAMAS, 2004.

Adrian K Agogino and Kagan Tumer. Analyzing and Visualizing Multiagent Rewards in Dynamic and Stochastic Environments. Journal of Autonomous Agents and Multi-Agent Systems, 17(2):320–338, 2008.
1509.03005#58
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
59
L C Baird. Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995.

David Balduzzi. Deep Online Convex Optimization by Putting Forecaster to Sleep. arXiv:1509.01851, 2015.

David Balduzzi, Hastagiri Vanchinathan, and Joachim Buhmann. Kickback cuts Backprop's red-tape: Biologically plausible credit assignment in neural networks. In AAAI, 2015.

Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems. IEEE Trans. Systems, Man, Cyb, 13(5):834–846, 1983.

F Bastien, P Lamblin, R Pascanu, J Bergstra, I Goodfellow, A Bergeron, N Bouchard, and Y Bengio. Theano: new features and speed improvements. In NIPS Workshop: Deep Learning and Unsupervised Feature Learning, 2012.
1509.03005#59
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
60
J Bergstra, O Breuleux, F Bastien, P Lamblin, R Pascanu, G Desjardins, J Turian, D Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU Math Expression Compiler. In Proc. Python for Scientific Comp. Conf. (SciPy), 2010.

George E Dahl, Tara N Sainath, and Geoffrey Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP), 2013.

Christoph Dann, Gerhard Neumann, and Jan Peters. Policy Evaluation with Temporal Differences: A Survey and Comparison. JMLR, 15:809–883, 2014.

Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A Survey on Policy Search for Robotics. Foundations and Trends in Machine Learning, 2(1-2):1–142, 2011.

Miroslav Dudík, Dumitru Erhan, John Langford, and Lihong Li. Doubly Robust Policy Evaluation and Optimization. Statistical Science, 29(4):485–511, 2014.
1509.03005#60
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
61
Y Engel, P Szabó, and D Volkinshtein. Learning to control an octopus arm with Gaussian process temporal difference methods. In NIPS, 2005.

Michael Fairbank and Eduardo Alonso. Value-Gradient Learning. In IEEE World Conference on Computational Intelligence (WCCI), 2012.

Michael Fairbank, Eduardo Alonso, and Daniel V Prokhorov. An Equivalence Between Adaptive Dynamic Programming With a Critic and Backpropagation Through Time. IEEE Trans. Neur. Net., 24(12):2088–2100, 2013.

Abraham Flaxman, Adam Kalai, and H Brendan McMahan. Online convex optimization in the bandit setting: Gradient descent without a gradient. In SODA, 2005.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep Sparse Rectifier Neural Networks. In Proc. 14th Int Conference on Artificial Intelligence and Statistics (AISTATS), 2011.

Carlos Guestrin, Michail Lagoudakis, and Ronald Parr. Coordinated Reinforcement Learning. In ICML, 2002.

Roland Hafner and Martin Riedmiller. Reinforcement learning in feedback control: Challenges and benchmarks from technical process control. Machine Learning, 84:137–169, 2011.
1509.03005#61
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
62
G Hinton, Nitish Srivastava, and Kevin Swersky. Lecture 6a: Overview of minibatch gradient descent. 2012.

Chris HolmesParker, Adrian K Agogino, and Kagan Tumer. Combining Reward Shaping and Hierarchies for Scaling to Large Multiagent Systems. The Knowledge Engineering Review, 2014.

Michael I Jordan and R A Jacobs. Learning to control an unstable system with forward modeling. In NIPS, 1990.

Sham Kakade. A natural policy gradient. In NIPS, 2001.

Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In NIPS, 2000.

Samory Kpotufe and Abdeslam Boularias. Gradient Weights help Nonparametric Regressors. In Advances in Neural Information Processing Systems (NIPS), 2013.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-End Training of Deep Visuomotor Policies. arXiv:1504.00702, 2015.
1509.03005#62
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
63
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Vinod Nair and Geoffrey Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In ICML, 2010.

A S Nemirovski and D B Yudin. Problem complexity and method efficiency in optimization. Wiley-Interscience, 1983.

Duy Nguyen-Tuong, Jan Peters, and Matthias Seeger. Local Gaussian Process Regression for Real Time Online Model Learning. In NIPS, 2008.

Jan Peters and Stefan Schaal. Policy Gradient Methods for Robotics. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2006.
1509.03005#63
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
64
Daniel V Prokhorov and Donald C Wunsch. Adaptive Critic Designs. IEEE Trans. Neur. Net., 8(5):997–1007, 1997.

Maxim Raginsky and Alexander Rakhlin. Information-Based Complexity, Feedback and Dynamics in Convex Programming. IEEE Trans. Inf. Theory, 57(10):7036–7056, 2011.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic Policy Gradient Algorithms. In ICML, 2014.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 15:1929–1958, 2014.

R S Sutton and A G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

Richard Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1999.
1509.03005#64
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
65
Richard Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast Gradient-Descent Methods for Temporal-Difference Learning with Linear Function Approximation. In ICML, 2009a.

Richard Sutton, Csaba Szepesvári, and Hamid Reza Maei. A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation. In Advances in Neural Information Processing Systems (NIPS), 2009b.

Shubhendu Trivedi, Jialei Wang, Samory Kpotufe, and Gregory Shakhnarovich. A Consistent Estimator of the Expected Gradient Outerproduct. In UAI, 2014.

John Tsitsiklis and Benjamin Van Roy. An Analysis of Temporal-Difference Learning with Function Approximation. IEEE Trans. Aut. Control, 42(5):674–690, 1997.
1509.03005#65
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
66
Niklas Wahlström, Thomas B. Schön, and Marc Peter Deisenroth. From Pixels to Torques: Policy Learning with Deep Dynamical Models. arXiv:1502.02251, 2015.

Y Wang and J Si. On-line learning control by association and reinforcement. IEEE Trans. Neur. Net., 12(2):264–276, 2001.

Ronald J Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8:229–256, 1992.

M D Zeiler, M Ranzato, R Monga, M Mao, K Yang, Q V Le, P Nguyen, A Senior, V Vanhoucke, J Dean, and G Hinton. On Rectified Linear Units for Speech Processing. In ICASSP, 2013.

# Appendices

# A. Explicit weight updates under GProp

It is instructive to describe the weight updates under GProp more explicitly.
1509.03005#66
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
67
Let θj, wj and vj denote the weight vector of unit j, according to whether it belongs to the actor, deviator or critic network. Similarly, πj denotes the influence of unit j on the network's output layer; the influence is vector-valued for actor and deviator units and scalar-valued for critic units. The weight updates in the deviator-actor-critic model, where all three networks consist of rectifier units performing stochastic gradient descent, are then as given in Algorithm 3. Units that are not active on a round do not update their weights on that round.
1509.03005#67
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
68
Algorithm 3: GProp: Explicit updates.

for rounds t = 1, 2, . . . , T do
    Network gets state s_t and responds; set ξ_t ← r_t + γ Q^{V_t}(s_{t+1}) − Q^{V_t}(s_t)
    for unit j = 1, 2, . . . , n do
        if j is an active actor unit then
            θ^j_{t+1} ← θ^j_t + η_t · ⟨G^{W_t}(s_t), π^j_t⟩ · φ^j_t(s_t)        // backpropagate G^W
        else if j is an active critic unit then
            v^j_{t+1} ← v^j_t + η_t · (ξ_t · π^j_t) · φ^j_t(s_t)                // backpropagate ξ
        else if j is an active deviator unit then
            w^j_{t+1} ← w^j_t + η_t · ⟨ξ_t · ε_t, π^j_t⟩ · φ^j_t(s_t)           // backpropagate ξ · ε

# B. Details for octopus arm experiments

Listing 1 summarizes technical information on the physical description and task setting used in the octopus arm simulator, in XML format.

Listing 1 Physical description and task setting for the octopus arm (setting.xml).

<constants>
1509.03005#68
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
69
<frictionTangential>0.4</frictionTangential>
<frictionPerpendicular>1</frictionPerpendicular>
<pressure>10</pressure>
<gravity>0.01</gravity>
<surfaceLevel>5</surfaceLevel>
<buoyancy>0.08</buoyancy>
<muscleActive>0.1</muscleActive>
<musclePassive>0.04</musclePassive>
<muscleNormalizedMinLength>0.1</muscleNormalizedMinLength>
1509.03005#69
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.03005
70
<muscleDamping>-1</muscleDamping>
<repulsionConstant>.01</repulsionConstant>
<repulsionPower>1</repulsionPower>
<repulsionThreshold>0.7</repulsionThreshold>
<torqueCoefficient>0.025</torqueCoefficient>
1509.03005#70
Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor's policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm.
http://arxiv.org/pdf/1509.03005
David Balduzzi, Muhammad Ghifary
cs.LG, cs.AI, cs.NE, stat.ML
27 pages
null
cs.LG
20150910
20150910
[ { "id": "1502.02251" }, { "id": "1509.01851" }, { "id": "1504.00702" } ]
1509.02971
0
arXiv:1509.02971v6 [cs.LG] 5 Jul 2019

Published as a conference paper at ICLR 2016

# CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING

Timothy P. Lillicrap∗, Jonathan J. Hunt∗, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver & Daan Wierstra
Google Deepmind, London, UK
{countzero, jjhunt, apritzel, heess, etom, tassa, davidsilver, wierstra}@google.com
1509.02971#0
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
1
# ABSTRACT

We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies “end-to-end”: directly from raw pixel inputs.
1509.02971#1
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
2
# INTRODUCTION

One of the primary goals of the field of artificial intelligence is to solve complex tasks from unprocessed, high-dimensional, sensory input. Recently, significant progress has been made by combining advances in deep learning for sensory processing (Krizhevsky et al., 2012) with reinforcement learning, resulting in the “Deep Q Network” (DQN) algorithm (Mnih et al., 2015) that is capable of human level performance on many Atari video games using unprocessed pixels for input. To do so, deep neural network function approximators were used to estimate the action-value function.

However, while DQN solves problems with high-dimensional observation spaces, it can only handle discrete and low-dimensional action spaces. Many tasks of interest, most notably physical control tasks, have continuous (real valued) and high dimensional action spaces. DQN cannot be straightforwardly applied to continuous domains since it relies on finding the action that maximizes the action-value function, which in the continuous valued case requires an iterative optimization process at every step.
1509.02971#2
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
3
An obvious approach to adapting deep reinforcement learning methods such as DQN to continuous domains is to simply discretize the action space. However, this has many limitations, most notably the curse of dimensionality: the number of actions increases exponentially with the number of degrees of freedom. For example, a 7 degree of freedom system (as in the human arm) with the coarsest discretization ai ∈ {−k, 0, k} for each joint leads to an action space with dimensionality: 3^7 = 2187. The situation is even worse for tasks that require fine control of actions as they require a correspondingly finer grained discretization, leading to an explosion of the number of discrete actions. Such large action spaces are difficult to explore efficiently, and thus successfully training DQN-like networks in this context is likely intractable. Additionally, naive discretization of action spaces needlessly throws away information about the structure of the action domain, which may be essential for solving many problems.

In this work we present a model-free, off-policy actor-critic algorithm using deep function approximators that can learn policies in high-dimensional, continuous action spaces. Our work is based
1509.02971#3
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
4
∗These authors contributed equally.

on the deterministic policy gradient (DPG) algorithm (Silver et al., 2014) (itself similar to NFQCA (Hafner & Riedmiller, 2011), and similar ideas can be found in (Prokhorov et al., 1997)). However, as we show below, a naive application of this actor-critic method with neural function approximators is unstable for challenging problems. Here we combine the actor-critic approach with insights from the recent success of Deep Q Network (DQN) (Mnih et al., 2013; 2015). Prior to DQN, it was generally believed that learning value functions using large, non-linear function approximators was difficult and unstable. DQN is able to learn value functions using such function approximators in a stable and robust way due to two innovations:

1. the network is trained off-policy with samples from a replay buffer to minimize correlations between samples;
2. the network is trained with a target Q network to give consistent targets during temporal difference backups.

In this work we make use of the same ideas, along with batch normalization (Ioffe & Szegedy, 2015), a recent advance in deep learning.
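A minimal sketch of the two ingredients named above, under the assumption that network parameters live in dictionaries of numpy arrays; the buffer capacity, batch size and hard-copy refresh rule below are illustrative choices, not values taken from the paper.

```python
import random
from collections import deque

class ReplayBuffer:
    """Store transitions and sample decorrelated minibatches (sketch only)."""
    def __init__(self, capacity=1_000_000):
        self.storage = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.storage.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        batch = random.sample(self.storage, batch_size)
        states, actions, rewards, next_states, dones = map(list, zip(*batch))
        return states, actions, rewards, next_states, dones

def refresh_target(target_params, online_params):
    """Copy the online Q-network parameters into the target network used for TD targets."""
    for name, value in online_params.items():
        target_params[name] = value.copy()   # assumes numpy arrays
```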
1509.02971#4
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
5
In order to evaluate our method we constructed a variety of challenging physical control problems that involve complex multi-joint movements, unstable and rich contact dynamics, and gait behavior. Among these are classic problems such as the cartpole swing-up problem, as well as many new domains. A long-standing challenge of robotic control is to learn an action policy directly from raw sensory input such as video. Accordingly, we placed a fixed viewpoint camera in the simulator and attempted all tasks using both low-dimensional observations (e.g. joint angles) and directly from pixels. Our model-free approach, which we call Deep DPG (DDPG), can learn competitive policies for all of our tasks using low-dimensional observations (e.g. cartesian coordinates or joint angles) using the same hyper-parameters and network structure. In many cases, we are also able to learn good policies directly from pixels, again keeping hyperparameters and network structure constant.1
1509.02971#5
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
6
A key feature of the approach is its simplicity: it requires only a straightforward actor-critic architecture and learning algorithm with very few “moving parts”, making it easy to implement and scale to more difficult problems and larger networks. For the physical control problems we compare our results to a baseline computed by a planner (Tassa et al., 2012) that has full access to the underlying simulated dynamics and its derivatives (see supplementary information). Interestingly, DDPG can sometimes find policies that exceed the performance of the planner, in some cases even when learning from pixels (the planner always plans over the underlying low-dimensional state space).

# 2 BACKGROUND

We consider a standard reinforcement learning setup consisting of an agent interacting with an environment E in discrete timesteps. At each timestep t the agent receives an observation xt, takes an action at and receives a scalar reward rt. In all the environments considered here the actions are real-valued at ∈ IR^N. In general, the environment may be partially observed so that the entire history of the observation, action pairs st = (x1, a1, ..., at−1, xt) may be required to describe the state. Here, we assumed the environment is fully-observed so st = xt.
1509.02971#6
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
7
An agent's behavior is defined by a policy, π, which maps states to a probability distribution over the actions, π : S → P(A). The environment, E, may also be stochastic. We model it as a Markov decision process with a state space S, action space A = IR^N, an initial state distribution p(s1), transition dynamics p(st+1|st, at), and reward function r(st, at). The return from a state is defined as the sum of discounted future reward Rt = Σ_{i=t}^{T} γ^(i−t) r(si, ai) with a discounting factor γ ∈ [0, 1]. Note that the return depends on the actions chosen, and therefore on the policy π, and may be stochastic. The goal in reinforcement learning is to learn a policy which maximizes the expected return from the start distribution J = E_{ri,si∼E,ai∼π}[R1]. We denote the discounted state visitation distribution for a policy π as ρ^π.

The action-value function is used in many reinforcement learning algorithms. It describes the expected return after taking an action at in state st and thereafter following policy π:

Qπ(st, at) = E_{ri≥t,si>t∼E,ai>t∼π}[Rt | st, at]     (1)
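The return and objective just defined reduce to a simple discounted sum; the example reward sequences and discount factor below are purely illustrative.

```python
def discounted_return(rewards, gamma):
    """R_t = sum_k gamma**k * r_{t+k} for a reward sequence [r_t, ..., r_T]."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# J is the expected return from the start of an episode, estimated here by
# averaging R_1 over sampled episodes.
episodes = [[1.0, 0.0, 2.0], [0.5, 0.5]]
J_estimate = sum(discounted_return(ep, gamma=0.9) for ep in episodes) / len(episodes)
print(J_estimate)   # (2.62 + 0.95) / 2 = 1.785
```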
1509.02971#7
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
8
1You can view a movie of some of the learned policies at https://goo.gl/J4PIAz

Many approaches in reinforcement learning make use of the recursive relationship known as the Bellman equation:

Qπ(st, at) = E_{rt,st+1∼E}[r(st, at) + γ E_{at+1∼π}[Qπ(st+1, at+1)]]     (2)

If the target policy is deterministic we can describe it as a function µ : S → A and avoid the inner expectation:

Qµ(st, at) = E_{rt,st+1∼E}[r(st, at) + γ Qµ(st+1, µ(st+1))]     (3)

The expectation depends only on the environment. This means that it is possible to learn Qµ off-policy, using transitions which are generated from a different stochastic behavior policy β.
1509.02971#8
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
9
Q-learning (Watkins & Dayan, 1992), a commonly used off-policy algorithm, uses the greedy policy µ(s) = arg maxa Q(s, a). We consider function approximators parameterized by θQ, which we optimize by minimizing the loss:

L(θQ) = E_{st∼ρβ, at∼β, rt∼E}[(Q(st, at|θQ) − yt)^2]     (4)

where

yt = r(st, at) + γ Q(st+1, µ(st+1)|θQ).     (5)

While yt is also dependent on θQ, this is typically ignored.
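Equations (4)–(5) translate directly into a target-plus-squared-error computation. In the sketch below, the callables `target_q` and `target_policy`, the numpy-array inputs and the terminal masking via `dones` are implementation assumptions rather than part of the equations themselves.

```python
import numpy as np

def td_targets(rewards, next_states, dones, target_q, target_policy, gamma):
    """y_t = r(s_t, a_t) + gamma * Q(s_{t+1}, mu(s_{t+1}))  (Eq. 5).
    `dones` zeroes the bootstrap term at episode ends (a common convention)."""
    next_actions = target_policy(next_states)
    bootstrap = target_q(next_states, next_actions)
    return rewards + gamma * (1.0 - dones) * bootstrap

def critic_loss(q_values, targets):
    """Mean squared error of Eq. (4); the targets are treated as constants."""
    return np.mean((np.asarray(q_values) - np.asarray(targets)) ** 2)
```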
1509.02971#9
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
10
The use of large, non-linear function approximators for learning value or action-value functions has often been avoided in the past since theoretical performance guarantees are impossible, and practically learning tends to be unstable. Recently, (Mnih et al., 2013; 2015) adapted the Q-learning algorithm in order to make effective use of large neural networks as function approximators. Their algorithm was able to learn to play Atari games from pixels. In order to scale Q-learning they introduced two major changes: the use of a replay buffer, and a separate target network for calculating yt. We employ these in the context of DDPG and explain their implementation in the next section.
1509.02971#10
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
11
# 3 ALGORITHM It is not possible to straightforwardly apply Q-learning to continuous action spaces, because in continuous spaces finding the greedy policy requires an optimization of at at every timestep; this optimization is too slow to be practical with large, unconstrained function approximators and nontrivial action spaces. Instead, here we used an actor-critic approach based on the DPG algorithm (Silver et al., 2014). The DPG algorithm maintains a parameterized actor function µ(s|θµ) which specifies the current policy by deterministically mapping states to a specific action. The critic Q(s, a) is learned using the Bellman equation as in Q-learning. The actor is updated by applying the chain rule to the expected return from the start distribution J with respect to the actor parameters: ∇_{θµ} J ≈ E_{st∼ρβ}[∇_{θµ} Q(s, a|θQ)|_{s=st, a=µ(st|θµ)}] = E_{st∼ρβ}[∇_a Q(s, a|θQ)|_{s=st, a=µ(st)} ∇_{θµ} µ(s|θµ)|_{s=st}] (6) Silver et al. (2014) proved that this is the policy gradient, the gradient of the policy’s performance 2.
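The chain-rule update in equation (6) can be implemented by backpropagating through the critic into the actor. The sketch below is a minimal illustration under assumed PyTorch modules `actor`, `critic` and an optimizer `actor_opt`; it is not the paper's implementation.

```python
def actor_update(actor, critic, actor_opt, s):
    actor_opt.zero_grad()
    # maximize E[Q(s, mu(s|theta_mu))] by descending on its negative
    loss = -critic(s, actor(s)).mean()
    loss.backward()   # autograd applies the chain rule: grad_a Q * grad_theta_mu mu
    actor_opt.step()
```

Minimizing −Q(s, µ(s)) with respect to the actor parameters is exactly gradient ascent along ∇aQ · ∇θµ µ, as in equation (6).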
1509.02971#11
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
12
Silver et al. (2014) proved that this is the policy gradient, the gradient of the policy’s performance 2. As with Q learning, introducing non-linear function approximators means that convergence is no longer guaranteed. However, such approximators appear essential in order to learn and generalize on large state spaces. NFQCA (Hafner & Riedmiller, 2011), which uses the same update rules as DPG but with neural network function approximators, uses batch learning for stability, which is intractable for large networks. A minibatch version of NFQCA which does not reset the policy at each update, as would be required to scale to large networks, is equivalent to the original DPG, which we compare to here. Our contribution here is to provide modifications to DPG, inspired by the success of DQN, which allow it to use neural network function approximators to learn in large state and action spaces online. We refer to our algorithm as Deep DPG (DDPG, Algorithm 1). 2In practice, as is commonly done in policy gradient implementations, we ignored the discount in the state-visitation distribution ρβ.
1509.02971#12
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
13
One challenge when using neural networks for reinforcement learning is that most optimization algorithms assume that the samples are independently and identically distributed. Obviously, when the samples are generated from exploring sequentially in an environment this assumption no longer holds. Additionally, to make efficient use of hardware optimizations, it is essential to learn in minibatches, rather than online. As in DQN, we used a replay buffer to address these issues. The replay buffer is a finite sized cache R. Transitions were sampled from the environment according to the exploration policy and the tuple (st, at, rt, st+1) was stored in the replay buffer. When the replay buffer was full the oldest samples were discarded. At each timestep the actor and critic are updated by sampling a minibatch uniformly from the buffer. Because DDPG is an off-policy algorithm, the replay buffer can be large, allowing the algorithm to benefit from learning across a set of uncorrelated transitions.
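A minimal replay buffer matching the description above might look as follows; the class and method names are illustrative rather than taken from any released code.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # fixed-size cache: when full, the oldest transitions are discarded first
        self.buffer = deque(maxlen=capacity)

    def add(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        # uniform sampling gives (approximately) uncorrelated minibatches
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s_next = zip(*batch)
        return s, a, r, s_next
```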
1509.02971#13
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
14
Directly implementing Q learning (equation 4) with neural networks proved to be unstable in many environments. Since the network Q(s, a|θQ) being updated is also used in calculating the target value (equation 5), the Q update is prone to divergence. Our solution is similar to the target network used in (Mnih et al., 2015) but modified for actor-critic and using “soft” target updates, rather than directly copying the weights. We create a copy of the actor and critic networks, Q′(s, a|θQ′) and µ′(s|θµ′) respectively, that are used for calculating the target values. The weights of these target networks are then updated by having them slowly track the learned networks: θ′ ← τθ + (1 − τ)θ′ with τ ≪ 1. This means that the target values are constrained to change slowly, greatly improving the stability of learning. This simple change moves the relatively unstable problem of learning the action-value function closer to the case of supervised learning, a problem for which robust solutions exist. We found that having both a target µ′ and Q′ was required to have stable targets yi in order to consistently train the critic without divergence. This may slow learning, since the target network delays the propagation of value estimations. However, in practice we found this was greatly outweighed by the stability of learning.
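The soft target update θ′ ← τθ + (1 − τ)θ′ takes only a few lines; this sketch assumes PyTorch modules with identical architectures and is applied to both the actor/target-actor and critic/target-critic pairs.

```python
def soft_update(net, target_net, tau=0.001):
    # theta' <- tau * theta + (1 - tau) * theta', applied parameter-by-parameter
    for p, p_targ in zip(net.parameters(), target_net.parameters()):
        p_targ.data.copy_(tau * p.data + (1.0 - tau) * p_targ.data)
```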
1509.02971#14
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
15
When learning from low dimensional feature vector observations, the different components of the observation may have different physical units (for example, positions versus velocities) and the ranges may vary across environments. This can make it difficult for the network to learn effectively and may make it difficult to find hyper-parameters which generalise across environments with different scales of state values. One approach to this problem is to manually scale the features so they are in similar ranges across environments and units. We address this issue by adapting a recent technique from deep learning called batch normalization (Ioffe & Szegedy, 2015). This technique normalizes each dimension across the samples in a minibatch to have zero mean and unit variance. In addition, it maintains a running average of the mean and variance to use for normalization during testing (in our case, during exploration or evaluation). In deep networks, it is used to minimize covariate shift during training, by ensuring that each layer receives whitened input. In the low-dimensional case, we used batch normalization on the state input and all layers of the µ network and all layers of the Q network prior to the action input (details of the networks are given in the supplementary material). With batch normalization, we were able to learn effectively across many different tasks with differing types of units, without needing to manually ensure the units were within a set range.
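As a small illustration of this normalization step (observation dimensions and scales below are made up), PyTorch's BatchNorm1d whitens each dimension across the minibatch while keeping running statistics for use at evaluation time:

```python
import torch
import torch.nn as nn

# hypothetical minibatch of 64 observations with 17 dimensions of mixed scales
states = torch.randn(64, 17) * torch.tensor([1.0] * 8 + [100.0] * 9)
bn = nn.BatchNorm1d(num_features=17)

whitened = bn(states)  # each dimension ~zero mean, ~unit variance across the batch
bn.eval()              # at evaluation time the running averages are used instead
```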
1509.02971#15
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
16
A major challenge of learning in continuous action spaces is exploration. An advantage of off-policy algorithms such as DDPG is that we can treat the problem of exploration independently from the learning algorithm. We constructed an exploration policy µ′ by adding noise sampled from a noise process N to our actor policy: µ′(st) = µ(st|θµt) + N (7) N can be chosen to suit the environment. As detailed in the supplementary materials we used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) to generate temporally correlated exploration for exploration efficiency in physical control problems with inertia (similar use of autocorrelated noise was introduced in (Wawrzyński, 2015)). # 4 RESULTS We constructed simulated physical environments of varying levels of difficulty to test our algorithm. This included classic reinforcement learning environments such as cartpole, as well as difficult, # Algorithm 1 DDPG algorithm
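A minimal Ornstein-Uhlenbeck noise process for the exploration policy in equation (7) might be sketched as below; the θ and σ values here are illustrative defaults, not necessarily those used in the experiments.

```python
import numpy as np

class OUNoise:
    """Temporally correlated noise: x_{t+1} = x_t + theta*(mu - x_t) + sigma*N(0, 1)."""
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.x = np.ones(dim) * mu

    def sample(self):
        self.x += self.theta * (self.mu - self.x) + self.sigma * np.random.randn(*self.x.shape)
        return self.x  # added to the actor's action at each timestep
```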
1509.02971#16
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
17
# Algorithm 1 DDPG algorithm Randomly initialize critic network Q(s, a|θQ) and actor µ(s|θµ) with weights θQ and θµ. Initialize target networks Q′ and µ′ with weights θQ′ ← θQ, θµ′ ← θµ Initialize replay buffer R for episode = 1, M do Initialize a random process N for action exploration Receive initial observation state s1 for t = 1, T do Select action at = µ(st|θµ) + Nt according to the current policy and exploration noise Execute action at and observe reward rt and observe new state st+1 Store transition (st, at, rt, st+1) in R Sample a random minibatch of N transitions (si, ai, ri, si+1) from R Set yi = ri + γQ′(si+1, µ′(si+1|θµ′)|θQ′) Update critic by minimizing the loss: L = (1/N) Σi (yi − Q(si, ai|θQ))^2 Update the actor policy using the sampled policy gradient: ∇θµ J ≈ (1/N) Σi ∇a Q(s, a|θQ)|s=si,a=µ(si) ∇θµ µ(s|θµ)|s=si Update the target networks:
1509.02971#17
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
18
∇θµ J ≈ (1/N) Σi ∇a Q(s, a|θQ)|s=si,a=µ(si) ∇θµ µ(s|θµ)|s=si Update the target networks: θQ′ ← τθQ + (1 − τ)θQ′ θµ′ ← τθµ + (1 − τ)θµ′ end for end for high dimensional tasks such as gripper, tasks involving contacts such as puck striking (canada) and locomotion tasks such as cheetah (Wawrzyński, 2009). In all domains but cheetah the actions were torques applied to the actuated joints. These environments were simulated using MuJoCo (Todorov et al., 2012). Figure 1 shows renderings of some of the environments used in the task (the supplementary contains details of the environments and you can view some of the learned policies at https://goo.gl/J4PIAz).
1509.02971#18
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
19
In all tasks, we ran experiments using both a low-dimensional state description (such as joint angles and positions) and high-dimensional renderings of the environment. As in DQN (Mnih et al., 2013; 2015), in order to make the problems approximately fully observable in the high dimensional environment we used action repeats. For each timestep of the agent, we step the simulation 3 timesteps, repeating the agent’s action and rendering each time. Thus the observation reported to the agent contains 9 feature maps (the RGB of each of the 3 renderings) which allows the agent to infer velocities using the differences between frames. The frames were downsampled to 64x64 pixels and the 8-bit RGB values were converted to floating point scaled to [0, 1]. See supplementary information for details of our network structure and hyperparameters. We evaluated the policy periodically during training by testing it without exploration noise. Figure 2 shows the performance curve for a selection of environments. We also report results with components of our algorithm (i.e. the target network or batch normalization) removed. In order to perform well across all tasks, both of these additions are necessary. In particular, learning without a target network, as in the original work with DPG, is very poor in many environments.
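The pixel pipeline described above (3 action-repeated RGB renderings, downsampled to 64x64, scaled to [0, 1] and stacked into 9 feature maps) could be sketched as follows; `cv2` is an assumed dependency used only for resizing, and `frames` is assumed to be a list of three HxWx3 uint8 arrays.

```python
import numpy as np
import cv2  # assumed available for image resizing

def preprocess(frames):
    resized = [cv2.resize(f, (64, 64)) for f in frames]        # downsample each rendering
    obs = np.concatenate(resized, axis=-1).astype(np.float32)  # stack into 64x64x9 feature maps
    return obs / 255.0                                         # 8-bit RGB -> floats in [0, 1]
```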
1509.02971#19
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
20
Surprisingly, in some simpler tasks, learning policies from pixels is just as fast as learning using the low-dimensional state descriptor. This may be due to the action repeats making the problem simpler. It may also be that the convolutional layers provide an easily separable representation of state space, which is straightforward for the higher layers to learn on quickly. Table 1 summarizes DDPG’s performance across all of the environments (results are averaged over 5 replicas). We normalized the scores using two baselines. The first baseline is the mean return from a naive policy which samples actions from a uniform distribution over the valid action space. The second baseline is iLQG (Todorov & Li, 2005), a planning based solver with full access to the underlying physical model and its derivatives. We normalize scores so that the naive policy has a mean score of 0 and iLQG has a mean score of 1. DDPG is able to learn good policies on many of the tasks, and in many cases some of the replicas learn policies which are superior to those found by iLQG, even when learning directly from pixels.
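The normalization used for Table 1 amounts to an affine rescaling of raw returns; a minimal sketch (variable names are illustrative):

```python
def normalize_score(score, naive_mean, ilqg_mean):
    # naive uniform policy -> 0, iLQG planner -> 1
    return (score - naive_mean) / (ilqg_mean - naive_mean)
```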
1509.02971#20
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
21
It can be challenging to learn accurate value estimates. Q-learning, for example, is prone to overestimating values (Hasselt, 2010). We examined DDPG’s estimates empirically by comparing the values estimated by Q after training with the true returns seen on test episodes. Figure 3 shows that in simple tasks DDPG estimates returns accurately without systematic biases. For harder tasks the Q estimates are worse, but DDPG is still able to learn good policies. To demonstrate the generality of our approach we also include Torcs, a racing game where the actions are acceleration, braking and steering. Torcs has previously been used as a testbed in other policy learning approaches (Koutník et al., 2014b). We used an identical network architecture and learning algorithm hyper-parameters to the physics tasks but altered the noise process for exploration because of the very different time scales involved. Both from low-dimensional observations and from pixels, some replicas were able to learn reasonable policies that are able to complete a circuit around the track though other replicas failed to learn a sensible policy.
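The comparison behind Figure 3 requires the observed discounted return from each visited state of a test episode, which can then be plotted against the critic's estimate Q(st, at); a minimal sketch:

```python
def discounted_returns(rewards, gamma=0.99):
    # backward recursion: G_t = r_t + gamma * G_{t+1}
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))  # one observed return per visited state
```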
1509.02971#21
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
22
Figure 1: Example screenshots of a sample of environments we attempt to solve with DDPG. In order from the left: the cartpole swing-up task, a reaching task, a grasp and move task, a puck-hitting task, a monoped balancing task, two locomotion tasks and Torcs (driving simulator). We tackle all tasks using both low-dimensional feature vector and high-dimensional pixel inputs. Detailed descriptions of the environments are provided in the supplementary. Movies of some of the learned policies are available at https://goo.gl/J4PIAz. [Figure 2 plot: performance curves for Cart Pendulum Swing-up, Cartpole Swing-up Fixed, Reacher, Monoped Balancing, Gripper, Blockworld, Puck Shooting, Cheetah and Moving Gripper; x-axis in million steps.] Figure 2: Performance curves for a selection of domains using variants of DPG: original DPG algorithm (minibatch NFQCA) with batch normalization (light grey), with target network (dark grey), with target networks and batch normalization (green), with target networks from pixel-only inputs (blue). Target networks are crucial. # 5 RELATED WORK
1509.02971#22
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
23
# 5 RELATED WORK The original DPG paper evaluated the algorithm with toy problems using tile-coding and linear function approximators. It demonstrated data efficiency advantages for off-policy DPG over both on- and off-policy stochastic actor critic. It also solved one more challenging task in which a multi-jointed octopus arm had to strike a target with any part of the limb. However, that paper did not demonstrate scaling the approach to large, high-dimensional observation spaces as we have here. It has often been assumed that standard policy search methods such as those explored in the present work are simply too fragile to scale to difficult problems (Levine et al., 2015). Standard policy search [Figure 3 plot: density of estimated Q values versus observed returns for the Pendulum, Cartpole and Cheetah tasks.] Figure 3: Density plot showing estimated Q values versus observed returns sampled from test episodes on 5 replicas. In simple domains such as pendulum and cartpole the Q values are quite accurate. In more complex tasks, the Q estimates are less accurate, but can still be used to learn competent policies. Dotted line indicates unity, units are arbitrary.
1509.02971#23
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
24
Table 1: Performance after training across all environments for at most 2.5 million steps. We report both the average and best observed (across 5 runs). All scores, except Torcs, are normalized so that a random agent receives 0 and a planning algorithm 1; for Torcs we present the raw reward score. We include results from the DDPG algorithm in the low-dimensional (lowd) version of the environment and high-dimensional (pix). For comparison we also include results from the original DPG algorithm with a replay buffer and batch normalization (cntrl). Rav,lowd Rbest,lowd Rbest,pix Rav,cntrl Rbest,cntrl -0.080 -0.139 0.125 -0.045 0.343 0.244 -0.468 0.197 0.143 0.583 -0.008 0.259 0.290 0.620 0.461 0.557 -0.031 0.078 0.198 0.416 0.099 0.231 0.204 -0.046 1.010 0.393 -911.034
1509.02971#24
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
25
is thought to be difficult because it deals simultaneously with complex environmental dynamics and a complex policy. Indeed, most past work with actor-critic and policy optimization approaches have had difficulty scaling up to more challenging problems (Deisenroth et al., 2013). Typically, this is due to instability in learning wherein progress on a problem is either destroyed by subsequent learning updates, or else learning is too slow to be practical. Recent work with model-free policy search has demonstrated that it may not be as fragile as previously supposed. Wawrzyński (2009); Wawrzyński & Tanwani (2013) has trained stochastic policies
1509.02971#25
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
26
in an actor-critic framework with a replay buffer. Concurrent with our work, Balduzzi & Ghifary (2015) extended the DPG algorithm with a “deviator” network which explicitly learns ∂Q/∂a. However, they only train on two low-dimensional domains. Heess et al. (2015) introduced SVG(0) which also uses a Q-critic but learns a stochastic policy. DPG can be considered the deterministic limit of SVG(0). The techniques we described here for scaling DPG are also applicable to stochastic policies by using the reparametrization trick (Heess et al., 2015; Schulman et al., 2015a). Another approach, trust region policy optimization (TRPO) (Schulman et al., 2015b), directly constructs stochastic neural network policies without decomposing problems into optimal control and supervised phases. This method produces near monotonic improvements in return by making carefully chosen updates to the policy parameters, constraining updates to prevent the new policy from diverging too far from the existing policy. This approach does not require learning an action-value function, and (perhaps as a result) appears to be significantly less data efficient.
1509.02971#26
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
27
To combat the challenges of the actor-critic approach, recent work with guided policy search (GPS) algorithms (e.g., (Levine et al., 2015)) decomposes the problem into three phases that are relatively easy to solve: first, it uses full-state observations to create locally-linear approximations of the dynamics around one or more nominal trajectories, and then uses optimal control to find the locally-linear optimal policy along these trajectories; finally, it uses supervised learning to train a complex, non-linear policy (e.g. a deep neural network) to reproduce the state-to-action mapping of the optimized trajectories.
1509.02971#27
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
28
This approach has several benefits, including data efficiency, and has been applied successfully to a variety of real-world robotic manipulation tasks using vision. In these tasks GPS uses a similar convolutional policy network to ours with 2 notable exceptions: 1. it uses a spatial softmax to reduce the dimensionality of visual features into a single (x, y) coordinate for each feature map, and 2. the policy also receives direct low-dimensional state information about the configuration of the robot at the first fully connected layer in the network. Both likely increase the power and data efficiency of the algorithm and could easily be exploited within the DDPG framework. PILCO (Deisenroth & Rasmussen, 2011) uses Gaussian processes to learn a non-parametric, probabilistic model of the dynamics. Using this learned model, PILCO calculates analytic policy gradients and achieves impressive data efficiency in a number of control problems. However, due to the high computational demand, PILCO is “impractical for high-dimensional problems” (Wahlström et al., 2015). It seems that deep function approximators are the most promising approach for scaling reinforcement learning to large, high-dimensional domains.
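For reference, a spatial softmax of the kind mentioned in point 1 converts each feature map into an expected (x, y) image coordinate; the following is a generic sketch, not the GPS implementation.

```python
import torch
import torch.nn.functional as F

def spatial_softmax(features):               # features: (batch, channels, H, W)
    b, c, h, w = features.shape
    probs = F.softmax(features.view(b, c, h * w), dim=-1).view(b, c, h, w)
    ys = torch.linspace(-1.0, 1.0, h).view(1, 1, h, 1)
    xs = torch.linspace(-1.0, 1.0, w).view(1, 1, 1, w)
    x = (probs * xs).sum(dim=(2, 3))         # expected x coordinate per feature map
    y = (probs * ys).sum(dim=(2, 3))         # expected y coordinate per feature map
    return torch.stack([x, y], dim=-1)       # (batch, channels, 2)
```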
1509.02971#28
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
29
Wahlström et al. (2015) used a deep dynamical model network along with model predictive control to solve the pendulum swing-up task from pixel input. They trained a differentiable forward model and encoded the goal state into the learned latent space. They use model-predictive control over the learned model to find a policy for reaching the target. However, this approach is only applicable to domains with goal states that can be demonstrated to the algorithm. Recently, evolutionary approaches have been used to learn competitive policies for Torcs from pixels using compressed weight parametrizations (Koutník et al., 2014a) or unsupervised learning (Koutník et al., 2014b) to reduce the dimensionality of the evolved weights. It is unclear how well these approaches generalize to other problems. # 6 CONCLUSION The work combines insights from recent advances in deep learning and reinforcement learning, resulting in an algorithm that robustly solves challenging problems across a variety of domains with continuous action spaces, even when using raw pixels for observations. As with most reinforcement learning algorithms, the use of non-linear function approximators nullifies any convergence guarantees; however, our experimental results demonstrate stable learning without the need for any modifications between environments.
1509.02971#29
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
30
Interestingly, all of our experiments used substantially fewer steps of experience than was used by DQN learning to find solutions in the Atari domain. Nearly all of the problems we looked at were solved within 2.5 million steps of experience (and usually far fewer), a factor of 20 fewer steps than DQN requires for good Atari solutions. This suggests that, given more simulation time, DDPG may solve even more difficult problems than those considered here. A few limitations to our approach remain. Most notably, as with most model-free reinforcement approaches, DDPG requires a large number of training episodes to find solutions. However, we believe that a robust model-free approach may be an important component of larger systems which may attack these limitations (Gläscher et al., 2010). # REFERENCES Balduzzi, David and Ghifary, Muhammad. Compatible value gradients for reinforcement learning of continuous deep policies. arXiv preprint arXiv:1509.03005, 2015. Deisenroth, Marc and Rasmussen, Carl E. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pp. 465–472, 2011.
1509.02971#30
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
31
Deisenroth, Marc Peter, Neumann, Gerhard, Peters, Jan, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1–142, 2013. Gläscher, Jan, Daw, Nathaniel, Dayan, Peter, and O’Doherty, John P. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 66(4):585–595, 2010. Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. JMLR W&CP Volume, volume 15, pp. 315–323, 2011. Hafner, Roland and Riedmiller, Martin. Reinforcement learning in feedback control. Machine learning, 84(1-2):137–169, 2011. Hasselt, Hado V. Double q-learning. In Advances in Neural Information Processing Systems, pp. 2613–2621, 2010.
1509.02971#31
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
32
Hasselt, Hado V. Double q-learning. In Advances in Neural Information Processing Systems, pp. 2613–2621, 2010. Heess, N., Hunt, J. J., Lillicrap, T. P., and Silver, D. Memory-based control with recurrent neural networks. NIPS Deep Reinforcement Learning Workshop (arXiv:1512.04455), 2015. Heess, Nicolas, Wayne, Gregory, Silver, David, Lillicrap, Tim, Erez, Tom, and Tassa, Yuval. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2926–2934, 2015. Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
1509.02971#32
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
33
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Koutník, Jan, Schmidhuber, Jürgen, and Gomez, Faustino. Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 conference on Genetic and evolutionary computation, pp. 541–548. ACM, 2014a. Koutník, Jan, Schmidhuber, Jürgen, and Gomez, Faustino. Online evolution of deep convolutional network for vision-based reinforcement learning. In From Animals to Animats 13, pp. 260–269. Springer, 2014b. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
1509.02971#33
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
34
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. Prokhorov, Danil V, Wunsch, Donald C, et al. Adaptive critic designs. Neural Networks, IEEE Transactions on, 8(5):997–1007, 1997. Schulman, John, Heess, Nicolas, Weber, Theophane, and Abbeel, Pieter. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3510–3522, 2015a.
1509.02971#34
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
35
Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015b. Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In ICML, 2014. Tassa, Yuval, Erez, Tom, and Todorov, Emanuel. Synthesis and stabilization of complex behaviors through online trajectory optimization. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 4906–4913. IEEE, 2012. Todorov, Emanuel and Li, Weiwei. A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. In American Control Conference, 2005. Proceedings of the 2005, pp. 300–306. IEEE, 2005. Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012.
1509.02971#35
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
36
Uhlenbeck, George E and Ornstein, Leonard S. On the theory of the Brownian motion. Physical Review, 36(5):823, 1930. Wahlström, Niklas, Schön, Thomas B, and Deisenroth, Marc Peter. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015. Watkins, Christopher JCH and Dayan, Peter. Q-learning. Machine Learning, 8(3-4):279–292, 1992. Wawrzyński, Paweł. Real-time reinforcement learning by sequential actor–critics and experience replay. Neural Networks, 22(10):1484–1497, 2009. Wawrzyński, Paweł. Control policy with autocorrelated noise in reinforcement learning for robotics. International Journal of Machine Learning and Computing, 5:91–95, 2015. Wawrzyński, Paweł and Tanwani, Ajay Kumar. Autonomous reinforcement learning with experience replay. Neural Networks, 41:156–167, 2013. # Supplementary Information: Continuous control with deep reinforcement learning # 7 EXPERIMENT DETAILS
1509.02971#36
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
37
We used Adam (Kingma & Ba, 2014) for learning the neural network parameters with a learning rate of 10^-4 and 10^-3 for the actor and critic respectively. For Q we included L2 weight decay of 10^-2 and used a discount factor of γ = 0.99. For the soft target updates we used τ = 0.001. The neural networks used the rectified non-linearity (Glorot et al., 2011) for all hidden layers. The final output layer of the actor was a tanh layer, to bound the actions. The low-dimensional networks had 2 hidden layers with 400 and 300 units respectively (≈ 130,000 parameters). Actions were not included until the 2nd hidden layer of Q. When learning from pixels we used 3 convolutional layers (no pooling) with 32 filters at each layer. This was followed by two fully connected layers with 200 units (≈ 430,000 parameters). The final layer weights and biases of both the actor and critic were initialized from a uniform distribution [−3 × 10^-3, 3 × 10^-3] and [−3 × 10^-4, 3 × 10^-4] for the low dimensional and pixel cases respectively. This was to ensure the initial
1509.02971#37
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
38
3 × 10^-3] and [−3 × 10^-4, 3 × 10^-4] for the low dimensional and pixel cases respectively. This was to ensure the initial outputs for the policy and value estimates were near zero. The other layers were initialized from uniform distributions [−1/√f, 1/√f] where f is the fan-in of the layer. The actions were not included until the fully-connected layers. We trained with minibatch sizes of 64 for the low dimensional problems and 16 on pixels. We used a replay buffer size of 10^6.
1509.02971#38
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
39
For the exploration noise process we used temporally correlated noise in order to explore well in physical environments that have momentum. We used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) with θ = 0.15 and σ = 0.2. The Ornstein-Uhlenbeck process models the velocity of a Brownian particle with friction, which results in temporally correlated values centered around 0. # 8 PLANNING ALGORITHM Our planner is implemented as a model-predictive controller (Tassa et al., 2012): at every time step we run a single iteration of trajectory optimization (using iLQG, (Todorov & Li, 2005)), starting from the true state of the system. Every single trajectory optimization is planned over a horizon between 250ms and 600ms, and this planning horizon recedes as the simulation of the world unfolds, as is the case in model-predictive control.
1509.02971#39
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
40
The iLQG iteration begins with an initial rollout of the previous policy, which determines the nominal trajectory. We use repeated samples of simulated dynamics to approximate a linear expansion of the dynamics around every step of the trajectory, as well as a quadratic expansion of the cost function. We use this sequence of locally-linear-quadratic models to integrate the value function backwards in time along the nominal trajectory. This back-pass results in a putative modification to the action sequence that will decrease the total cost. We perform a derivative-free line-search over this direction in the space of action sequences by integrating the dynamics forward (the forward-pass), and choose the best trajectory. We store this action sequence in order to warm-start the next iLQG iteration, and execute the first action in the simulator. This results in a new state, which is used as the initial state in the next iteration of trajectory optimization. # 9 ENVIRONMENT DETAILS # 9.1 TORCS ENVIRONMENT For the Torcs environment we used a reward function which provides a positive reward at each step for the velocity of the car projected along the track direction and a penalty of −1 for collisions. Episodes were terminated if progress was not made along the track after 500 frames. # 9.2 MUJOCO ENVIRONMENTS
1509.02971#40
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
41
# 9.2 MUJOCO ENVIRONMENTS For physical control tasks we used reward functions which provide feedback at every step. In all tasks, the reward contained a small action cost. For all tasks that have a static goal state (e.g. pendulum swingup and reaching) we provide a smoothly varying reward based on distance to a goal state, and in some cases an additional positive reward when within a small radius of the target state. For grasping and manipulation tasks we used a reward with a term which encourages movement towards the payload and a second component which encourages moving the payload to the target. In locomotion tasks we reward forward action and penalize hard impacts to encourage smooth rather than hopping gaits (Schulman et al., 2015b). In addition, we used a negative reward and early termination for falls which were determined by simple thresholds on the height and torso angle (in the case of walker2d). Table 2 states the dimensionality of the problems and below is a summary of all the physics environments.
1509.02971#41
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
42
Table 2 states the dimensionality of the problems and below is a summary of all the physics environments.

| task name | dim(s) | dim(a) | dim(o) |
| --- | --- | --- | --- |
| blockworld1 | 18 | 5 | 43 |
| blockworld3da | 31 | 9 | 102 |
| canada | 22 | 7 | 62 |
| canada2d | 14 | 3 | 29 |
| cart | 2 | 1 | 3 |
| cartpole | 4 | 1 | 14 |
| cartpoleBalance | 4 | 1 | 14 |
| cartpoleParallelDouble | 6 | 1 | 16 |
| cartpoleParallelTriple | 8 | 1 | 23 |
| cartpoleSerialDouble | 6 | 1 | 14 |
| cartpoleSerialTriple | 8 | 1 | 23 |
| cheetah | 18 | 6 | 17 |
| fixedReacher | 10 | 3 | 23 |
| fixedReacherDouble | 8 | 2 | 18 |
| fixedReacherSingle | 6 | 1 | 13 |
| gripper | 18 | 5 | 43 |
| gripperRandom | 18 | 5 | 43 |
| hardCheetah | 18 | 6 | 17 |
| hardCheetahNice | 18 | 6 | 17 |
| hopper | 14 | 4 | 14 |
| hyq | 37 | 12 | 37 |
| hyqKick | 37 | 12 | 37 |
| movingGripper | 22 | 7 | 49 |
| movingGripperRandom | 22 | 7 | 49 |
| pendulum | 2 | 1 | 3 |
| reacher | 10 | 3 | 23 |
| reacher3daFixedTarget | 20 | 7 | 61 |
| reacher3daRandomTarget | 20 | 7 | 61 |
| reacherDouble | 6 | 1 | 13 |
| reacherObstacle | 18 | 5 | 38 |
| reacherSingle | 6 | 1 | 13 |
| walker2d | 18 | 6 | 41 |
1509.02971#42
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
43
Table 2: Dimensionality of the MuJoCo tasks: the dimensionality of the underlying physics model dim(s), number of action dimensions dim(a) and observation dimensions dim(o). task name Brief Description blockworld1 Agent is required to use an arm with gripper constrained to the 2D plane to grab a falling block and lift it against gravity to a fixed target position.
1509.02971#43
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
44
blockworld3da Agent is required to use a human-like arm with 7-DOF and a simple gripper to grab a block and lift it against gravity to a fixed target position. canada Agent is required to use a 7-DOF arm with hockey-stick like appendage to hit a ball to a target. canada2d Agent is required to use an arm with hockey-stick like appendage to hit a ball initialized to a random start location to a random target location. cart Agent must move a simple mass to rest at 0. The mass begins each trial in random positions and with random velocities. cartpole The classic cart-pole swing-up task. Agent must balance a pole attached to a cart by applying forces to the cart alone. The pole starts each episode hanging upside-down. cartpoleBalance The classic cart-pole balance task. Agent must balance a pole attached to a cart by applying forces to the cart alone. The pole starts in the upright position at the beginning of each episode. cartpoleParallelDouble Variant on the classic cart-pole. Two poles, both attached to the cart, should be kept upright as much as possible. cartpoleSerialDouble Variant on the classic cart-pole. Two poles, one attached to the cart and the second attached
1509.02971#44
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
45
should be kept upright as much as possible. cartpoleSerialDouble Variant on the classic cart-pole. Two poles, one attached to the cart and the second attached to the end of the first, should be kept upright as much as possible. cartpoleSerialTriple Variant on the classic cart-pole. Three poles, one attached to the cart, the second attached to the end of the first, and the third attached to the end of the second, should be kept upright as much as possible. cheetah The agent should move forward as quickly as possible with a cheetah-like body that is constrained to the plane. This environment is based very closely on the one introduced by Wawrzyński (2009); Wawrzyński & Tanwani (2013). fixedReacher Agent is required to move a 3-DOF arm to a fixed target position. fixedReacherDouble Agent is required to move a 2-DOF arm to a fixed target position. fixedReacherSingle Agent is required to move a simple 1-DOF arm to a fixed target position. gripper Agent must use an arm with gripper appendage to grasp an object and
1509.02971#45
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
47
hardCheetah The agent should move forward as quickly as possible with a cheetah-like body that is constrained to the plane. This environment is based very closely on the one introduced by Wawrzyński (2009); Wawrzyński & Tanwani (2013), but has been made much more difficult by removing the stabilizing joint stiffness from the model. hopper Agent must balance a multiple degree of freedom monoped to keep it from falling. hyq Agent is required to keep a quadruped model based on the hyq robot from falling.
1509.02971#47
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.02971
48
movingGripper Agent must use an arm with gripper attached to a moveable platform to grasp an object and move it to a fixed target. movingGripperRandom The same as the movingGripper environment except that the object position, target position, and arm state are initialized randomly. pendulum The classic pendulum swing-up problem. The pendulum should be brought to the upright position and balanced. Torque limits prevent the agent from swinging the pendulum up directly. reacher3daFixedTarget Agent is required to move a 7-DOF human-like arm to a fixed target position. reacher3daRandomTarget Agent is required to move a 7-DOF human-like arm from random starting locations to random target positions. reacher Agent is required to move a 3-DOF arm from random starting locations to random target positions. reacherSingle Agent is required to move a simple 1-DOF arm from random starting locations to random target positions. reacherObstacle Agent is required to move a 5-DOF arm around an obstacle to a randomized target position. walker2d Agent should move forward as quickly as possible with a bipedal walker constrained to the plane without falling down or pitching the torso too far forward or backward.
1509.02971#48
Continuous control with deep reinforcement learning
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
http://arxiv.org/pdf/1509.02971
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
cs.LG, stat.ML
10 pages + supplementary
null
cs.LG
20150909
20190705
[ { "id": "1502.03167" }, { "id": "1512.04455" }, { "id": "1502.05477" }, { "id": "1509.03005" }, { "id": "1502.02251" }, { "id": "1504.00702" } ]
1509.00685
0
# A Neural Attention Model for Abstractive Sentence Summarization Alexander M. Rush Facebook AI Research / Harvard SEAS [email protected] Sumit Chopra Facebook AI Research [email protected] Jason Weston Facebook AI Research [email protected] # Abstract Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines. # Introduction [Figure 1: a heatmap showing the soft alignment between the input sentence (russian defense minister ivanov called sunday for the creation of a joint front for combating global terrorism) and the generated summary.]
1509.00685#0
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
1
Figure 1: Example output of the attention-based summarization (ABS) system. The heatmap represents a soft alignment between the input (right) and the generated summary (top). The columns represent the distribution over the input after generating each word. Summarization is an important challenge of natural language understanding. The aim is to produce a condensed representation of an input text that captures the core meaning of the original. Most successful summarization systems utilize extractive approaches that crop out and stitch together portions of the text to produce a condensed version. In contrast, abstractive summarization attempts to produce a bottom-up summary, aspects of which may not appear as part of the original. We focus on the task of sentence-level summarization. While much work on this task has looked at deletion-based sentence compression techniques (Knight and Marcu (2002), among many others), studies of human summarizers show that it is common to apply various other operations while condensing, such as paraphrasing, generalization, and reordering (Jing, 2002). Past work has modeled this abstractive summarization problem either using linguistically-inspired constraints (Dorr et al., 2003; Zajic et al., 2004) or with syntactic transformations of the input text (Cohn and
1509.00685#1
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
2
Lapata, 2008; Woodsend et al., 2010). These approaches are described in more detail in Section 6. We instead explore a fully data-driven approach for generating abstractive summaries. Inspired by the recent success of neural machine translation, we combine a neural language model with a contextual input encoder. Our encoder is modeled off of the attention-based encoder of Bahdanau et al. (2014) in that it learns a latent soft alignment over the input text to help inform the summary (as shown in Figure 1). Crucially both the encoder and the generation model are trained jointly on the sentence summarization task. The model is described in detail in Section 3. Our model also incorporates a beam-search decoder as well as additional features to model extractive elements; these aspects are discussed in Sections 4 and 5. This approach to summarization, which we call Attention-Based Summarization (ABS), incorporates less linguistic structure than comparable abstractive summarization approaches, but can easily Input (x_1, . . . , x_18). First sentence of article: russian defense minister ivanov called sunday for the creation of a joint front for combating global terrorism Output (y_1, . . . , y_8). Generated headline: russia calls for joint front against terrorism
1509.00685#2
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
3
Figure 2: Example input sentence and the generated summary. The score of generating y_{i+1} (terrorism) is based on the context y_c (for . . . against) as well as the input x_1 . . . x_18. Note that the summary generated is abstractive which makes it possible to generalize (russian defense minister to russia) and paraphrase (for combating to against), in addition to compressing (dropping the creation of), see Jing (2002) for a survey of these editing operations. scale to train on a large amount of data. Since our system makes no assumptions about the vocabulary of the generated summary it can be trained directly on any document-summary pair.^1 This allows us to train a summarization model for headline-generation on a corpus of article pairs from Gigaword (Graff et al., 2003) consisting of around 4 million articles. An example of generation is given in Figure 2, and we discuss the details of this task in Section 7.
1509.00685#3
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
4
To test the effectiveness of this approach we run extensive comparisons with multiple abstractive and extractive baselines, including traditional syntax-based systems, integer linear program-constrained systems, information-retrieval style approaches, as well as statistical phrase-based machine translation. Section 8 describes the results of these experiments. Our approach outperforms a machine translation system trained on the same large-scale dataset and yields a large improvement over the highest scoring system in the DUC-2004 competition. a sequence y_1, . . . , y_N. Note that in contrast to related tasks, like machine translation, we will assume that the output length N is fixed, and that the system knows the length of the summary before generation.^2 Next, define the generating set Y ⊂ ({0, 1}^V, . . . , {0, 1}^V) as all possible sentences of length N, i.e. for all i and y ∈ Y, y_i is an indicator. We say a system is abstractive if it tries to find the optimal sequence from this set Y, argmax_{y∈Y} s(x, y), (1) under a scoring function s : X × Y → R. Contrast this to a fully extractive sentence summary^3 which transfers words from the input:
1509.00685#4
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
5
under a scoring function s : X × Y → R. Contrast this to a fully extractive sentence summary^3 which transfers words from the input: argmax_{m∈{1,...,M}^N} s(x, x_[m_1,...,m_N]), (2) or to the related problem of sentence compression that concentrates on deleting words from the input: argmax_{m∈{1,...,M}^N, m_{i−1}<m_i} s(x, x_[m_1,...,m_N]). (3) # 2 Background We begin by defining the sentence summarization task. Given an input sentence, the goal is to produce a condensed summary. Let the input consist of a sequence of M words x_1, . . . , x_M coming from a fixed vocabulary V of size |V| = V. We will represent each word as an indicator vector x_i ∈ {0, 1}^V for i ∈ {1, . . . , M}, sentences as a sequence of indicators, and X as the set of possible inputs. Furthermore define the notation x_[i,j,k] to indicate the sub-sequence of elements i, j, k.
1509.00685#5
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
6
While abstractive summarization poses a more difficult generation challenge, the lack of hard constraints gives the system more freedom in generation and allows it to fit with a wider range of training data. A summarizer takes x as input and outputs a shortened sentence y of length N < M. We will assume that the words in the summary also come from the same vocabulary V and that the output is a sequence y_1, . . . , y_N. In this work we focus on factored scoring functions, s, that take into account a fixed window of previous words: s(x, y) ≈ Σ_{i=0}^{N−1} g(y_{i+1}, x, y_c), (4) where y_c is the window of previously generated words, of size C. In particular we consider the conditional log-probability of a summary given the input, s(x, y) = log p(y|x; θ). We can write this as: [Footnotes: 1. In contrast to large-scale sentence compression systems like Filippova and Altun (2013) which require monotonic aligned compressions. 2. For the DUC-2004 evaluation, it is actually the number of bytes of the output that is capped. More detail is given in Section 7. 3. Unfortunately the literature is inconsistent on the formal definition of this distinction. Some systems self-described as abstractive would be extractive under our definition.]
1509.00685#6
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
7
of size C. In particular we consider the conditional log-probability of a summary given the input, s(x, y) = log p(y|x; θ). We can write this as: log p(y|x; θ) ≈ Σ_{i=0}^{N−1} log p(y_{i+1}|x, y_c; θ), where we make a Markov assumption on the length of the context as size C and assume for i < 1, y_i is a special start symbol ⟨S⟩. With this scoring function in mind, our main focus will be on modelling the local conditional distribution: p(y_{i+1}|x, y_c; θ). The next section defines a parameterization for this distribution, in Section 4, we return to the question of generation for factored models, and in Section 5 we introduce a modified factored scoring function. # 3 Model The distribution of interest, p(y_{i+1}|x, y_c; θ), is a conditional language model based on the input sentence x. Past work on summarization and compression has used a noisy-channel approach to split and independently estimate a language model and a conditional summarization model (Banko et al., 2000; Knight and Marcu, 2002; Daumé III and Marcu, 2002), i.e., argmax_y log p(y|x) = argmax_y log p(y) p(x|y)
1509.00685#7
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
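The chunk above factors the summary log-probability into windowed conditional terms, with the context padded by a start symbol for i < 1. A small sketch of that scoring loop follows; cond_log_prob stands in for the learned model p(y_{i+1} | x, y_c; θ) and is hypothetical, as is the window size used in the example.

```python
# Hedged sketch of the factored scoring function: the summary log-probability is a sum of
# windowed conditional terms, with the context padded by a start symbol for i < 1.
START = "<s>"

def score(x_tokens, y_tokens, cond_log_prob, C=5):
    total = 0.0
    padded = [START] * C + list(y_tokens)
    for i in range(len(y_tokens)):
        y_c = padded[i:i + C]                  # the C most recent output words (start-padded)
        total += cond_log_prob(x_tokens, y_c, y_tokens[i])
    return total
```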
1509.00685
8
argmax_y log p(y|x) = argmax_y log p(y) p(x|y) where p(y) and p(x|y) are estimated separately. Here we instead follow work in neural machine translation and directly parameterize the original distribution as a neural network. The network contains both a neural probabilistic language model and an encoder which acts as a conditional summarization model. # 3.1 Neural Language Model The core of our parameterization is a language model for estimating the contextual probability of the next word. The language model is adapted from a standard feed-forward neural network language model (NNLM), particularly the class of NNLMs described by Bengio et al. (2003). The full model is: p(y_{i+1}|y_c, x; θ) ∝ exp(Vh + W enc(x, y_c)), ỹ_c = [E y_{i−C+1}, . . . , E y_i], h = tanh(U ỹ_c). Figure 3: (a) A network diagram for the NNLM decoder with additional encoder element. (b) A network diagram for the attention-based encoder enc3.
1509.00685#8
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
1509.00685
9
Figure 3: (a) A network diagram for the NNLM decoder with additional encoder element. (b) A network diagram for the attention-based encoder enc3. The parameters are θ = (E, U, V, W) where E ∈ R^{D×V} is a word embedding matrix, U ∈ R^{(CD)×H}, V ∈ R^{V×H}, W ∈ R^{V×H} are weight matrices,^4 D is the size of the word embeddings, and h is a hidden layer of size H. The black-box function enc is a contextual encoder term that returns a vector of size H representing the input and current context; we consider several possible variants, described subsequently. Figure 3a gives a schematic representation of the decoder architecture. # 3.2 Encoders Note that without the encoder term this represents a standard language model. By incorporating in enc and training the two elements jointly we crucially can incorporate the input text into generation. We discuss next several possible instantiations of the encoder. Bag-of-Words Encoder Our most basic model simply uses the bag-of-words of the input sentence embedded down to size H, while ignoring properties of the original order or relationships between neighboring words. We write this model as:
1509.00685#9
A Neural Attention Model for Abstractive Sentence Summarization
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
http://arxiv.org/pdf/1509.00685
Alexander M. Rush, Sumit Chopra, Jason Weston
cs.CL, cs.AI
Proceedings of EMNLP 2015
null
cs.CL
20150902
20150903
[]
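The model chunks above give the decoder equations p(y_{i+1} | y_c, x; θ) ∝ exp(Vh + W enc(x, y_c)) with h = tanh(U ỹ_c), and describe a bag-of-words encoder that embeds the input sentence down to size H. The numpy sketch below follows those shapes; the matrix orientations and the input-side embedding F used by the bag-of-words encoder are assumptions chosen so the products type-check, and the random parameters exist purely for the usage example, not as trained values.

```python
# Hedged numpy sketch of the decoder: a feed-forward NNLM over the context window plus a
# bag-of-words encoder of the input (averaged input-word embeddings). Illustrative only.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def next_word_distribution(x_ids, yc_ids, params):
    E, U, V, W, F = params   # E: D x Vocab, U: H x (C*D), V: Vocab x H, W: Vocab x H, F: H x Vocab
    y_tilde = np.concatenate([E[:, j] for j in yc_ids])   # embed and concatenate the context words
    h = np.tanh(U @ y_tilde)                               # hidden layer of the NNLM
    enc = F[:, x_ids].mean(axis=1)                         # bag-of-words encoding of the input
    return softmax(V @ h + W @ enc)                        # p(y_{i+1} | x, y_c)

# Usage example with tiny random parameters (hypothetical sizes)
rng = np.random.default_rng(0)
Vocab, D, H, C = 100, 8, 16, 5
params = (rng.normal(size=(D, Vocab)), rng.normal(size=(H, C * D)),
          rng.normal(size=(Vocab, H)), rng.normal(size=(Vocab, H)),
          rng.normal(size=(H, Vocab)))
p = next_word_distribution(x_ids=[3, 7, 42], yc_ids=[1, 2, 3, 4, 5], params=params)
assert abs(p.sum() - 1.0) < 1e-9   # a valid distribution over the vocabulary
```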