| id | title | content | prechunk_id | postchunk_id | arxiv_id | references |
|---|---|---|---|---|---|---|
1509.03005#39 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Proof Compatibility condition C1 follows from Lemmas 8 and 9. Compatibility condition C2 holds since the Critic and Deviator minimize the Bellman gradient error with respect to W and V, which also, implicitly, minimizes the Bellman gradient error with respect to the corresponding reparametrized $\tilde{w}$'s for each Actor-unit. Theorem 6 shows that each Actor-unit satisfies the conditions for compatible function approximation and so follows the correct gradient when performing weight updates. # 5.4 Structural credit assignment for multiagent learning It is interesting to relate our approach to the literature on multiagent reinforcement learning (Guestrin et al., 2002; Agogino and Tumer, 2004, 2008). In particular, (HolmesParker et al., 2014) consider the structural credit assignment problem within populations of interacting agents: how to reward individual agents in a population for rewards based on their collective behavior? They propose to train agents within populations with a difference-based objective of the form | 1509.03005#38 | 1509.03005#40 | 1509.03005 | [
"1502.02251"
] |
1509.03005#40 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | $D_j = Q(\mathbf{z}) - Q(\mathbf{z}_{-j}, c_j)$ (5) where $Q$ is the objective function to be maximized; $z_j$ and $\mathbf{z}_{-j}$ are the system variables that are and are not under the control of agent $j$ respectively, and $c_j$ is a fixed counterfactual action. In our setting, the gradient used by Actor-unit $j$ to update its weights can be described explicitly: Lemma 10 (local policy gradients) Actor-unit $j$ follows policy gradient $\nabla_{\theta^j} J[\mu_{\theta^j}] = \nabla_{\theta^j}\mu_{\theta^j}(s) \cdot \langle \psi^j, G^W(s) \rangle$, where $\langle \psi^j, G^W(s) \rangle \approx D_{\psi^j} Q^{\mu}(s)$ is Deviator's estimate of the directional derivative of the value function in the direction of Actor-unit $j$'s influence. Proof Follows from Lemma 7b. Notice that $\nabla_{z_j} Q = \nabla_{z_j} D_j$ in Eq. (5). | 1509.03005#39 | 1509.03005#41 | 1509.03005 | [
"1502.02251"
] |
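As a concrete illustration of the difference-based objective in Eq. (5), the sketch below evaluates D_j for one agent against a fixed counterfactual. The global objective Q and the counterfactual value are placeholders for illustration, not part of the paper.

```python
import numpy as np

def difference_objective(Q, z, j, c_j):
    """Compute D_j = Q(z) - Q(z_{-j}, c_j) for agent j.

    Q   : callable mapping the joint variable vector to a scalar objective.
    z   : joint vector; entry j is the variable controlled by agent j.
    c_j : fixed counterfactual action substituted for agent j's entry.
    """
    z_counterfactual = z.copy()
    z_counterfactual[j] = c_j          # replace only agent j's contribution
    return Q(z) - Q(z_counterfactual)  # credit assigned to agent j

# Toy quadratic objective, purely illustrative.
Q = lambda z: -np.sum((z - 1.0) ** 2)
z = np.array([0.2, 0.9, 1.5])
print(difference_objective(Q, z, j=0, c_j=0.0))
```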
1509.03005#41 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | It follows that training the Actor-network via GProp causes the Actor-units to optimize the difference-based objective – without requiring the difference to be computed explicitly. Although the topic is beyond the scope of the current paper, it is worth exploring how suitably adapted variants of backpropagation can be applied to reinforcement learning problems in the multiagent setting. # 5.5 Comparison with related work Comparison with COPDAC-Q. Extending the standard value function approximation in Example 1 to the setting where the Actor is a neural network yields the following representation, which is used in (Silver et al., 2014) when applying COPDAC-Q to the octopus arm task: | 1509.03005#40 | 1509.03005#42 | 1509.03005 | [
"1502.02251"
] |
1509.03005#42 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Example 2 (extension of standard value approximation to neural networks) Let $\mu_\Theta : S \to A$ and $Q^V : S \to \mathbb{R}$ be an Actor and Critic neural network respectively. Suppose the Actor-network has $N$ parameters (i.e. the total number of entries in $\Theta$). It follows that the Jacobian $\nabla_\Theta \mu_\Theta(s)$ is an $(N \times d)$-matrix. The value function approximation is then $Q^{V,W}(s,a) = (a - \mu_\Theta(s))^\top \cdot \nabla_\Theta \mu_\Theta(s)^\top w$ (advantage function) $\;+\; Q^V(s)$ (Critic), where $w$ is an $N$-vector. Weight updates under COPDAC-Q, with the function approximation above, are therefore as described in Algorithm 2. Algorithm 2: Compatible Deterministic Actor-Critic (COPDAC-Q). for rounds $t = 1, 2, \ldots, T$ do: Network gets state $s_t$, responds $a_t = \mu_{\Theta_t}(s_t) + \epsilon$ | 1509.03005#41 | 1509.03005#43 | 1509.03005 | [
"1502.02251"
] |
1509.03005#43 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | where $\epsilon \sim \mathcal{N}(0, \sigma^2 \cdot I_d)$, gets reward $r_t$; $\delta_t \leftarrow r_t + \gamma Q^{V_t}(s_{t+1}) - Q^{V_t}(s_t) - \langle \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \epsilon, w_t \rangle$; $\Theta_{t+1} \leftarrow \Theta_t + \eta^\theta_t \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \nabla_\Theta \mu_{\Theta_t}(s_t)^\top \cdot w_t$; $V_{t+1} \leftarrow V_t + \eta^v_t \cdot \delta_t \cdot \nabla_V Q^{V_t}(s_t)$; $w_{t+1} \leftarrow w_t + \eta^w_t \cdot \delta_t \cdot \nabla_\Theta \mu_{\Theta_t}(s_t) \cdot \epsilon$. Let us compare GProp with COPDAC-Q, considering the three updates in turn: • Actor updates. Under GProp, the Actor backpropagates the value-gradient estimate. In contrast, under COPDAC-Q the Actor performs a complicated update that combines the policy gradient $\nabla_\Theta \mu(s)$ with the advantage function's weights – and differs substantively from backprop. | 1509.03005#42 | 1509.03005#44 | 1509.03005 | [
"1502.02251"
] |
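The sketch below spells out one round of the COPDAC-Q updates as reconstructed in Algorithm 2 above, assuming the Jacobian of the actor is available as an explicit (N x d) matrix. The network callables, learning rates and array layouts are placeholders; this is a sketch, not the authors' implementation.

```python
import numpy as np

def copdac_q_round(s, s_next, r, mu, jac_mu, Q_V, grad_V_Q,
                   theta, w, V, sigma, gamma, lr_theta, lr_w, lr_v):
    """One COPDAC-Q round with the Example 2 value approximation (sketch)."""
    d = mu(s, theta).shape[0]
    eps = sigma * np.random.randn(d)            # Gaussian exploration noise
    a = mu(s, theta) + eps                      # executed action

    J = jac_mu(s, theta)                        # (N x d) Jacobian of the actor
    # TD error corrected by the compatible advantage term <J.eps, w>
    delta = r + gamma * Q_V(s_next, V) - Q_V(s, V) - np.dot(J @ eps, w)

    theta = theta + lr_theta * (J @ (J.T @ w))  # actor: policy-gradient step
    V = V + lr_v * delta * grad_V_Q(s, V)       # critic: TD step
    w = w + lr_w * delta * (J @ eps)            # advantage weights
    return theta, w, V, a, delta
```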
1509.03005#44 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | • Deviator / advantage-function updates. Under GProp, the Deviator backpropagates the perturbed TDG-error. In contrast, COPDAC-Q uses the gradient of the Actor to update the weight vector $w$ of the advantage function. By Lemma 7d, backprop takes the form $\mathbf{g}^\top \cdot \nabla_\Theta \mu_\Theta(s)$ where $\mathbf{g}$ is a $d$-vector. In contrast, the advantage function requires computing $\nabla_\Theta \mu_\Theta(s)^\top \cdot w$, where $w$ is an $N$-vector. Although the two formulae appear superficially similar, they carry very different computational costs. The first consequence is that the parameters of $w$ must exactly line up with those of the policy. The second consequence is that, by Lemma 7c, the advantage function requires access to | 1509.03005#43 | 1509.03005#45 | 1509.03005 | [
"1502.02251"
] |
1509.03005#45 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | $(\nabla_\Theta \mu_\Theta(s))_{ij} = \phi_{ij}(s) \cdot \psi^j$ if unit $j$ is active, and $0$ else, where $\phi_{ij}(s)$ is the input from unit $i$ to unit $j$. Thus, the advantage function requires access to the input $\phi^j(s)$ and the influence $\psi^j$ of every unit in the Actor-network. • Critic updates. The critic updates for the two algorithms are essentially identical, with the TD-error replaced with the TDG-error. In short, the approximation in Example 2 that is used by COPDAC-Q is thus not well-adapted to deep learning. The main reason is that learning the advantage function requires coupling the vector $w$ with the parameters $\Theta$ | 1509.03005#44 | 1509.03005#46 | 1509.03005 | [
"1502.02251"
] |
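To make the cost contrast concrete, the toy sketch below materializes a stand-in Jacobian explicitly. In practice reverse-mode autodiff computes the product J @ g in a single backward pass without ever forming J, whereas the advantage term couples the full Jacobian to the N-dimensional vector w. The sizes here are illustrative only.

```python
import numpy as np

N, d = 10_000, 7                     # actor parameters vs. action dimension
J = np.random.randn(N, d)            # stand-in for the Jacobian d(mu)/d(Theta)

g = np.random.randn(d)               # d-vector arriving from the Deviator
w = np.random.randn(N)               # N-vector aligned with the actor's parameters

# Backprop only needs the Jacobian-vector product g^T . J; autodiff never builds J.
backprop_update = J @ g

# The compatible advantage needs every entry of J, i.e. every unit's input and influence.
advantage_update = J.T @ w
```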
1509.03005#46 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | of the actor. Comparison with computing the gradient of the value-function approximation. Perhaps the most natural approach to estimating the gradient is to simply estimate the value function, and then use its gradient as an estimate of the derivative (Jordan and Jacobs, 1990; Prokhorov and Wunsch, 1997; Wang and Si, 2001; Hafner and Riedmiller, 2011; Fairbank and Alonso, 2012; Fairbank et al., 2013). The main problem with this approach is that, to date, it has not been shown that the resulting updates of the Critic and the Actor are compatible. There are also no guarantees that the gradient of the Critic will be a good approximation to the gradient of the value function – although it is intuitively plausible. The problem becomes particularly severe when the value function is estimated via a neural network that uses activation functions that are not smooth, such as rectifiers. | 1509.03005#45 | 1509.03005#47 | 1509.03005 | [
"1502.02251"
] |
1509.03005#47 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Rectifiers are becoming increasingly popular due to their superior empirical performance (Nair and Hinton, 2010; Glorot et al., 2011; Zeiler et al., 2013; Dahl et al., 2013). # 6. Experiments We evaluate GProp on three tasks: two highly nonlinear contextual bandit tasks constructed from benchmark datasets for nonparametric regression, and the octopus arm. We do not evaluate GProp on other standard reinforcement learning benchmarks such as Mountain Car, Pendulum or Puddle World, since these can already be handled by linear actor-critic algorithms. The contribution of GProp is the ability to learn representations suited to nonlinear problems. | 1509.03005#46 | 1509.03005#48 | 1509.03005 | [
"1502.02251"
] |
1509.03005#48 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Cloning and replay. Temporal difference learning can be unstable when run over a neural network. A recent innovation introduced in (Mnih et al., 2015) that stabilizes TD-learning is to clone a separate network $Q^{\tilde{V}}$ to compute the targets $r_t + \gamma Q^{\tilde{V}}(s_{t+1})$. The parameters of the cloned network are updated periodically. We implement a similar modification of the TDG-error in Algorithm 1. We also use experience replay (Mnih et al., 2015). GProp is well-suited to replay, since the critic and deviator can learn values and gradients over the full range of previously observed state-action pairs offline. | 1509.03005#47 | 1509.03005#49 | 1509.03005 | [
"1502.02251"
] |
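A minimal sketch of the cloning and replay machinery described above: a bounded replay buffer of transitions and a periodically synchronized frozen copy of the critic used for targets. The class and method names (for example critic.value) are assumptions for illustration, not the paper's Theano code.

```python
import copy
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # oldest transitions are dropped

    def add(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

class ClonedCritic:
    """Holds a frozen copy of the critic for computing r + gamma * Q_clone(s')."""
    def __init__(self, critic, sync_every=1000):
        self.critic = critic
        self.clone = copy.deepcopy(critic)
        self.sync_every, self.steps = sync_every, 0

    def target(self, r, s_next, gamma):
        return r + gamma * self.clone.value(s_next)   # targets from the frozen clone

    def step(self):
        self.steps += 1
        if self.steps % self.sync_every == 0:         # periodic hard update, as in DQN
            self.clone = copy.deepcopy(self.critic)
```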
1509.03005#49 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Cloning and replay were also applied to COPDAC-Q. Both algorithms were implemented in Theano (Bergstra et al., 2010; Bastien et al., 2012). # 6.1 Contextual Bandit Tasks The goal of the contextual bandit tasks is to probe the ability of reinforcement learning algorithms to accurately estimate gradients. The experimental setting may thus be of independent interest. Figure 1: Performance on contextual bandit tasks (SARCOS and Barrett panels). The reward (negative normalized test MSE) for 10 runs is shown and averaged (thick lines). Performance variation for GProp is barely visible. Epochs refer to multiples of the dataset; algorithms are ultimately trained on the same number of random samples for both datasets. | 1509.03005#48 | 1509.03005#50 | 1509.03005 | [
"1502.02251"
] |
1509.03005#50 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Description. We converted two robotics datasets, SARCOS³ and Barrett WAM⁴, into contextual bandit problems via the supervised-to-contextual-bandit transform in (Dudík et al., 2014). The datasets have 44,484 and 12,000 training points respectively, both with 21 features corresponding to the positions, velocities and accelerations of seven joints. Labels are 7-dimensional vectors corresponding to the torques of the 7 joints. In the contextual bandit task, the agent samples 21-dimensional state vectors i.i.d. from either the SARCOS or Barrett training data and executes 7-dimensional actions. The reward $r(s,a) = -\lVert y(s) - a \rVert^2$ is the negative mean-square distance from the action to the label. Note that the reward is a scalar, whereas the correct label is a 7-dimensional vector. The gradient of the reward | 1509.03005#49 | 1509.03005#51 | 1509.03005 | [
"1502.02251"
] |
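A sketch of the supervised-to-contextual-bandit conversion described above: states are drawn i.i.d. from the regression inputs and the learner only ever sees the scalar reward, never the 7-dimensional label. The function and variable names are illustrative assumptions.

```python
import numpy as np

def bandit_reward(y, a):
    """Scalar bandit feedback: negative squared distance to the hidden label."""
    return -np.sum((y - a) ** 2)

def sample_bandit_round(X, Y, policy, sigma, rng=np.random):
    """One contextual-bandit interaction built from a regression dataset (X, Y)."""
    i = rng.randint(len(X))
    s, y = X[i], Y[i]                            # 21-dim state, hidden 7-dim label
    a = policy(s) + sigma * rng.randn(len(y))    # perturbed 7-dim action
    return s, a, bandit_reward(y, a)             # the label y is never revealed
```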
1509.03005#51 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | $\tfrac{1}{2}\nabla_a r(s, a) = y(s) - a$ is the direction from the action to the correct label. In the supervised setting, the gradient can be computed. In the bandit setting, the reward is a zeroth-order black box. The agent thus receives far less information in the bandit setting than in the fully supervised setting. Intuitively, the negative distance $r(s, a)$ "tells" the algorithm that the correct label lies on the surface of a sphere in the 7-dimensional action space that is centred on the most recent action. By contrast, in the supervised setting, the algorithm is given the position of the label in the action space. In the bandit setting, the algorithm must estimate the position of the label on the surface of the sphere. Equivalently, the algorithm must estimate the label's direction relative to the center of the sphere – which is given by the gradient of the value function. 3. Taken from www.gaussianprocess.org/gpml/data/. 4. Taken from http://www.ausy.tu-darmstadt.de/Miscellaneous/Miscellaneous. | 1509.03005#50 | 1509.03005#52 | 1509.03005 | [
"1502.02251"
] |
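The geometric argument above, recovering the label's direction from scalar rewards, is what generic zeroth-order (perturbation-based) gradient estimation does. The sketch below shows the standard smoothed estimator E[r(s, a + eps) * eps] / sigma^2 for Gaussian perturbations; it illustrates the principle rather than GProp's learned Deviator.

```python
import numpy as np

def zeroth_order_gradient(reward_fn, s, a, sigma=0.1, n_samples=1000, rng=np.random):
    """Monte-Carlo estimate of grad_a r(s, a) from reward evaluations only."""
    grad = np.zeros_like(a)
    for _ in range(n_samples):
        eps = sigma * rng.randn(*a.shape)
        grad += reward_fn(s, a + eps) * eps      # reward-weighted perturbation
    return grad / (n_samples * sigma ** 2)

# For r(s, a) = -||y - a||^2 the true gradient is 2 * (y - a), so the
# estimate points from the action towards the hidden label.
```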
1509.03005#52 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | The goal of the contextual bandit task is thus to simultaneously solve seven nonparametric regression problems when observing distances-to-labels instead of directly observing labels. The value function is relatively easy to learn in the contextual bandit setting since the task is not sequential. However, both the value function and its gradient are highly nonlinear, and it is precisely the gradient that specifies where labels lie on the spheres. Network architectures. GProp and COPDAC-Q were implemented on an actor and deviator network of two layers (300 and 100 rectifiers) each and a critic with hidden layers of 100 and 10 rectifiers. | 1509.03005#51 | 1509.03005#53 | 1509.03005 | [
"1502.02251"
] |
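A sketch of the three feedforward networks described above (actor and deviator with 300 and 100 rectifiers, critic with 100 and 10 rectifiers) in plain NumPy. Input conventions, initialization and output units are assumptions rather than the paper's exact settings.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def init_mlp(sizes, rng=np.random):
    """Weights for a fully connected rectifier network with the given layer sizes."""
    return [(rng.randn(m, n) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:       # rectifiers on hidden layers, linear output
            x = relu(x)
    return x

state_dim, action_dim = 21, 7
actor    = init_mlp([state_dim, 300, 100, action_dim])  # mu_Theta(s): action
deviator = init_mlp([state_dim, 300, 100, action_dim])  # G_W(s): value-gradient estimate
critic   = init_mlp([state_dim, 100, 10, 1])            # Q_V(s): scalar value
```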
1509.03005#53 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Updates were computed via RMSProp with momentum. The variance of the Gaussian noise $\sigma^2$ was set to decrease linearly from $\sigma^2 = 1.0$ until reaching $\sigma^2 = 0.1$, at which point it remained fixed. Performance. Figure 1 compares the test-set performance of policies learned by GProp against COPDAC-Q. The final policies trained by GProp achieved average mean-square test error of 0.013 and 0.014 on the seven SARCOS and Barrett benchmarks respectively. Remarkably, GProp is competitive with fully-supervised nonparametric regression algorithms on the SARCOS and Barrett datasets, see Figure 2bc in (Nguyen-Tuong et al., 2008) and the results in (Kpotufe and Boularias, 2013; Trivedi et al., 2014). It is important to note that the results reported in those papers are for algorithms that are given the labels and that solve one regression problem at a time. To the best of our knowledge, there are no prior examples of a bandit or reinforcement learning algorithm that is competitive with fully supervised methods on regression datasets. For comparison, we implemented Backprop on the Actor-network under full supervision. Backprop converged to .006 and .005 on SARCOS and BARRETT, compared to 0.013 and 0.014 for GProp. Note that Backprop is trained on 7-dim labels whereas GProp receives 1-dim rewards. | 1509.03005#52 | 1509.03005#54 | 1509.03005 | [
"1502.02251"
] |
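The exploration schedule described above as a small helper: the noise variance decays linearly from 1.0 to a floor of 0.1 and then stays fixed. The number of decay steps is a placeholder.

```python
def sigma_squared(step, decay_steps=100_000, start=1.0, floor=0.1):
    """Linearly decaying exploration variance with a hard floor."""
    if step >= decay_steps:
        return floor
    return start + (floor - start) * step / decay_steps
```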
1509.03005#54 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Figure 2: Gradient estimates for contextual bandit tasks (SARCOS and Barrett panels). The normalized MSE of the gradient estimates compared against the true gradients is shown for 10 runs of COPDAC-Q and GProp, along with their averages (thick lines). | 1509.03005#53 | 1509.03005#55 | 1509.03005 | [
"1502.02251"
] |
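A hedged sketch of the quantity plotted in Figure 2, the mean-square error between estimated and true value gradients; the exact normalization used in the figure is an assumption.

```python
import numpy as np

def normalized_gradient_mse(grad_estimates, true_gradients):
    """Mean over samples of 0.5 * ||g_hat - g||^2, normalized by the true gradient scale."""
    sq_err = 0.5 * np.sum((grad_estimates - true_gradients) ** 2, axis=1)
    scale = 0.5 * np.sum(true_gradients ** 2, axis=1) + 1e-12
    return np.mean(sq_err / scale)
```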
1509.03005#55 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Accuracy of gradient-estimates. The true value-gradients can be computed and compared with the algorithm's estimates on the contextual bandit task. Fig. 2 shows the performance of the two algorithms. GProp's gradient-error converges to < 0.005 on both tasks. COPDAC-Q's gradient estimate, implicit in the advantage function, converges to 0.03 (SARCOS) and 0.07 (BARRETT). This confirms that GProp yields significantly better gradient estimates. COPDAC-Q's estimates are significantly worse for Barrett compared to SARCOS, in line with the worse performance of COPDAC-Q on Barrett in Fig. 1. It is unclear why COPDAC-Q's gradient estimate gets worse on Barrett for some period of time. On the other hand, since there are no guarantees on COPDAC-Q's estimates, its erratic behavior is perhaps not surprising. Comparison with bandit task in (Silver et al., 2014). Note that although the contextual bandit problems investigated here are lower-dimensional (with 21-dimensional state spaces and 7-dimensional action spaces) than the bandit problem in (Silver et al., 2014) (with no state space and 10, 25 and 50-dimensional action spaces), they are nevertheless much harder. The optimal action in that bandit problem, in all cases, is the constant vector [4, . . . , 4] consisting of only 4s. In contrast, SARCOS and BARRETT are nontrivial benchmarks even when fully supervised. # 6.2 Octopus Arm The octopus arm task is a challenging environment that is high-dimensional, sequential and highly nonlinear. Description. The objective is to learn to hit a target with a simulated octopus arm (Engel et al., 2005).⁵ Settings are taken from (Silver et al., 2014). Importantly, the action-space is not simplified using "macro-actions". | 1509.03005#54 | 1509.03005#56 | 1509.03005 | [
"1502.02251"
] |
1509.03005#56 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | The arm has C = 6 compartments attached to a rotating base. There are 50 = 8C + 2 state variables (x, y position/velocity of nodes along the upper/lower side of the arm; angular position/velocity of the base) and 20 = 3C + 2 action variables controlling the clockwise and counter-clockwise rotation of the base and three muscles per compartment. After each step, the agent receives a reward of $10 \cdot \Delta_{\text{dist}}$, where $\Delta_{\text{dist}}$ is the change in distance between the arm and the target. The final reward is +50 if the agent hits the target. An episode ends when the target is hit or after 300 steps. The arm initializes at eight positions relative to the target: ±45°, ±75°, ±105°, ±135°. See Appendix B for more details. Network architectures. We applied GProp to an actor-network with 100 hidden rectifiers and linear output units clipped to lie in [0, 1]; and critic and deviator networks both with two hidden layers of 100 and 40 rectifiers, and linear output units. Updates were computed via RMSProp with a step rate of 10⁻⁴, moving-average decay and Nesterov momentum (Hinton et al., 2012) penalties of 0.9 and 0.9 respectively, and a discount rate γ of 0.95. 5. Simulator taken from http://reinforcementlearningproject.googlecode.com/svn/trunk/FoundationsOfAI/ octopus-arm-simulator/octopus/ | 1509.03005#55 | 1509.03005#57 | 1509.03005 | [
"1502.02251"
] |
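The state and action dimensions quoted above follow directly from the number of compartments; the check below is purely illustrative.

```python
C = 6                         # compartments attached to the rotating base
state_dim = 8 * C + 2         # node positions/velocities plus base angle/velocity
action_dim = 3 * C + 2        # three muscles per compartment plus two base rotations
assert (state_dim, action_dim) == (50, 20)
```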
1509.03005#57 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Figure 3: Performance on the octopus arm task. Ten runs of GProp and COPDAC-Q on a 6-segment octopus arm with 20 action and 50 state dimensions. Thick lines depict average values. Left panel: number of steps per episode for the arm to reach the target. Right panel: corresponding average reward per step. | 1509.03005#56 | 1509.03005#58 | 1509.03005 | [
"1502.02251"
] |
1509.03005#58 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | The variance of the Gaussian noise was initialized to $\sigma^2 = 1.0$. An explore/exploit tradeoff was implemented as follows. When the arm hit the target in more than 300 steps, we set $\sigma^2 \leftarrow \sigma^2 \cdot 1.3$; otherwise $\sigma^2 \leftarrow \sigma^2 / 1.3$. A hard lower bound was fixed at $\sigma^2 = 0.3$. We implemented COPDAC-Q on a variety of architectures; the best results are shown (also please see Figure 3 in (Silver et al., 2014)). They were obtained using a similar architecture to GProp, with sigmoidal hidden units and sigmoidal output units for the actor. Linear, rectilinear and clipped-linear output units were also tried. As for GProp, cloning and experience replay were used to increase stability. Performance. Figure 3 shows the steps-to-target and average-reward-per-step on ten training runs. GProp converges rapidly and reliably (within ±170,000 steps) to a stable policy that uses fewer than 50 steps to hit the target on average (see supplementary video for examples of the final policy in action). GProp converges quicker, and to a better solution, than COPDAC-Q. The reader is strongly encouraged to compare our results with those reported in (Silver et al., 2014). To the best of our knowledge, GProp achieves the best performance to date on the octopus arm task. Stability. It is clear from the variability displayed in the figures that both the policy and the gradients learned by GProp are more stable than those of COPDAC-Q. Note that the higher variability exhibited by GProp in the right-hand panel of Fig. 3 (rewards-per-step) is misleading. It arises because dividing by the number of steps – which is lower for GProp since it hits the target more quickly after training – inflates GProp's apparent variability. # 7. Conclusion | 1509.03005#57 | 1509.03005#59 | 1509.03005 | [
"1502.02251"
] |
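The explore/exploit heuristic described above, written out as an update rule: the variance grows by a factor of 1.3 whenever an episode fails to reach the target within the step limit, shrinks otherwise, and never drops below 0.3.

```python
def adapt_variance(sigma_sq, steps_to_target, step_limit=300,
                   factor=1.3, floor=0.3):
    """Multiplicative explore/exploit adjustment of the exploration variance."""
    if steps_to_target > step_limit:
        sigma_sq *= factor        # slow or failed episode: explore more
    else:
        sigma_sq /= factor        # success: exploit more
    return max(sigma_sq, floor)
```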
1509.03005#59 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Value-Gradient Backpropagation (GProp) is the first deep reinforcement learning algorithm with compatible function approximation for continuous policies. It builds on the deterministic actor-critic, COPDAC-Q, developed in (Silver et al., 2014) with two decisive modifications. First, we incorporate an explicit estimate of the value gradient into the algorithm. Second, we construct a model that decouples the internal structure of the actor, critic, and deviator – so that all three can be trained via backpropagation. GProp achieves state-of-the-art performance on two contextual bandit problems where it simultaneously solves seven regression problems without observing labels. Note that GProp is competitive with recent fully supervised methods that solve a single regression problem at a time. Further, GProp outperforms the prior state-of-the-art on the octopus arm task, quickly converging onto policies that rapidly and fluidly hit the target. | 1509.03005#58 | 1509.03005#60 | 1509.03005 | [
"1502.02251"
] |
1509.03005#60 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Acknowledgements. We thank Nicolas Heess for sharing the settings of the octopus arm experiments in (Silver et al., 2014). # References Adrian K Agogino and Kagan Tumer. Unifying Temporal and Structural Credit Assignment Problems. In AAMAS, 2004. Adrian K Agogino and Kagan Tumer. Analyzing and Visualizing Multiagent Rewards in Dynamic and Stochastic Environments. Journal of Autonomous Agents and Multi-Agent Systems, 17(2):320â 338, 2008. L C Baird. | 1509.03005#59 | 1509.03005#61 | 1509.03005 | [
"1502.02251"
] |
1509.03005#61 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995. David Balduzzi. Deep Online Convex Optimization by Putting Forecaster to Sleep. arXiv:1509.01851, 2015. In David Balduzzi, Hastagiri Vanchinathan, and Joachim Buhmann. Kickback cuts Backpropâ s red-tape: Biologically plausible credit assignment in neural networks. In AAAI, 2015. Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike Adapative Elements That Can Solve Diï¬ cult Learning Control Problems. IEEE Trans. Systems, Man, Cyb, 13(5):834â 846, 1983. F Bastien, P Lamblin, R Pascanu, J Bergstra, I Goodfellow, A Bergeron, N Bouchard, and Y Bengio. Theano: new features and speed improvements. In NIPS Workshop: Deep Learning and Unsupervised Feature Learning, 2012. J Bergstra, O Breuleux, F Bastien, P Lamblin, R Pascanu, G Desjardins, J Turian, D Warde- Farley, and Yoshua Bengio. Theano: A CPU and GPU Math Expression Compiler. In Proc. Python for Scientiï¬ c Comp. Conf. (SciPy), 2010. | 1509.03005#60 | 1509.03005#62 | 1509.03005 | [
"1502.02251"
] |
1509.03005#62 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | George E Dahl, Tara N Sainath, and Geoffrey Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP), 2013. Christoph Dann, Gerhard Neumann, and Jan Peters. Policy Evaluation with Temporal Differences: A Survey and Comparison. JMLR, 15:809–883, 2014. Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. | 1509.03005#61 | 1509.03005#63 | 1509.03005 | [
"1502.02251"
] |
1509.03005#63 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | A Survey on Policy Search for Robotics. Foundations and Trends in Machine Learning, 2(1-2):1â 142, 2011. Miroslav Dud´ık, Dumitru Erhan, John Langford, and Lihong Li. Doubly Robust Policy Evaluation and Optimization. Statistical Science, 29(4):485â 511, 2014. Y Engel, P Szab´o, and D Volkinshtein. | 1509.03005#62 | 1509.03005#64 | 1509.03005 | [
"1502.02251"
] |
1509.03005#64 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Learning to control an octopus arm with gaussian process temporal diï¬ erence methods. In NIPS, 2005. Michael Fairbank and Eduardo Alonso. Value-Gradient Learning. In IEEE World Confer- ence on Computational Intelligence (WCCI), 2012. Michael Fairbank, Eduardo Alonso, and Daniel V Prokhorov. An Equivalence Between Adaptive Dynamic Programming With a Critic and Backpropagation Through Time. IEEE Trans. Neur. Net., 24(12):2088â 2100, 2013. | 1509.03005#63 | 1509.03005#65 | 1509.03005 | [
"1502.02251"
] |
1509.03005#65 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Abraham Flaxman, Adam Kalai, and H Brendan McMahan. Online convex optimization in the bandit setting: Gradient descent without a gradient. In SODA, 2005. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep Sparse Rectiï¬ er Neural Networks. In Proc. 14th Int Conference on Artiï¬ cial Intelligence and Statistics (AISTATS), 2011. Carlos Guestrin, Michail Lagoudakis, and Ronald Parr. Coordinated Reinforcement Learn- ing. In ICML, 2002. | 1509.03005#64 | 1509.03005#66 | 1509.03005 | [
"1502.02251"
] |
1509.03005#66 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Roland Hafner and Martin Riedmiller. Reinforcement learning in feedback control: Chal- lenges and benchmarks from technical process control. Machine Learning, 84:137â 169, 2011. G Hinton, Nitish Srivastava, and Kevin Swersky. Lecture 6a: Overview of minibatch gra- dient descent. 2012. Chris HolmesParker, Adrian K Agogino, and Kagan Tumer. Combining Reward Shaping and Hierarchies for Scaling to Large Multiagent Systems. The Knowledge Engineering Review, 2014. Michael I Jordan and R A Jacobs. Learning to control an unstable system with forward modeling. In NIPS, 1990. | 1509.03005#65 | 1509.03005#67 | 1509.03005 | [
"1502.02251"
] |
1509.03005#67 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Sham Kakade. A natural policy gradient. In NIPS, 2001. Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In NIPS, 2000. Samory Kpotufe and Abdeslam Boularias. Gradient Weights help Nonparametric Regres- sors. In Advances in Neural Information Processing Systems (NIPS), 2013. Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-End Training of Deep Visuomotor Policies. arXiv:1504.00702, 2015. | 1509.03005#66 | 1509.03005#68 | 1509.03005 | [
"1502.02251"
] |
1509.03005#68 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. | 1509.03005#67 | 1509.03005#69 | 1509.03005 | [
"1502.02251"
] |
1509.03005#69 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Human-level control through deep reinforcement learning. Nature, 518(7540):529â 533, 02 2015. Vinod Nair and Geoï¬ rey Hinton. Rectiï¬ ed Linear Units Improve Restricted Boltzmann Machines. In ICML, 2010. A S Nemirovski and D B Yudin. Problem complexity and method eï¬ ciency in optimization. Wiley-Interscience, 1983. Duy Nguyen-Tuong, Jan Peters, and Matthias Seeger. | 1509.03005#68 | 1509.03005#70 | 1509.03005 | [
"1502.02251"
] |
1509.03005#70 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Local Gaussian Process Regression for Real Time Online Model Learning. In NIPS, 2008. Jan Peters and Stefan Schaal. Policy Gradient Methods for Robotics. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., 2006. Daniel V Prokhorov and Donald C Wunsch. Adaptive Critic Designs. IEEE Trans. Neur. Net., 8(5):997â 1007, 1997. Maxim Raginsky and Alexander Rakhlin. Information-Based Complexity, Feedback and Dynamics in Convex Programming. IEEE Trans. Inf. Theory, 57(10):7036â 7056, 2011. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Ried- miller. Deterministic Policy Gradient Algorithms. In ICML, 2014. | 1509.03005#69 | 1509.03005#71 | 1509.03005 | [
"1502.02251"
] |
1509.03005#71 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Nitish Srivastava, Geoï¬ rey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhut- dinov. Dropout: A Simple Way to Prevent Neural Networks from Overï¬ tting. JMLR, 15:1929â 1958, 2014. R S Sutton and A G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. Richard Sutton, David McAllester, Satinder Singh, and Yishay Mansour. | 1509.03005#70 | 1509.03005#72 | 1509.03005 | [
"1502.02251"
] |
1509.03005#72 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1999. Richard Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesv´ari, and Eric Wiewiora. Fast Gradient-Descent Methods for Temporal-Diï¬ erence Learning with Linear Function Approximation. In ICML, 2009a. Richard Sutton, Csaba Szepesv´ari, and Hamid Reza Maei. A convergent O(n) algorithm for oï¬ -policy temporal-diï¬ erence learning with linear function approximation. In Adv in Neural Information Processing Systems (NIPS), 2009b. Shubhendu Trivedi, Jialei Wang, Samory Kpotufe, and Gregory Shakhnarovich. A Consis- tent Estimator of the Expected Gradient Outerproduct. In UAI, 2014. | 1509.03005#71 | 1509.03005#73 | 1509.03005 | [
"1502.02251"
] |
1509.03005#73 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | John Tsitsiklis and Benjamin Van Roy. An Analysis of Temporal-Difference Learning with Function Approximation. IEEE Trans. Aut. Control, 42(5):674–690, 1997. Niklas Wahlström, Thomas B. Schön, and Marc Peter Deisenroth. From Pixels to Torques: Policy Learning with Deep Dynamical Models. arXiv:1502.02251, 2015. Y Wang and J Si. On-line learning control by association and reinforcement. IEEE Trans. Neur. Net., 12(2):264–276, 2001. Ronald J Williams. | 1509.03005#72 | 1509.03005#74 | 1509.03005 | [
"1502.02251"
] |
1509.03005#74 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8:229–256, 1992. M D Zeiler, M Ranzato, R Monga, M Mao, K Yang, Q V Le, P Nguyen, A Senior, V Vanhoucke, J Dean, and G Hinton. On Rectified Linear Units for Speech Processing. In ICASSP, 2013. # Appendices # A. Explicit weight updates under GProp It is instructive to describe the weight updates under GProp more explicitly. Let $\theta^j$, $w^j$ and $v^j$ denote the weight vector of unit $j$, according to whether it belongs to the actor, deviator or critic network. Similarly, $\phi^j(s)$ denotes the input to unit $j$ and $\psi^j$ its influence on the network's output layer, where the influence is vector-valued for actor and deviator networks and scalar-valued for the critic network. Weight updates in the deviator-actor-critic model, where all three networks consist of rectifier units performing stochastic gradient descent, are then per Algorithm 3. Units that are not active on a round do not update their weights that round. Algorithm 3: GProp: Explicit updates. for rounds $t = 1, 2, \ldots, T$ do: Network gets state $s_t$, responds, and computes $\xi_t \leftarrow r_t + \gamma Q^{V_t}(s_{t+1}) - Q^{V_t}(s_t)$; for unit $j = 1, 2, \ldots, n$ do: if $j$ is an active actor unit then $\theta^j_{t+1} \leftarrow \theta^j_t + \eta^a_t \cdot \langle G^{W_t}(s_t), \psi^j \rangle \cdot \phi^j(s_t)$ // backpropagate $G^W$; else if $j$ is an active critic unit then $v^j_{t+1} \leftarrow v^j_t + \eta^c_t \cdot \langle \xi_t, \psi^j \rangle \cdot \phi^j(s_t)$ // backpropagate $\xi$; else if $j$ is an active deviator unit then $w^j_{t+1} \leftarrow w^j_t + \eta^d_t \cdot \langle \xi_t \cdot \epsilon, \psi^j \rangle \cdot \phi^j(s_t)$ // backpropagate $\xi \cdot \epsilon$ | 1509.03005#73 | 1509.03005#75 | 1509.03005 | [
"1502.02251"
] |
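A sketch of the per-unit updates of Algorithm 3 for a single round, with the inputs phi[j], influences psi[j], perturbation eps, error xi and gradient estimate G_W supplied from outside. The learning rate and data layout are assumptions layered on the reconstructed pseudocode above.

```python
import numpy as np

def gprop_unit_updates(active, kind, weights, phi, psi, xi, eps, G_W, lr):
    """Per-unit GProp updates for one round. kind[j] in {'actor', 'critic', 'deviator'}."""
    for j in range(len(weights)):
        if not active[j]:
            continue                                    # inactive rectifier units do not update
        if kind[j] == 'actor':
            # backpropagate the gradient estimate G^W through the unit's influence
            weights[j] += lr * np.dot(G_W, psi[j]) * phi[j]
        elif kind[j] == 'critic':
            # backpropagate the TDG-error xi (scalar influence for critic units)
            weights[j] += lr * xi * psi[j] * phi[j]
        elif kind[j] == 'deviator':
            # backpropagate the perturbed error xi * eps
            weights[j] += lr * np.dot(xi * eps, psi[j]) * phi[j]
    return weights
```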
1509.03005#75 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | Listing 1 Physical description and task setting for the octopus arm (setting.xml). <constants> <frictionTangential>0.4</frictionTangential> <frictionPerpendicular>1</frictionPerpendicular> <pressure>10</pressure> <gravity>0.01</gravity> <surfaceLevel>5</surfaceLevel> <buoyancy>0.08</buoyancy> <muscleActive>0.1</muscleActive> <musclePassive>0.04</musclePassive> <muscleNormalizedMinLength>0.1</muscleNormalizedMinLength> <muscleDamping>- | 1509.03005#74 | 1509.03005#76 | 1509.03005 | [
"1502.02251"
] |
1509.03005#76 | Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies | 1</muscleDamping> <repulsionConstant>.01</repulsionConstant> <repulsionPower>1</repulsionPower> <repulsionThreshold>0.7</repulsionThreshold> <torqueCoefficient>0.025</torqueCoefficient> <targetTask timeLimit="300" stepReward="1"> <target position="-3.25 -3.25" reward="50" /> </targetTask> | 1509.03005#75 | 1509.03005 | [
"1502.02251"
] |
|
1509.02971#0 | Continuous control with deep reinforcement learning | arXiv:1509.02971v6 [cs.LG] 5 Jul 2019. Published as a conference paper at ICLR 2016 # CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING Timothy P. Lillicrap†, Jonathan J. Hunt†, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver & Daan Wierstra Google Deepmind London, UK {countzero, jjhunt, apritzel, heess, etom, tassa, davidsilver, wierstra} @ google.com # ABSTRACT We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies "end-to-end": directly from raw pixel inputs. # INTRODUCTION One of the primary goals of the field of artificial intelligence is to solve complex tasks from unprocessed, high-dimensional, sensory input. Recently, significant progress has been made by combining advances in deep learning for sensory processing (Krizhevsky et al., 2012) with reinforcement learning, resulting in the "Deep Q Network" (DQN) algorithm (Mnih et al., 2015) that is capable of human level performance on many Atari video games using unprocessed pixels for input. To do so, deep neural network function approximators were used to estimate the action-value function. However, while DQN solves problems with high-dimensional observation spaces, it can only handle discrete and low-dimensional action spaces. | 1509.02971#1 | 1509.02971 | [
"1502.03167"
] |
|
1509.02971#1 | Continuous control with deep reinforcement learning | Many tasks of interest, most notably physical control tasks, have continuous (real-valued) and high-dimensional action spaces. DQN cannot be straightforwardly applied to continuous domains since it relies on finding the action that maximizes the action-value function, which in the continuous-valued case requires an iterative optimization process at every step. An obvious approach to adapting deep reinforcement learning methods such as DQN to continuous domains is to simply discretize the action space. However, this has many limitations, most notably the curse of dimensionality: the number of actions increases exponentially with the number of degrees of freedom. For example, a 7 degree of freedom system (as in the human arm) with the coarsest discretization $a_i \in$ | 1509.02971#0 | 1509.02971#2 | 1509.02971 | [
"1502.03167"
] |
1509.02971#2 | Continuous control with deep reinforcement learning | $\{-k, 0, k\}$ for each joint leads to an action space with dimensionality $3^7 = 2187$. The situation is even worse for tasks that require fine control of actions, as they require a correspondingly finer-grained discretization, leading to an explosion of the number of discrete actions. Such large action spaces are difficult to explore efficiently, and thus successfully training DQN-like networks in this context is likely intractable. Additionally, naive discretization of action spaces needlessly throws away information about the structure of the action domain, which may be essential for solving many problems. In this work we present a model-free, off-policy actor-critic algorithm using deep function approximators that can learn policies in high-dimensional, continuous action spaces. Our work is based | 1509.02971#1 | 1509.02971#3 | 1509.02971 | [
"1502.03167"
] |
1509.02971#3 | Continuous control with deep reinforcement learning | † These authors contributed equally. on the deterministic policy gradient (DPG) algorithm (Silver et al., 2014) (itself similar to NFQCA (Hafner & Riedmiller, 2011), and similar ideas can be found in (Prokhorov et al., 1997)). However, as we show below, a naive application of this actor-critic method with neural function approximators is unstable for challenging problems. Here we combine the actor-critic approach with insights from the recent success of Deep Q Network (DQN) (Mnih et al., 2013; 2015). Prior to DQN, it was generally believed that learning value functions using large, non-linear function approximators was diffi- | 1509.02971#2 | 1509.02971#4 | 1509.02971 | [
"1502.03167"
] |
1509.02971#4 | Continuous control with deep reinforcement learning | cult and unstable. DQN is able to learn value functions using such function approximators in a stable and robust way due to two innovations: 1. the network is trained off-policy with samples from a replay buffer to minimize correlations between samples; 2. the network is trained with a target Q network to give consistent targets during temporal difference backups. In this work we make use of the same ideas, along with batch normalization (Ioffe & Szegedy, 2015), a recent advance in deep learning. In order to evaluate our method we constructed a variety of challenging physical control problems that involve complex multi-joint movements, unstable and rich contact dynamics, and gait behavior. Among these are classic problems such as the cartpole swing-up problem, as well as many new domains. A long-standing challenge of robotic control is to learn an action policy directly from raw sensory input such as video. | 1509.02971#3 | 1509.02971#5 | 1509.02971 | [
"1502.03167"
] |
1509.02971#5 | Continuous control with deep reinforcement learning | Accordingly, we place a ï¬ xed viewpoint camera in the simulator and attempted all tasks using both low-dimensional observations (e.g. joint angles) and directly from pixels. Our model-free approach which we call Deep DPG (DDPG) can learn competitive policies for all of our tasks using low-dimensional observations (e.g. cartesian coordinates or joint angles) using the same hyper-parameters and network structure. In many cases, we are also able to learn good policies directly from pixels, again keeping hyperparameters and network structure constant 1. A key feature of the approach is its simplicity: it requires only a straightforward actor-critic archi- tecture and learning algorithm with very few â moving partsâ , making it easy to implement and scale to more difï¬ cult problems and larger networks. For the physical control problems we compare our results to a baseline computed by a planner (Tassa et al., 2012) that has full access to the underly- ing simulated dynamics and its derivatives (see supplementary information). Interestingly, DDPG can sometimes ï¬ nd policies that exceed the performance of the planner, in some cases even when learning from pixels (the planner always plans over the underlying low-dimensional state space). # 2 BACKGROUND We consider a standard reinforcement learning setup consisting of an agent interacting with an en- vironment E in discrete timesteps. At each timestep t the agent receives an observation xt, takes an action at and receives a scalar reward rt. In all the environments considered here the actions are real-valued at â | 1509.02971#4 | 1509.02971#6 | 1509.02971 | [
"1502.03167"
] |
1509.02971#6 | Continuous control with deep reinforcement learning | IRN . In general, the environment may be partially observed so that the entire history of the observation, action pairs st = (x1, a1, ..., atâ 1, xt) may be required to describe the state. Here, we assumed the environment is fully-observed so st = xt. An agentâ s behavior is defined by a policy, 7, which maps states to a probability distribution over the actions 7: S â P(A). The environment, E, may also be stochastic. We model it as a Markov decision process with a state space S, action space A = JRN,, an initial state distribution p(s1), transition dynamics p(s;41|s,, @,), and reward function r(s;, a¢). The return from a state is defined as the sum of discounted future reward Ry = an 9 r(si, ai) with a discounting factor y â ¬ [0, 1]. Note that the return depends on the actions chosen, and therefore on the policy 7, and may be stochastic. The goal in reinforcement learning is to learn a policy which maximizes the expected return from the start distribution J = E,, 5,.£,a;~7 [Ri]. We denote the discounted state visitation distribution for a policy 7 as pâ | 1509.02971#5 | 1509.02971#7 | 1509.02971 | [
"1502.03167"
] |
1509.02971#7 | Continuous control with deep reinforcement learning | . The action-value function is used in many reinforcement learning algorithms. It describes the ex- pected return after taking an action at in state st and thereafter following policy Ï : QÏ (st, at) = Eriâ ¥t,si>tâ ¼E,ai>tâ ¼Ï [Rt|st, at] (1) 1You can view a movie of some of the learned policies at https://goo.gl/J4PIAz 2 Published as a conference paper at ICLR 2016 Many approaches in reinforcement learning make use of the recursive relationship known as the Bellman equation: | 1509.02971#6 | 1509.02971#8 | 1509.02971 | [
"1502.03167"
] |
1509.02971#8 | Continuous control with deep reinforcement learning | Q" (81,41) = Evy seyioe [P(8e, ar) + YÂ¥ Ea en [Qâ (se41, ar41)]] (2) If the target policy is deterministic we can describe it as a function µ : S â A and avoid the inner expectation: Qµ(st, at) = Ert,st+1â ¼E [r(st, at) + γQµ(st+1, µ(st+1))] The expectation depends only on the environment. This means that it is possible to learn Qµ off- policy, using transitions which are generated from a different stochastic behavior policy β. Q-learning (Watkins & Dayan, 1992), a commonly used off-policy algorithm, uses the greedy policy µ(s) = arg maxa Q(s, a). We consider function approximators parameterized by θQ, which we optimize by minimizing the loss: 2 L(62) = Ey, p8 a~B,rewE [(Q(s1, ar) â y) | (4) where yt = r(st, at) + γQ(st+1, µ(st+1)|θQ). (5) While yt is also dependent on θQ, this is typically ignored. The use of large, non-linear function approximators for learning value or action-value functions has often been avoided in the past since theoretical performance guarantees are impossible, and prac- tically learning tends to be unstable. Recently, (Mnih et al., 2013; 2015) adapted the Q-learning algorithm in order to make effective use of large neural networks as function approximators. Their algorithm was able to learn to play Atari games from pixels. In order to scale Q-learning they intro- duced two major changes: the use of a replay buffer, and a separate target network for calculating yt. We employ these in the context of DDPG and explain their implementation in the next section. # 3 ALGORITHM It is not possible to straightforwardly apply Q-learning to continuous action spaces, because in con- tinuous spaces ï¬ nding the greedy policy requires an optimization of at at every timestep; this opti- mization is too slow to be practical with large, unconstrained function approximators and nontrivial action spaces. | 1509.02971#7 | 1509.02971#9 | 1509.02971 | [
"1502.03167"
] |
1509.02971#9 | Continuous control with deep reinforcement learning | Instead, here we used an actor-critic approach based on the DPG algorithm (Silver et al., 2014). The DPG algorithm maintains a parameterized actor function µ(s|θµ) which speciï¬ es the current policy by deterministically mapping states to a speciï¬ c action. The critic Q(s, a) is learned using the Bellman equation as in Q-learning. The actor is updated by following the applying the chain rule to the expected return from the start distribution J with respect to the actor parameters: Vow d © Esa [Vor Q(s, a|0%)| s=s,.0=n(s1|0")| Ex,x08 [VaQ(s; 210% )|s=se,a=pi(se) Vo, 4(5|9") |s=se] (6) Silver et al. (2014) proved that this is the policy gradient, the gradient of the policyâ s performance 2. As with Q learning, introducing non-linear function approximators means that convergence is no longer guaranteed. However, such approximators appear essential in order to learn and generalize on large state spaces. NFQCA (Hafner & Riedmiller, 2011), which uses the same update rules as DPG but with neural network function approximators, uses batch learning for stability, which is intractable for large networks. A minibatch version of NFQCA which does not reset the policy at each update, as would be required to scale to large networks, is equivalent to the original DPG, which we compare to here. | 1509.02971#8 | 1509.02971#10 | 1509.02971 | [
"1502.03167"
] |
1509.02971#10 | Continuous control with deep reinforcement learning | Our contribution here is to provide modiï¬ cations to DPG, inspired by the success of DQN, which allow it to use neural network function approximators to learn in large state and action spaces online. We refer to our algorithm as Deep DPG (DDPG, Algorithm 1). 2In practice, as in commonly done in policy gradient implementations, we ignored the discount in the state- visitation distribution Ï Î². 3 (3) Published as a conference paper at ICLR 2016 One challenge when using neural networks for reinforcement learning is that most optimization al- gorithms assume that the samples are independently and identically distributed. Obviously, when the samples are generated from exploring sequentially in an environment this assumption no longer holds. | 1509.02971#9 | 1509.02971#11 | 1509.02971 | [
"1502.03167"
] |
1509.02971#11 | Continuous control with deep reinforcement learning | Additionally, to make efï¬ cient use of hardware optimizations, it is essential to learn in mini- batches, rather than online. As in DQN, we used a replay buffer to address these issues. The replay buffer is a ï¬ nite sized cache R. Transitions were sampled from the environment according to the exploration policy and the tuple (st, at, rt, st+1) was stored in the replay buffer. When the replay buffer was full the oldest samples were discarded. At each timestep the actor and critic are updated by sampling a minibatch uniformly from the buffer. Because DDPG is an off-policy algorithm, the replay buffer can be large, allowing the algorithm to beneï¬ t from learning across a set of uncorrelated transitions. Directly implementing Q learning (equation|4) with neural networks proved to be unstable in many environments. Since the network Q(s,a|9@) being updated is also used in calculating the target value (equation|5), the Q update is prone to divergence. Our solution is similar to the target network used in (Mnih et al.| but modified for actor-critic and using â | 1509.02971#10 | 1509.02971#12 | 1509.02971 | [
"1502.03167"
] |
1509.02971#12 | Continuous control with deep reinforcement learning | softâ target updates, rather than directly copying the weights. We create a copy of the actor and critic networks, Qâ (s, ale2â ) and pe (s|o"â ) respectively, that are used for calculating the target values. The weights of these target networks are then updated by having them slowly track the learned networks: 0â + 70 + (1 â 7)â with r < 1. This means that the target values are constrained to change slowly, greatly improving the stability of learning. This simple change moves the relatively unstable problem of learning the action-value function closer to the case of supervised learning, a problem for which robust solutions exist. We found that having both a target yuâ and Qâ was required to have stable targets y; in order to consistently train the critic without divergence. This may slow learning, since the target network delays the propagation of value estimations. However, in practice we found this was greatly outweighed by the stability of learning. When learning from low dimensional feature vector observations, the different components of the observation may have different physical units (for example, positions versus velocities) and the ranges may vary across environments. This can make it difï¬ cult for the network to learn effec- tively and may make it difï¬ cult to ï¬ | 1509.02971#11 | 1509.02971#13 | 1509.02971 | [
"1502.03167"
] |
1509.02971#13 | Continuous control with deep reinforcement learning | nd hyper-parameters which generalise across environments with different scales of state values. One approach to this problem is to manually scale the features so they are in similar ranges across environments and units. We address this issue by adapting a recent technique from deep learning called batch normalization (Ioffe & Szegedy, 2015). This technique normalizes each dimension across the samples in a minibatch to have unit mean and variance. In addition, it maintains a run- ning average of the mean and variance to use for normalization during testing (in our case, during exploration or evaluation). In deep networks, it is used to minimize covariance shift during training, by ensuring that each layer receives whitened input. In the low-dimensional case, we used batch normalization on the state input and all layers of the µ network and all layers of the Q network prior to the action input (details of the networks are given in the supplementary material). With batch normalization, we were able to learn effectively across many different tasks with differing types of units, without needing to manually ensure the units were within a set range. A major challenge of learning in continuous action spaces is exploration. An advantage of off- policies algorithms such as DDPG is that we can treat the problem of exploration independently from the learning algorithm. We constructed an exploration policy yuâ by adding noise sampled from a noise process NV to our actor policy | 1509.02971#12 | 1509.02971#14 | 1509.02971 | [
"1502.03167"
] |
1509.02971#14 | Continuous control with deep reinforcement learning | # H(s1) = psilOf) (7) N can be chosen to suit the environment. As detailed in the supplementary materials we used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) to generate temporally correlated exploration for exploration efï¬ ciency in physical control problems with inertia (similar use of auto- correlated noise was introduced in (Wawrzy´nski, 2015)). # 4 RESULTS We constructed simulated physical environments of varying levels of difï¬ culty to test our algorithm. This included classic reinforcement learning environments such as cartpole, as well as difï¬ cult, 4 | 1509.02971#13 | 1509.02971#15 | 1509.02971 | [
"1502.03167"
] |
1509.02971#15 | Continuous control with deep reinforcement learning | Published as a conference paper at ICLR 2016 # Algorithm 1 DDPG algorithm Randomly initialize critic network Q(s, a|9@) and actor :(s|0â ) with weights 0? and 6". Initialize target network Qâ and jxâ with weights 02â â 62, 9 ~ 9H Initialize replay buffer R for episode = 1,M do Initialize a random process N for action exploration Receive initial observation state s; fort=1,T do Select action a, = j1(s,|9") +N; according to the current policy and exploration noise Execute action a; and observe reward r; and observe new state 5,41 Store transition (s,, 41,7, 8:41) in R Sample a random minibatch of N transitions (s;,a;,7;, 5:41) from R Set yi = ri +7Q" (sin, Ml(si41|0"â )0%â ) Update critic by minimizing the loss: L = 4 0 ,(yi â Q(si, ai|92))â | 1509.02971#14 | 1509.02971#16 | 1509.02971 | [
"1502.03167"
] |
1509.02971#16 | Continuous control with deep reinforcement learning | Update the actor policy using the sampled policy gradient: 1 Vou) © x » VaQ(s, a|0%) |sâ s,,a=p(s,) Von ( 5/0") |, Update the target networks: 6? + 702 + (1â 7)6? OH! ON 4 (l- r)oH" # end for end for high dimensional tasks such as gripper, tasks involving contacts such as puck striking (canada) and locomotion tasks such as cheetah (Wawrzy´nski, 2009). In all domains but cheetah the actions were torques applied to the actuated joints. These environments were simulated using MuJoCo (Todorov et al., 2012). Figure 1 shows renderings of some of the environments used in the task (the supplementary contains details of the environments and you can view some of the learned policies at https://goo.gl/J4PIAz). In all tasks, we ran experiments using both a low-dimensional state description (such as joint angles and positions) and high-dimensional renderings of the environment. As in DQN (Mnih et al., 2013; 2015), in order to make the problems approximately fully observable in the high dimensional envi- ronment we used action repeats. For each timestep of the agent, we step the simulation 3 timesteps, repeating the agentâ s action and rendering each time. Thus the observation reported to the agent contains 9 feature maps (the RGB of each of the 3 renderings) which allows the agent to infer veloc- ities using the differences between frames. The frames were downsampled to 64x64 pixels and the 8-bit RGB values were converted to ï¬ oating point scaled to [0, 1]. See supplementary information for details of our network structure and hyperparameters. We evaluated the policy periodically during training by testing it without exploration noise. Figure 2 shows the performance curve for a selection of environments. We also report results with compo- nents of our algorithm (i.e. the target network or batch normalization) removed. In order to perform well across all tasks, both of these additions are necessary. In particular, learning without a target network, as in the original work with DPG, is very poor in many environments. | 1509.02971#15 | 1509.02971#17 | 1509.02971 | [
"1502.03167"
] |
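The "soft" target updates above move the target weights only a small fraction τ toward the learned weights at every step, which is what keeps the bootstrapped critic targets slowly changing. A minimal sketch, assuming the parameters are kept as dictionaries of NumPy arrays (an illustrative choice, not the paper's implementation):

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.001):
    """theta' <- tau * theta + (1 - tau) * theta' for every parameter tensor."""
    for name, theta in online_params.items():
        target_params[name] = tau * theta + (1.0 - tau) * target_params[name]
    return target_params

# Toy usage: two "networks" with a single weight matrix each.
online = {"W": np.ones((3, 3))}
target = {"W": np.zeros((3, 3))}
target = soft_update(target, online, tau=0.001)   # target["W"] is now 0.001 everywhere
```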
1509.02971#17 | Continuous control with deep reinforcement learning | Surprisingly, in some simpler tasks, learning policies from pixels is just as fast as learning using the low-dimensional state descriptor. This may be due to the action repeats making the problem simpler. It may also be that the convolutional layers provide an easily separable representation of state space, which is straightforward for the higher layers to learn on quickly. Table 1 summarizes DDPG's performance across all of the environments (results are averaged over 5 replicas). We normalized the scores using two baselines. | 1509.02971#16 | 1509.02971#18 | 1509.02971 | [
"1502.03167"
] |
1509.02971#18 | Continuous control with deep reinforcement learning | The first baseline is the mean return from a naive policy which samples actions from a uniform distribution over the valid action space. The second baseline is iLQG (Todorov & Li, 2005), a planning based solver with full access to the underlying physical model and its derivatives. We normalize scores so that the naive policy has a mean score of 0 and iLQG has a mean score of 1 (a small helper implementing this normalization is sketched below). DDPG is able to learn good policies on many of the tasks, and in many cases some of the replicas learn policies which are superior to those found by iLQG, even when learning directly from pixels. It can be challenging to learn accurate value estimates. Q-learning, for example, is prone to overestimating values (Hasselt, 2010). | 1509.02971#17 | 1509.02971#19 | 1509.02971 | [
"1502.03167"
] |
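The normalization described above is a simple affine rescaling of raw returns. The helper below is a hypothetical illustration (function and variable names are not from the paper): 0 corresponds to the uniform-random policy and 1 to the iLQG planner.

```python
def normalize_score(mean_return, random_return, ilqg_return):
    """Map a raw mean return onto the scale where random = 0 and iLQG = 1."""
    return (mean_return - random_return) / (ilqg_return - random_return)

# Example: an agent scoring halfway between the random policy and the planner gets 0.5.
print(normalize_score(50.0, random_return=0.0, ilqg_return=100.0))  # -> 0.5
```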
1509.02971#19 | Continuous control with deep reinforcement learning | We examined DDPG's estimates empirically by comparing the values estimated by Q after training with the true returns seen on test episodes (a sketch of this comparison appears below). Figure 3 shows that in simple tasks DDPG estimates returns accurately without systematic biases. For harder tasks the Q estimates are worse, but DDPG is still able to learn good policies. To demonstrate the generality of our approach we also include Torcs, a racing game where the actions are acceleration, braking and steering. Torcs has previously been used as a testbed in other policy learning approaches (Koutník et al., 2014b). We used an identical network architecture and learning algorithm hyper-parameters to the physics tasks but altered the noise process for exploration because of the very different time scales involved. On both low-dimensional inputs and pixels, some replicas were able to learn reasonable policies that are able to complete a circuit around the track though other replicas failed to learn a sensible policy. | 1509.02971#18 | 1509.02971#20 | 1509.02971 | [
"1502.03167"
] |
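Checking the critic in this way amounts to comparing Q(s_0, a_0) at the start of a test episode with the discounted return actually observed from that point on. A minimal sketch, with `critic` as a hypothetical callable and γ assumed to match the discount used during training:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Empirical return sum_k gamma^k * r_{t+k} observed on a test episode."""
    g, discount = 0.0, 1.0
    for r in rewards:
        g += discount * r
        discount *= gamma
    return g

def q_estimation_error(critic, episode, gamma=0.99):
    """Difference between the critic's estimate at the first step and the observed return."""
    (s0, a0), rewards = episode                 # initial state-action pair and the reward sequence
    return critic(s0, a0) - discounted_return(rewards, gamma)

# Toy usage with a stand-in critic.
critic = lambda s, a: 5.0
err = q_estimation_error(critic, ((np.zeros(3), np.zeros(2)), [1.0, 1.0, 1.0]))
```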
1509.02971#20 | Continuous control with deep reinforcement learning | Figure 1: Example screenshots of a sample of environments we attempt to solve with DDPG. In order from the left: the cartpole swing-up task, a reaching task, a grasp and move task, a puck-hitting task, a monoped balancing task, two locomotion tasks and Torcs (driving simulator). We tackle all tasks using both low-dimensional feature vector and high-dimensional pixel inputs. Detailed descriptions of the environments are provided in the supplementary. Movies of some of the learned policies are available at https://goo.gl/J4PIAz. | 1509.02971#19 | 1509.02971#21 | 1509.02971 | [
"1502.03167"
] |
1509.02971#21 | Continuous control with deep reinforcement learning | [Figure 2 panels: Cart, Pendulum Swing-up, Cartpole Swing-up, Fixed Reacher, Monoped Balancing, Gripper, Blockworld, Puck Shooting, Cheetah, Moving Gripper; x-axis: Million Steps — only axis-tick residue of the plots survived extraction.] Figure 2: Performance curves for a selection of domains using variants of DPG: original DPG algorithm (minibatch NFQCA) with batch normalization (light grey), with target network (dark grey), with target networks and batch normalization (green), with target networks from pixel-only inputs (blue). Target networks are crucial. # 5 RELATED WORK The original DPG paper evaluated the algorithm with toy problems using tile-coding and linear function approximators. It demonstrated data efficiency advantages for off-policy DPG over both on- and off-policy stochastic actor critic. It also solved one more challenging task in which a multi-jointed octopus arm had to strike a target with any part of the limb. However, that paper did not demonstrate scaling the approach to large, high-dimensional observation spaces as we have here. It has often been assumed that standard policy search methods such as those explored in the present work are simply too fragile to scale to difficult problems (Levine et al., 2015). Standard policy search | 1509.02971#20 | 1509.02971#22 | 1509.02971 | [
"1502.03167"
] |
1509.02971#22 | Continuous control with deep reinforcement learning | 6 Published as a conference paper at ICLR 2016 Pendulum Cartpole Cheetah o ral ov A a £ 4 | a uu Return Return Figure 3: Density plot showing estimated Q values versus observed returns sampled from test episodes on 5 replicas. In simple domains such as pendulum and cartpole the Q values are quite accurate. In more complex tasks, the Q estimates are less accurate, but can still be used to learn competent policies. Dotted line indicates unity, units are arbitrary. Table 1: Performance after training across all environments for at most 2.5 million steps. We report both the average and best observed (across 5 runs). All scores, except Torcs, are normalized so that a random agent receives 0 and a planning algorithm 1; for Torcs we present the raw reward score. We include results from the DDPG algorithn in the low-dimensional (lowd) version of the environment and high-dimensional (pix). For comparision we also include results from the original DPG algorithm with a replay buffer and batch normalization (cntrl). Rav,lowd Rbest,lowd Rbest,pix Rav,cntrl Rbest,cntrl -0.080 -0.139 0.125 -0.045 0.343 0.244 -0.468 0.197 0.143 0.583 -0.008 0.259 0.290 0.620 0.461 0.557 -0.031 0.078 0.198 0.416 0.099 0.231 0.204 -0.046 1.010 0.393 -911.034 is thought to be difï¬ | 1509.02971#21 | 1509.02971#23 | 1509.02971 | [
"1502.03167"
] |
1509.02971#23 | Continuous control with deep reinforcement learning | [Figure 3 panels: Pendulum, Cartpole, Cheetah — density plots of estimated Q values against observed returns; only axis-label residue survived extraction.] Figure 3: Density plot showing estimated Q values versus observed returns sampled from test episodes on 5 replicas. In simple domains such as pendulum and cartpole the Q values are quite accurate. In more complex tasks, the Q estimates are less accurate, but can still be used to learn competent policies. Dotted line indicates unity, units are arbitrary. Table 1: Performance after training across all environments for at most 2.5 million steps. We report both the average and best observed (across 5 runs). All scores, except Torcs, are normalized so that a random agent receives 0 and a planning algorithm 1; for Torcs we present the raw reward score. We include results from the DDPG algorithm in the low-dimensional (lowd) version of the environment and high-dimensional (pix). For comparison we also include results from the original DPG algorithm with a replay buffer and batch normalization (cntrl). [Of Table 1 itself only the column headers Rav,lowd, Rbest,lowd, Rbest,pix, Rav,cntrl, Rbest,cntrl and an unlabelled run of normalized scores (e.g. -0.080, -0.139, 0.125, ..., 1.010, 0.393, -911.034) survived extraction; the per-environment rows cannot be reconstructed from this fragment.] is thought to be diffi | 1509.02971#22 | 1509.02971#24 | 1509.02971 | [
"1502.03167"
] |
1509.02971#24 | Continuous control with deep reinforcement learning | cult because it deals simultaneously with complex environmental dynamics and a complex policy. Indeed, most past work with actor-critic and policy optimization approaches have had difficulty scaling up to more challenging problems (Deisenroth et al., 2013). Typically, this is due to instability in learning wherein progress on a problem is either destroyed by subsequent learning updates, or else learning is too slow to be practical. Recent work with model-free policy search has demonstrated that it may not be as fragile as previously supposed. Wawrzyński (2009); Wawrzyński & Tanwani (2013) have trained stochastic policies | 1509.02971#23 | 1509.02971#25 | 1509.02971 | [
"1502.03167"
] |
1509.02971#25 | Continuous control with deep reinforcement learning | in an actor-critic framework with a replay buffer. Concurrent with our work, Balduzzi & Ghifary (2015) extended the DPG algorithm with a 'deviator' network which explicitly learns ∂Q/∂a. However, they only train on two low-dimensional domains. Heess et al. (2015) introduced SVG(0) which also uses a Q-critic but learns a stochastic policy. DPG can be considered the deterministic limit of SVG(0). The techniques we described here for scaling DPG are also applicable to stochastic policies by using the reparametrization trick (Heess et al., 2015; Schulman et al., 2015a). Another approach, trust region policy optimization (TRPO) (Schulman et al., 2015b), directly constructs stochastic neural network policies without decomposing problems into optimal control and supervised phases. This method produces near monotonic improvements in return by making carefully chosen updates to the policy parameters, constraining updates to prevent the new policy from diverging too far from the existing policy. This approach does not require learning an action-value function, and (perhaps as a result) appears to be significantly less data efficient. | 1509.02971#24 | 1509.02971#26 | 1509.02971 | [
"1502.03167"
] |
1509.02971#26 | Continuous control with deep reinforcement learning | To combat the challenges of the actor-critic approach, recent work with guided policy search (GPS) algorithms (e.g., (Levine et al., 2015)) decomposes the problem into three phases that are relatively easy to solve: first, it uses full-state observations to create locally-linear approximations of the dynamics around one or more nominal trajectories, and then uses optimal control to find the locally-linear optimal policy along these trajectories; finally, it uses supervised learning to train a complex, non-linear policy (e.g. a deep neural network) to reproduce the state-to-action mapping of the optimized trajectories. This approach has several benefits, including data efficiency, and has been applied successfully to a variety of real-world robotic manipulation tasks using vision. In these tasks GPS uses a similar convolutional policy network to ours with 2 notable exceptions: 1. it uses a spatial softmax to reduce the dimensionality of visual features into a single (x, y) coordinate for each feature map (a sketch of such a layer appears below), and 2. the policy also receives direct low-dimensional state information about the configuration of the robot at the first fully connected layer in the network. Both likely increase the power and data efficiency of the algorithm and could easily be exploited within the DDPG framework. PILCO (Deisenroth & Rasmussen, 2011) uses Gaussian processes to learn a non-parametric, probabilistic model of the dynamics. Using this learned model, PILCO calculates analytic policy gradients and achieves impressive data efficiency in a number of control problems. However, due to the high computational demand, PILCO is 'impractical for high-dimensional problems' (Wahlström et al., 2015). It seems that deep function approximators are the most promising approach for scaling reinforcement learning to large, high-dimensional domains. Wahlström et al. (2015) used a deep dynamical model network along with model predictive control to solve the pendulum swing-up task from pixel input. They trained a differentiable forward model and encoded the goal state into the learned latent space. They use model-predictive control over the learned model to find a policy for reaching the target. However, this approach is only applicable to domains with goal states that can be demonstrated to the algorithm. Recently, evolutionary approaches have been used to learn competitive policies for Torcs from pixels using compressed weight parametrizations (Koutník et al., 2014a) or unsupervised learning (Koutník et al., 2014b) to reduce the dimensionality of the evolved weights. It is unclear how well these approaches generalize to other problems. # 6 CONCLUSION The work combines insights from recent advances in deep learning and reinforcement learning, resulting in an algorithm that robustly solves challenging problems across a variety of domains with continuous action spaces, even when using raw pixels for observations. As with most reinforcement learning algorithms, the use of non-linear function approximators nullifies any convergence guarantees; however, our experimental results demonstrate stable learning without the need for any modifications between environments. Interestingly, all of our experiments used substantially fewer steps of experience than was used by DQN learning to find solutions in the Atari domain. Nearly all of the problems we looked at were solved within 2.5 million steps of experience (and usually far fewer), a factor of 20 fewer steps than | 1509.02971#25 | 1509.02971#27 | 1509.02971 | [
"1502.03167"
] |
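The spatial softmax mentioned above turns each convolutional feature map into a single expected (x, y) image coordinate: a softmax over pixel locations gives a distribution per channel, and the feature point is the expectation of the pixel coordinates under that distribution. A small NumPy sketch of that idea (an illustration of the layer described in the GPS work, not code from either paper):

```python
import numpy as np

def spatial_softmax(features):
    """features: (C, H, W) activations -> (C, 2) expected (x, y) coordinates per channel."""
    C, H, W = features.shape
    flat = features.reshape(C, H * W)
    flat = flat - flat.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    ys, xs = np.mgrid[0:H, 0:W]                             # pixel coordinate grids
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)     # (H*W, 2)
    return probs @ coords                                   # expected coordinate per feature map

points = spatial_softmax(np.random.randn(32, 64, 64))       # 32 feature maps -> 32 (x, y) points
```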
1509.02971#27 | Continuous control with deep reinforcement learning | DQN requires for good Atari solutions. This suggests that, given more simulation time, DDPG may solve even more difficult problems than those considered here. A few limitations to our approach remain. Most notably, as with most model-free reinforcement approaches, DDPG requires a large number of training episodes to find solutions. However, we believe that a robust model-free approach may be an important component of larger systems which may attack these limitations (Gläscher et al., 2010). # REFERENCES Balduzzi, David and Ghifary, Muhammad. Compatible value gradients for reinforcement learning of continuous deep policies. arXiv preprint arXiv:1509.03005, 2015. Deisenroth, Marc and Rasmussen, Carl E. | 1509.02971#26 | 1509.02971#28 | 1509.02971 | [
"1502.03167"
] |
1509.02971#28 | Continuous control with deep reinforcement learning | Pilco: A model-based and data-efï¬ cient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML- 11), pp. 465â 472, 2011. Deisenroth, Marc Peter, Neumann, Gerhard, Peters, Jan, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1â 142, 2013. Gl¨ascher, Jan, Daw, Nathaniel, Dayan, Peter, and Oâ Doherty, John P. | 1509.02971#27 | 1509.02971#29 | 1509.02971 | [
"1502.03167"
] |
1509.02971#29 | Continuous control with deep reinforcement learning | States versus rewards: dis- sociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 66(4):585â 595, 2010. Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectiï¬ er networks. In Proceed- ings of the 14th International Conference on Artiï¬ cial Intelligence and Statistics. JMLR W&CP Volume, volume 15, pp. 315â 323, 2011. Hafner, Roland and Riedmiller, Martin. | 1509.02971#28 | 1509.02971#30 | 1509.02971 | [
"1502.03167"
] |
1509.02971#30 | Continuous control with deep reinforcement learning | Reinforcement learning in feedback control. Machine learning, 84(1-2):137â 169, 2011. Hasselt, Hado V. Double q-learning. In Advances in Neural Information Processing Systems, pp. 2613â 2621, 2010. Heess, N., Hunt, J. J, Lillicrap, T. P, and Silver, D. Memory-based control with recurrent neural networks. NIPS Deep Reinforcement Learning Workshop (arXiv:1512.04455), 2015. | 1509.02971#29 | 1509.02971#31 | 1509.02971 | [
"1502.03167"
] |
1509.02971#31 | Continuous control with deep reinforcement learning | Heess, Nicolas, Wayne, Gregory, Silver, David, Lillicrap, Tim, Erez, Tom, and Tassa, Yuval. Learn- ing continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pp. 2926â 2934, 2015. Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Koutn´ık, Jan, Schmidhuber, J¨urgen, and Gomez, Faustino. Evolving deep unsupervised convolu- tional networks for vision-based reinforcement learning. In Proceedings of the 2014 conference on Genetic and evolutionary computation, pp. 541â | 1509.02971#30 | 1509.02971#32 | 1509.02971 | [
"1502.03167"
] |
1509.02971#32 | Continuous control with deep reinforcement learning | 548. ACM, 2014a. Koutn´ık, Jan, Schmidhuber, J¨urgen, and Gomez, Faustino. Online evolution of deep convolutional network for vision-based reinforcement learning. In From Animals to Animats 13, pp. 260â 269. Springer, 2014b. Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classiï¬ cation with deep convo- lutional neural networks. In Advances in neural information processing systems, pp. 1097â 1105, 2012. Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015. | 1509.02971#31 | 1509.02971#33 | 1509.02971 | [
"1502.03167"
] |
1509.02971#33 | Continuous control with deep reinforcement learning | 9 Published as a conference paper at ICLR 2016 Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wier- stra, Daan, and Riedmiller, Martin. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human- level control through deep reinforcement learning. Nature, 518(7540):529â 533, 2015. Prokhorov, Danil V, Wunsch, Donald C, et al. Adaptive critic designs. Neural Networks, IEEE Transactions on, 8(5):997â 1007, 1997. Schulman, John, Heess, Nicolas, Weber, Theophane, and Abbeel, Pieter. | 1509.02971#32 | 1509.02971#34 | 1509.02971 | [
"1502.03167"
] |
1509.02971#34 | Continuous control with deep reinforcement learning | Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pp. 3510â 3522, 2015a. Schulman, John, Levine, Sergey, Moritz, Philipp, Jordan, Michael I, and Abbeel, Pieter. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015b. Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In ICML, 2014. Tassa, Yuval, Erez, Tom, and Todorov, Emanuel. Synthesis and stabilization of complex behaviors through online trajectory optimization. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 4906â 4913. IEEE, 2012. Todorov, Emanuel and Li, Weiwei. A generalized iterative lqg method for locally-optimal feed- back control of constrained nonlinear stochastic systems. In American Control Conference, 2005. Proceedings of the 2005, pp. 300â 306. IEEE, 2005. | 1509.02971#33 | 1509.02971#35 | 1509.02971 | [
"1502.03167"
] |
1509.02971#35 | Continuous control with deep reinforcement learning | Todorov, Emanuel, Erez, Tom, and Tassa, Yuval. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026â 5033. IEEE, 2012. Uhlenbeck, George E and Ornstein, Leonard S. On the theory of the brownian motion. Physical review, 36(5):823, 1930. Wahlstr¨om, Niklas, Sch¨on, Thomas B, and Deisenroth, Marc Peter. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015. Watkins, Christopher JCH and Dayan, Peter. Q-learning. Machine learning, 8(3-4):279â 292, 1992. | 1509.02971#34 | 1509.02971#36 | 1509.02971 | [
"1502.03167"
] |
1509.02971#36 | Continuous control with deep reinforcement learning | Wawrzyński, Paweł. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22(10):1484-1497, 2009. Wawrzyński, Paweł. Control policy with autocorrelated noise in reinforcement learning for robotics. International Journal of Machine Learning and Computing, 5:91-95, 2015. Wawrzyński, Paweł and Tanwani, Ajay Kumar. | 1509.02971#35 | 1509.02971#37 | 1509.02971 | [
"1502.03167"
] |
1509.02971#37 | Continuous control with deep reinforcement learning | Autonomous reinforcement learning with experience replay. Neural Networks, 41:156-167, 2013. # Supplementary Information: Continuous control with deep reinforcement learning 7 EXPERIMENT DETAILS We used Adam (Kingma & Ba, 2014) for learning the neural network parameters with a learning rate of 10^-4 and 10^-3 for the actor and critic respectively. For Q we included L2 weight decay of 10^-2 and used a discount factor of γ = 0.99. For the soft target updates we used τ = 0.001. The neural networks used the rectified non-linearity (Glorot et al., 2011) for all hidden layers. The final output layer of the actor was a tanh layer, to bound the actions. The low-dimensional networks had 2 hidden layers with 400 and 300 units respectively (≈ 130,000 parameters). Actions were not included until the 2nd hidden layer of Q. When learning from pixels we used 3 convolutional layers (no pooling) with 32 filters at each layer. This was followed by two fully connected layers with 200 units (≈ 430,000 parameters). The final layer weights and biases of both the actor and critic were initialized from a uniform distribution [−3 × 10^-3, 3 × 10^-3] and [−3 × 10^-4, 3 × 10^-4] for the low dimensional and pixel cases respectively. This was to ensure the initial outputs for the policy and value estimates were near zero. The other layers were initialized from uniform distributions [−1/√f, 1/√f] where f is the fan-in of the layer. The actions were not included until the fully-connected layers. We trained with minibatch sizes of 64 for the low dimensional problems and 16 on pixels. We used a replay buffer size of 10^6. For the exploration noise process we used temporally correlated noise in order to explore well in physical environments that have momentum. We used an Ornstein-Uhlenbeck process (Uhlenbeck & Ornstein, 1930) with θ = 0.15 and σ = 0.2 (a sketch of this process appears below). The Ornstein-Uhlenbeck process models the velocity of a Brownian particle with friction, which results in temporally correlated values centered around 0. | 1509.02971#36 | 1509.02971#38 | 1509.02971 | [
"1502.03167"
] |
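The Ornstein-Uhlenbeck process above can be simulated with a simple Euler discretization: the state decays toward a mean (here 0) at rate θ and receives Gaussian kicks of scale σ at every step. A minimal sketch using the reported θ = 0.15 and σ = 0.2; the unit time step is an assumption, since the paper does not state the discretization explicitly.

```python
import numpy as np

class OrnsteinUhlenbeckNoise:
    """Temporally correlated noise: x_{t+1} = x_t + theta*(mu - x_t)*dt + sigma*sqrt(dt)*N(0, 1)."""
    def __init__(self, dim, theta=0.15, sigma=0.2, mu=0.0, dt=1.0):
        self.theta, self.sigma, self.mu, self.dt = theta, sigma, mu, dt
        self.x = np.full(dim, mu, dtype=float)

    def reset(self):
        self.x[:] = self.mu                      # re-centre at the start of each episode

    def sample(self):
        dx = self.theta * (self.mu - self.x) * self.dt \
             + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape)
        self.x = self.x + dx
        return self.x

noise = OrnsteinUhlenbeckNoise(dim=6)            # e.g. one noise channel per actuated joint
sample = noise.sample()
```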
1509.02971#38 | Continuous control with deep reinforcement learning | # 8 PLANNING ALGORITHM Our planner is implemented as a model-predictive controller (Tassa et al., 2012): at every time step we run a single iteration of trajectory optimization (using iLQG, (Todorov & Li, 2005)), starting from the true state of the system. Every single trajectory optimization is planned over a horizon between 250ms and 600ms, and this planning horizon recedes as the simulation of the world unfolds, as is the case in model-predictive control. The iLQG iteration begins with an initial rollout of the previous policy, which determines the nominal trajectory. We use repeated samples of simulated dynamics to approximate a linear expansion of the dynamics around every step of the trajectory, as well as a quadratic expansion of the cost function. We use this sequence of locally-linear-quadratic models to integrate the value function backwards in time along the nominal trajectory. This back-pass results in a putative modification to the action sequence that will decrease the total cost. We perform a derivative-free line-search over this direction in the space of action sequences by integrating the dynamics forward (the forward-pass), and choose the best trajectory. We store this action sequence in order to warm-start the next iLQG iteration, and execute the first action in the simulator. This results in a new state, which is used as the initial state in the next iteration of trajectory optimization. # 9 ENVIRONMENT DETAILS 9.1 TORCS ENVIRONMENT For the Torcs environment we used a reward function which provides a positive reward at each step for the velocity of the car projected along the track direction and a penalty of −1 for collisions. Episodes were terminated if progress was not made along the track after 500 frames (a toy version of this reward and termination rule is sketched below). | 1509.02971#37 | 1509.02971#39 | 1509.02971 | [
"1502.03167"
] |
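A toy rendering of the Torcs reward and termination rule described above, with made-up field names for the simulator state (the actual Torcs interface is not shown in the paper):

```python
import math

def torcs_step_reward(speed, angle_to_track, collided):
    """Positive reward for velocity projected on the track direction, minus 1 on collision (illustrative)."""
    reward = speed * math.cos(angle_to_track)
    if collided:
        reward -= 1.0
    return reward

class ProgressMonitor:
    """Terminate an episode once no forward progress has been made for 500 consecutive frames."""
    def __init__(self, patience=500):
        self.patience, self.best, self.stalled = patience, -math.inf, 0

    def should_terminate(self, distance_along_track):
        if distance_along_track > self.best:
            self.best, self.stalled = distance_along_track, 0
        else:
            self.stalled += 1
        return self.stalled >= self.patience
```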
1509.02971#39 | Continuous control with deep reinforcement learning | # 9.2 MUJOCO ENVIRONMENTS For physical control tasks we used reward functions which provide feedback at every step. In all tasks, the reward contained a small action cost. For all tasks that have a static goal state (e.g. pendulum swingup and reaching) we provide a smoothly varying reward based on distance to a goal state, and in some cases an additional positive reward when within a small radius of the target state (a sketch of this kind of shaped reward appears below). For grasping and manipulation tasks we used a reward with a term which encourages movement towards the payload and a second component which encourages moving the payload to the target. In locomotion tasks we reward forward action and penalize hard impacts to encourage smooth rather than hopping gaits (Schulman et al., 2015b). In addition, we used a negative reward and early termination for falls which were determined by simple thresholds on the height and torso angle (in the case of walker2d). Table 2 states the dimensionality of the problems and below is a summary of all the physics environments. Table 2: Dimensionality of the MuJoCo tasks — the dimensionality of the underlying physics model dim(s), number of action dimensions dim(a) and observation dimensions dim(o); each entry lists dim(s)/dim(a)/dim(o): blockworld1 18/5/43; blockworld3da 31/9/102; canada 22/7/62; canada2d 14/3/29; cart 2/1/3; cartpole 4/1/14; cartpoleBalance 4/1/14; cartpoleParallelDouble 6/1/16; cartpoleParallelTriple 8/1/23; cartpoleSerialDouble 6/1/14; cartpoleSerialTriple 8/1/23; cheetah 18/6/17; fixedReacher 10/3/23; fixedReacherDouble 8/2/18; fixedReacherSingle 6/1/13; gripper 18/5/43; gripperRandom 18/5/43; hardCheetah 18/6/17; hardCheetahNice 18/6/17; hopper 14/4/14; hyq 37/12/37; hyqKick 37/12/37; movingGripper 22/7/49; movingGripperRandom 22/7/49; pendulum 2/1/3; reacher 10/3/23; reacher3daFixedTarget 20/7/61; reacher3daRandomTarget 20/7/61; reacherDouble 6/1/13; reacherObstacle 18/5/38; reacherSingle 6/1/13; walker2d 18/6/41. task name Brief Description blockworld1 Agent is required to use an arm with gripper constrained to the 2D plane to grab a falling block and lift it against gravity to a fixed target position. | 1509.02971#38 | 1509.02971#40 | 1509.02971 | [
"1502.03167"
] |
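The shaped rewards described above combine a smooth distance term, a small action cost, and an optional bonus inside a target radius. A generic sketch of that pattern (names and coefficients are illustrative, not the paper's exact values):

```python
import numpy as np

def shaped_reward(state_pos, goal_pos, action,
                  action_cost=0.01, bonus_radius=0.05, bonus=1.0):
    """Smooth distance-based reward with a small action penalty and an in-radius bonus."""
    dist = np.linalg.norm(np.asarray(state_pos) - np.asarray(goal_pos))
    reward = -dist - action_cost * float(np.sum(np.square(action)))
    if dist < bonus_radius:
        reward += bonus                          # extra positive reward near the goal state
    return reward

r = shaped_reward([0.1, 0.2], [0.0, 0.0], action=[0.3, -0.1])
```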
1509.02971#40 | Continuous control with deep reinforcement learning | Published as a conference paper at ICLR 2016 # 9.2 MUJOCO ENVIRONMENTS For physical control tasks we used reward functions which provide feedback at every step. In all tasks, the reward contained a small action cost. For all tasks that have a static goal state (e.g. pendulum swingup and reaching) we provide a smoothly varying reward based on distance to a goal state, and in some cases an additional positive reward when within a small radius of the target state. For grasping and manipulation tasks we used a reward with a term which encourages movement towards the payload and a second component which encourages moving the payload to the target. In locomotion tasks we reward forward action and penalize hard impacts to encourage smooth rather than hopping gaits (Schulman et al., 2015b). In addition, we used a negative reward and early termination for falls which were determined by simple threshholds on the height and torso angle (in the case of walker2d). Table 2 states the dimensionality of the problems and below is a summary of all the physics envi- ronments. task name blockworld1 blockworld3da canada canada2d cart cartpole cartpoleBalance cartpoleParallelDouble cartpoleParallelTriple cartpoleSerialDouble cartpoleSerialTriple cheetah ï¬ xedReacher ï¬ xedReacherDouble ï¬ | 1509.02971#39 | 1509.02971#41 | 1509.02971 | [
"1502.03167"
] |
1509.02971#41 | Continuous control with deep reinforcement learning | xedReacherSingle gripper gripperRandom hardCheetah hardCheetahNice hopper hyq hyqKick movingGripper movingGripperRandom pendulum reacher reacher3daFixedTarget reacher3daRandomTarget reacherDouble reacherObstacle reacherSingle walker2d dim(s) 18 31 22 14 2 4 4 6 8 6 8 18 10 8 6 18 18 18 18 14 37 37 22 22 2 10 20 20 6 18 6 18 dim(a) 5 9 7 3 1 1 1 1 1 1 1 6 3 2 1 5 5 6 6 4 12 12 7 7 1 3 7 7 1 5 1 6 dim(o) 43 102 62 29 3 14 14 16 23 14 23 17 23 18 13 43 43 17 17 14 37 37 49 49 3 23 61 61 13 38 13 41 Table 2: Dimensionality of the MuJoCo tasks: the dimensionality of the underlying physics model dim(s), number of action dimensions dim(a) and observation dimensions dim(o). task name Brief Description blockworld1 Agent is required to use an arm with gripper constrained to the 2D plane to grab a falling block and lift it against gravity to a ï¬ xed target position. | 1509.02971#40 | 1509.02971#42 | 1509.02971 | [
"1502.03167"
] |
1509.02971#42 | Continuous control with deep reinforcement learning | 12 Published as a conference paper at ICLR 2016 blockworld3da Agent is required to use a human-like arm with 7-DOF and a simple gripper to grab a block and lift it against gravity to a ï¬ xed target posi- tion. canada Agent is required to use a 7-DOF arm with hockey-stick like appendage to hit a ball to a target. canada2d Agent is required to use an arm with hockey-stick like appendage to hit a ball initialzed to a random start location to a random target location. cart Agent must move a simple mass to rest at 0. The mass begins each trial in random positions and with random velocities. cartpole The classic cart-pole swing-up task. Agent must balance a pole at- tached to a cart by applying forces to the cart alone. The pole starts each episode hanging upside-down. cartpoleBalance The classic cart-pole balance task. Agent must balance a pole attached to a cart by applying forces to the cart alone. The pole starts in the upright positions at the beginning of each episode. cartpoleParallelDouble Variant on the classic cart-pole. Two poles, both attached to the cart, should be kept upright as much as possible. cartpoleSerialDouble Variant on the classic cart-pole. Two poles, one attached to the cart and the second attached to the end of the ï¬ rst, should be kept upright as much as possible. cartpoleSerialTriple Variant on the classic cart-pole. Three poles, one attached to the cart, the second attached to the end of the ï¬ rst, and the third attached to the end of the second, should be kept upright as much as possible. cheetah The agent should move forward as quickly as possible with a cheetah- like body that is constrained to the plane. | 1509.02971#41 | 1509.02971#43 | 1509.02971 | [
"1502.03167"
] |
1509.02971#43 | Continuous control with deep reinforcement learning | This environment is based very closely on the one introduced by Wawrzy´nski (2009); Wawrzy´nski & Tanwani (2013). ï¬ xedReacher Agent is required to move a 3-DOF arm to a ï¬ xed target position. ï¬ xedReacherDouble Agent is required to move a 2-DOF arm to a ï¬ xed target position. ï¬ xedReacherSingle Agent is required to move a simple 1-DOF arm to a ï¬ xed target position. gripper Agent must use an arm with gripper appendage to grasp an object and manuver the object to a ï¬ xed target. gripperRandom The same task as gripper except that the arm object and target posi- tion are initialized in random locations. hardCheetah hardCheetah The agent should move forward as quickly as possible with a cheetah- like body that is constrained to the plane. This environment is based very closely on the one introduced by Wawrzy´nski (2009); Wawrzy´nski & Tanwani (2013), but has been made much more difï¬ cult by removing the stabalizing joint stiffness from the model. | 1509.02971#42 | 1509.02971#44 | 1509.02971 | [
"1502.03167"
] |
1509.02971#44 | Continuous control with deep reinforcement learning | # hopper Agent must balance a multiple degree of freedom monoped to keep it from falling. hyq Agent is required to keep a quadroped model based on the hyq robot from falling. 13 Published as a conference paper at ICLR 2016 movingGripper Agent must use an arm with gripper attached to a moveable platform to grasp an object and move it to a ï¬ xed target. movingGripperRandom The same as the movingGripper environment except that the object po- sition, target position, and arm state are initialized randomly. pendulum The classic pendulum swing-up problem. The pendulum should be brought to the upright position and balanced. Torque limits prevent the agent from swinging the pendulum up directly. reacher3daFixedTarget Agent is required to move a 7-DOF human-like arm to a ï¬ xed target position. reacher3daRandomTarget Agent is required to move a 7-DOF human-like arm from random start- ing locations to random target positions. reacher Agent is required to move a 3-DOF arm from random starting locations to random target positions. reacherSingle Agent is required to move a simple 1-DOF arm from random starting locations to random target positions. reacherObstacle Agent is required to move a 5-DOF arm around an obstacle to a ran- domized target position. walker2d Agent should move forward as quickly as possible with a bipedal walker constrained to the plane without falling down or pitching the torso too far forward or backward. | 1509.02971#43 | 1509.02971#45 | 1509.02971 | [
"1502.03167"
] |
1509.02971#45 | Continuous control with deep reinforcement learning | 14 | 1509.02971#44 | 1509.02971 | [
"1502.03167"
] |
|
1508.06615#0 | Character-Aware Neural Language Models | arXiv:1508.06615v4 [cs.CL] 1 Dec 2015 # Character-Aware Neural Language Models # Yoon Kim† # Yacine Jernite‡ # David Sontag‡ # Alexander M. Rush† # †School of Engineering and Applied Sciences, Harvard University {yoonkim,srush}@seas.harvard.edu # ‡Courant Institute of Mathematical Sciences, New York University {jernite,dsontag}@cs.nyu.edu # Abstract | 1508.06615#1 | 1508.06615 | [
"1507.06228"
] |
|
1508.06615#1 | Character-Aware Neural Language Models | We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information. Introduction Language modeling is a fundamental task in artificial intelligence and natural language processing (NLP), with applications in speech recognition, text generation, and machine translation. A language model is formalized as a probability distribution over a sequence of strings (words), and traditional methods usually involve making an n-th order Markov assumption and estimating n-gram probabilities via counting and subsequent smoothing (Chen and Goodman 1998). The count-based models are simple to train, but probabilities of rare n-grams can be poorly estimated due to data sparsity (despite smoothing techniques). Neural Language Models (NLM) address the n-gram data sparsity issue through parameterization of words as vectors (word embeddings) and using them as inputs to a neural network (Bengio, Ducharme, and Vincent 2003; Mikolov et al. 2010). The parameters are learned as part of the training process. Word embeddings obtained through NLMs exhibit the property whereby semantically close words are likewise close in the induced vector space (as is the case with non-neural techniques such as Latent Semantic Analysis (Deerwester, Dumais, and Harshman 1990)). | 1508.06615#0 | 1508.06615#2 | 1508.06615 | [
"1507.06228"
] |
1508.06615#2 | Character-Aware Neural Language Models | While NLMs have been shown to outperform count-based n-gram language models (Mikolov et al. 2011), they are blind to subword information (e.g. morphemes). For exam- ple, they do not know, a priori, that eventful, eventfully, un- eventful, and uneventfully should have structurally related embeddings in the vector space. Embeddings of rare words can thus be poorly estimated, leading to high perplexities for rare words (and words surrounding them). This is espe- cially problematic in morphologically rich languages with long-tailed frequency distributions or domains with dynamic vocabularies (e.g. social media). In this work, we propose a language model that lever- ages subword information through a character-level con- volutional neural network (CNN), whose output is used as an input to a recurrent neural network language model (RNN-LM). Unlike previous works that utilize subword in- formation via morphemes (Botha and Blunsom 2014; Lu- ong, Socher, and Manning 2013), our model does not require morphological tagging as a pre-processing step. And, unlike the recent line of work which combines input word embed- dings with features from a character-level model (dos Santos and Zadrozny 2014; dos Santos and Guimaraes 2015), our model does not utilize word embeddings at all in the input layer. Given that most of the parameters in NLMs are from the word embeddings, the proposed model has signiï¬ cantly fewer parameters than previous NLMs, making it attractive for applications where model size may be an issue (e.g. cell phones). | 1508.06615#1 | 1508.06615#3 | 1508.06615 | [
"1507.06228"
] |
1508.06615#3 | Character-Aware Neural Language Models | To summarize, our contributions are as follows: ⠢ on English, we achieve results on par with the existing state-of-the-art on the Penn Treebank (PTB), despite hav- ing approximately 60% fewer parameters, and ⠢ on morphologically rich languages (Arabic, Czech, French, German, Spanish, and Russian), our model outperforms various baselines (Kneser-Ney, word- level/morpheme-level LSTM), again with fewer parame- ters. We have released all the code for the models described in this paper.1 Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. | 1508.06615#2 | 1508.06615#4 | 1508.06615 | [
"1507.06228"
] |
1508.06615#4 | Character-Aware Neural Language Models | 1https://github.com/yoonkim/lstm-char-cnn Model The architecture of our model, shown in Figure 1, is straight- forward. Whereas a conventional NLM takes word embed- dings as inputs, our model instead takes the output from a single-layer character-level convolutional neural network with max-over-time pooling. For notation, we denote vectors with bold lower-case (e.g. xt, b), matrices with bold upper-case (e.g. W, Uo), scalars with italic lower-case (e.g. x, b), and sets with cursive upper- case (e.g. V, C) letters. For notational convenience we as- sume that words and characters have already been converted into indices. | 1508.06615#3 | 1508.06615#5 | 1508.06615 | [
"1507.06228"
] |
1508.06615#5 | Character-Aware Neural Language Models | Recurrent Neural Network A recurrent neural network (RNN) is a type of neural net- work architecture particularly suited for modeling sequen- tial phenomena. At each time step t, an RNN takes the input vector xt â Rn and the hidden state vector htâ 1 â Rm and produces the next hidden state ht by applying the following recursive operation: ht = f (Wxt + Uhtâ 1 + b) (1) Here W â Rmà n, U â Rmà m, b â Rm are parameters of an afï¬ ne transformation and f is an element-wise nonlin- earity. In theory the RNN can summarize all historical in- formation up to time t with the hidden state ht. In practice however, learning long-range dependencies with a vanilla RNN is difï¬ cult due to vanishing/exploding gradients (Ben- gio, Simard, and Frasconi 1994), which occurs as a result of the Jacobianâ s multiplicativity with respect to time. (Hochreiter and Schmidhuber 1997) addresses the problem of learning long range dependencies by augmenting the RNN with a memory cell vector ct â Rn at each time step. Concretely, one step of an LSTM takes as input xt, htâ 1, ctâ 1 and produces ht, ct via the following intermediate calculations: i, = 0 W'x; + U'hy_, + bâ ) f, = o(Wix, + Ui + bf) 0, = 0(W°x, + U°hy_1 +b?) g, = tanh(W2x, + U%h,_; + bâ ) ce =f0Oc¢_1+i Og; h, = 0; © tanh(c;) (2) Here o(-) and tanh(-) are the element-wise sigmoid and hy- perbolic tangent functions, © is the element-wise multipli- cation operator, and i;, f;, o, are referred to as input, for- get, and output gates. Att = 1, ho and co are initialized to zero vectors. Parameters of the LSTM are W!, U/,b/â | 1508.06615#4 | 1508.06615#6 | 1508.06615 | [
"1507.06228"
] |
1508.06615#6 | Character-Aware Neural Language Models | for jâ ¬ {i f,0,9}- Memory cells in the LSTM are additive with respect to time, alleviating the gradient vanishing problem. Gradient exploding is still an issue, though in practice simple opti- mization strategies (such as gradient clipping) work well. LSTMs have been shown to outperform vanilla RNNs on many tasks, including on language modeling (Sundermeyer, Schluter, and Ney 2012). It is easy to extend the RNN/LSTM to two (or more) layers by having another network whose absurdity is: recognized 7 betweennext word and prediction Softmax output to obtain distribution over next word Long short-term memory network Highway network Max-over-time poolinglayer Convolution layer with multiple filters of different widths Concatenation of character embeddings moment the is recognized | 1508.06615#5 | 1508.06615#7 | 1508.06615 | [
"1507.06228"
] |
1508.06615#7 | Character-Aware Neural Language Models | Figure 1: Architecture of our language model applied to an exam- ple sentence. Best viewed in color. Here the model takes absurdity as the current input and combines it with the history (as represented by the hidden state) to predict the next word, is. First layer performs a lookup of character embeddings (of dimension four) and stacks them to form the matrix Ck. Then convolution operations are ap- plied between Ck and multiple ï¬ lter matrices. Note that in the above example we have twelve ï¬ ltersâ three ï¬ lters of width two (blue), four ï¬ lters of width three (yellow), and ï¬ ve ï¬ lters of width four (red). A max-over-time pooling operation is applied to obtain a ï¬ xed-dimensional representation of the word, which is given to the highway network. The highway networkâ | 1508.06615#6 | 1508.06615#8 | 1508.06615 | [
"1507.06228"
] |
1508.06615#8 | Character-Aware Neural Language Models | s output is used as the input to a multi-layer LSTM. Finally, an afï¬ ne transformation fol- lowed by a softmax is applied over the hidden representation of the LSTM to obtain the distribution over the next word. Cross en- tropy loss between the (predicted) distribution over next word and the actual next word is minimized. Element-wise addition, multi- plication, and sigmoid operators are depicted in circles, and afï¬ ne transformations (plus nonlinearities where appropriate) are repre- sented by solid arrows. input at t is ht (from the ï¬ rst network). | 1508.06615#7 | 1508.06615#9 | 1508.06615 | [
"1507.06228"
] |
1508.06615#9 | Character-Aware Neural Language Models | Indeed, having mul- tiple layers is often crucial for obtaining competitive perfor- mance on various tasks (Pascanu et al. 2013). Recurrent Neural Network Language Model Let V be the ï¬ xed size vocabulary of words. A language model speciï¬ es a distribution over wt+1 (whose support is V) given the historical sequence w1:t = [w1, . . . , wt]. A re- current neural network language model (RNN-LM) does this | 1508.06615#8 | 1508.06615#10 | 1508.06615 | [
"1507.06228"
] |
1508.06615#10 | Character-Aware Neural Language Models | by applying an afï¬ ne transformation to the hidden layer fol- lowed by a softmax: exp(hy - p! +9â ) Vyrev exp(hy, - p!â + q?") (3) Pr(wey1 = J|wie) where pj is the j-th column of P â Rmà |V| (also referred to as the output embedding),2 and qj is a bias term. Similarly, for a conventional RNN-LM which usually takes words as inputs, if wt = k, then the input to the RNN-LM at t is the input embedding xk, the k-th column of the embedding matrix X â Rnà |V|. | 1508.06615#9 | 1508.06615#11 | 1508.06615 | [
"1507.06228"
] |
1508.06615#11 | Character-Aware Neural Language Models | Our model simply replaces the input embeddings X with the output from a character-level con- volutional neural network, to be described below. If we denote w1:T = [w1, · · · , wT ] to be the sequence of words in the training corpus, training involves minimizing the negative log-likelihood (N LL) of the sequence T NLL=-â > log Pr(w;|w1.1-1) (4) t=1 which is typically done by truncated backpropagation through time (Werbos 1990; Graves 2013). Character-level Convolutional Neural Network In our model, the input at time t is an output from a character-level convolutional neural network (CharCNN), which we describe in this section. CNNs (LeCun et al. 1989) have achieved state-of-the-art results on computer vi- sion (Krizhevsky, Sutskever, and Hinton 2012) and have also been shown to be effective for various NLP tasks (Collobert et al. 2011). Architectures employed for NLP applications differ in that they typically involve temporal rather than spa- tial convolutions. Let C be the vocabulary of characters, d be the dimen- sionality of character embeddings,3 and Q â | 1508.06615#10 | 1508.06615#12 | 1508.06615 | [
"1507.06228"
] |
1508.06615#12 | Character-Aware Neural Language Models | Rdà |C| be the matrix character embeddings. Suppose that word k â V is made up of a sequence of characters [c1, . . . , cl], where l is the length of word k. Then the character-level representation of k is given by the matrix Ck â Rdà l, where the j-th col- umn corresponds to the character embedding for cj (i.e. the cj-th column of Q).4 We apply a narrow convolution between Ck and a ï¬ lter (or kernel) H â Rdà w of width w, after which we add a bias and apply a nonlinearity to obtain a feature map f k â Rlâ w+1. Speciï¬ cally, the i-th element of f k is given by: f* [i] = tanh((C*[x,i:i+w-1],H)+b) (6) | 1508.06615#11 | 1508.06615#13 | 1508.06615 | [
"1507.06228"
] |
1508.06615#13 | Character-Aware Neural Language Models | 2In our work, predictions are at the word-level, and hence we still utilize word embeddings in the output layer. 3Given that |C| is usually small, some authors work with one- hot representations of characters. However we found that using lower dimensional representations of characters (i.e. d < |C|) per- formed slightly better. 4Two technical details warrant mention here: (1) we append start-of-word and end-of-word characters to each word to better represent preï¬ xes and sufï¬ xes and hence Ck actually has l + 2 columns; (2) for batch processing, we zero-pad Ck so that the num- ber of columns is constant (equal to the max word length) for all words in V. where C* [x, i : i+-wâ 1] is the i-to-(i+wâ 1)-th column of C* and (A, B) = Tr(AB*) is the Frobenius inner product. Finally, we take the max-over-time yk = max i f k[i] (6) as the feature corresponding to the ï¬ lter H (when applied to word k). The idea is to capture the most important featureâ the one with the highest valueâ for a given ï¬ lter. A ï¬ lter is essentially picking out a character n-gram, where the size of the n-gram corresponds to the ï¬ lter width. We have described the process by which one feature is obtained from one ï¬ lter matrix. Our CharCNN uses multiple ï¬ lters of varying widths to obtain the feature vector for k. So if we have a total of h ï¬ lters H1, . . . , Hh, then yk = [yk h] is the input representation of k. For many NLP applications h is typically chosen to be in [100, 1000]. Highway Network We could simply replace xk (the word embedding) with yk at each t in the RNN-LM, and as we show later, this simple model performs well on its own (Table 7). One could also have a multilayer perceptron (MLP) over yk to model in- teractions between the character n-grams picked up by the ï¬ lters, but we found that this resulted in worse performance. | 1508.06615#12 | 1508.06615#14 | 1508.06615 | [
"1507.06228"
] |
1508.06615#14 | Character-Aware Neural Language Models | Instead we obtained improvements by running yk through a highway network, recently proposed by Srivastava et al. (2015). Whereas one layer of an MLP applies an afï¬ ne trans- formation followed by a nonlinearity to obtain a new set of features, z = g(Wy + b) (7) one layer of a highway network does the following: z=tOg(Wuy+by)+(1-t)oy (8) where g is a nonlinearity, t = Ï (WT y + bT ) is called the transform gate, and (1â t) is called the carry gate. Similar to the memory cells in LSTM networks, highway layers allow for training of deep networks by adaptively carrying some dimensions of the input directly to the output.5 By construc- tion the dimensions of y and z have to match, and hence WT and WH are square matrices. | 1508.06615#13 | 1508.06615#15 | 1508.06615 | [
"1507.06228"
] |
1508.06615#15 | Character-Aware Neural Language Models | Experimental Setup As is standard in language modeling, we use perplexity (P P L) to evaluate the performance of our models. Perplex- ity of a model over a sequence [w1, . . . , wT ] is given by PPL= exp(â *) (9) where N LL is calculated over the test set. We test the model on corpora of varying languages and sizes (statistics avail- able in Table 1). We conduct hyperparameter search, model introspection, and ablation studies on the English Penn Treebank (PTB) (Marcus, Santorini, and Marcinkiewicz 1993), utilizing the 5Srivastava et al. (2015) recommend initializing bT to a neg- ative value, in order to militate the initial behavior towards carry. We initialized bT to a small interval around â | 1508.06615#14 | 1508.06615#16 | 1508.06615 | [
"1507.06228"
] |