Data-efficient Deep Reinforcement Learning for Dexterous Manipulation

Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller

arXiv:1704.03073 [cs.LG, cs.RO], 12 pages, 5 figures, April 2017. http://arxiv.org/pdf/1704.03073

Abstract: Deep learning and reinforcement learning methods have recently been used to solve a variety of problems in continuous control domains. An obvious application of these techniques is dexterous manipulation tasks in robotics, which are difficult to solve using traditional control theory or hand-engineered approaches. One example of such a task is to grasp an object and precisely stack it on another. Solving this difficult and practically relevant problem in the real world is an important long-term goal for the field of robotics. Here we take a step towards this goal by examining the problem in simulation and providing models and techniques aimed at solving it. We introduce two extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), a model-free Q-learning based method, which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find control policies that robustly grasp objects and stack them. Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.
DDPG bears a relation to several other recent model-free RL algorithms. The NAF algorithm [7], which has recently been applied to a real-world robotics problem [5], can be viewed as a DDPG variant in which the Q-function is quadratic in the action, so that the optimal action can be recovered directly from the Q-function, making a separate representation of the policy unnecessary. DDPG and especially NAF are the continuous-action counterparts of DQN [22], a Q-learning algorithm that recently re-popularized the use of experience replay and target networks to stabilize learning with powerful function approximators such as neural networks. DDPG, NAF, and DQN all interleave mini-batch updates of the Q-function (and of the policy, in the case of DDPG) with data collection via interaction with the environment. These mini-batch updates set DDPG and DQN apart from the otherwise closely related NFQ and NFQCA algorithms for discrete and continuous actions, respectively. NFQ [29] and NFQCA [9] employ the same basic update as DDPG and DQN; however, they are batch algorithms that perform updates less frequently and fully
re-fit the Q-function and the policy network after every episode, with several hundred iterations of gradient descent using Rprop [28] and full-batch updates over the entire replay buffer. This aggressive training makes NFQCA data-efficient, but the full-batch updates become impractical with large networks, large observation spaces, or a large number of training episodes. Finally, DPG can be seen as the deterministic limit of a particular instance of the stochastic value gradients (SVG) family [11], which also computes policy gradients via back-propagation of value gradients but optimizes stochastic policies.
The table below summarizes these relationships:

                                          Discrete   Continuous
  Mini-batch learning, target networks    DQN        DDPG, NAF
  Full-batch learning with Rprop,
  parameter resetting                     NFQ        NFQCA

One appealing property of the above family of algorithms is that the use of a Q-function facilitates off-policy learning. This allows the collection of experience data to be decoupled from the updates of the policy and value networks, a desirable property given that experience is expensive to collect in a robotics setup. In this context, because neural network training is often slow, decoupling allows us to make many parameter update steps per step in the environment, ensuring that the networks are well fit to the data that is currently available.

# IV. TASK AND EXPERIMENTAL SETUP

The full task that we consider in this paper is to use the arm to pick up one Lego Duplo brick from the table and stack it onto the remaining brick. This "composite" task can be decomposed into several subtasks, including grasping and stacking. In our experiments we consider the full task as well as the two subtasks in isolation, as shown in the table below:

                Starting state          Reward condition
  Grasp         Both bricks on table    Brick 1 above table
  StackInHand   Brick 1 in gripper      Bricks stacked
  Stack         Both bricks on table    Bricks stacked
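To make the task definitions concrete, the following is a minimal sketch of how the three variants could be encoded. All names here (TaskSpec, the state keys, the helper predicates) are hypothetical illustrations, not the paper's code; the 3 cm lift threshold is the one given in Section VI.

```python
# Illustrative encoding of the three task variants from the table above.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskSpec:
    name: str
    start_state: str                  # which starting configuration to sample
    success: Callable[[dict], bool]   # sparse-reward predicate on the state

def brick1_lifted(state: dict) -> bool:
    # Grasp succeeds when brick 1 is held above the table (> 3 cm, see Sec. VI).
    return state["brick1_height"] > 0.03

def bricks_stacked(state: dict) -> bool:
    return state["bricks_stacked"]

TASKS = {
    "Grasp":       TaskSpec("Grasp", "both_bricks_on_table", brick1_lifted),
    "StackInHand": TaskSpec("StackInHand", "brick1_in_gripper", bricks_stacked),
    "Stack":       TaskSpec("Stack", "both_bricks_on_table", bricks_stacked),
}

def sparse_reward(task: TaskSpec, state: dict) -> float:
    # Reward of one upon successful completion and zero otherwise (Sec. IV).
    return 1.0 if task.success(state) else 0.0
```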
In every episode the arm starts in a random configuration, with the positioning of gripper and bricks appropriate for the task of interest. We implement the experiments in a physically plausible simulation in MuJoCo [36], with the simulated arm closely matched to a real-world Jaco arm (a robotic arm developed by Kinova Robotics) in our lab. Episodes are terminated after 150 steps, with each step corresponding to 50 ms of physical simulation time; the agent therefore has 7.5 seconds to perform the task. Unless otherwise noted, we give a reward of one upon successful completion of the task and zero otherwise.
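For reference, the episode timing above works out as follows; the constant names are illustrative only.

```python
# Episode timing as described in the text.
EPISODE_STEPS = 150                            # steps before termination
CONTROL_DT = 0.05                              # 50 ms of simulated physics per step
HORIZON_SECONDS = EPISODE_STEPS * CONTROL_DT   # 7.5 s available to solve the task
```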
The observation vector provided to the agent contains the angles and angular velocities of the 6 joints of the arm and the 3 fingers of the gripper. In addition, we provide the position and orientation of the two bricks and the relative distances of the two bricks to the pinch position of the gripper, i.e. roughly the position where the fingertips would meet if the fingers were closed. The 9-dimensional continuous action directly sets the velocities of the arm and finger joints. In experiments not reported in this paper we tried an observation vector containing only the raw state of the brick in addition to the arm configuration (i.e. without the vector between the end-effector and the brick) and found that this increased the number of environment interactions needed by roughly a factor of two to three.
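A hypothetical spec mirroring this description is sketched below. The paper does not state the exact encoding of brick poses or relative vectors, so those dimensions are assumptions.

```python
# Assumed observation/action layout; only the joint counts and the
# 9-dimensional action are stated explicitly in the paper.
OBSERVATION_SPEC = {
    "joint_angles":     6 + 3,   # 6 arm joints + 3 finger joints
    "joint_velocities": 6 + 3,
    "brick_poses":      2 * 7,   # position (3) + quaternion (4) per brick (assumed encoding)
    "pinch_to_brick":   2 * 3,   # vector from pinch site to each brick (assumed encoding)
}
ACTION_DIM = 9                   # joint-velocity commands for arm and fingers

obs_dim = sum(OBSERVATION_SPEC.values())
```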
The only hyper-parameter that we optimize for each experimental condition is the learning rate. For each condition we train and measure the performance of 10 agents with different random initial network parameters. After every 30 training episodes the agent is evaluated for 10 episodes. We use the mean performance at each evaluation phase as the performance measure presented in all plots; we found empirically that 10 evaluation episodes gave a reasonable proxy for performance on the studied tasks. In the plots the line shows the mean performance for the set of agents and the shaded regions correspond to the range between the worst and best performing agent in the set. In all plots the x-axis represents the number of environment transitions seen so far at an evaluation point (in millions) and the y-axis represents episode return. A video of the full setup, including examples of policies solving the component and full tasks, can be found here: https://www.youtube.com/watch?v=8QnD8ZM0YCo.
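The evaluation protocol can be summarized in a few lines; `run_episode_return` is a hypothetical stand-in for rolling out one evaluation episode and returning its episode return.

```python
# Sketch of the evaluation protocol described above.
import numpy as np

NUM_AGENTS = 10      # independent random network initializations per condition
EVAL_EVERY = 30      # training episodes between evaluation phases
EVAL_EPISODES = 10   # episodes averaged per evaluation phase

def evaluate(agent, run_episode_return) -> float:
    returns = [run_episode_return(agent) for _ in range(EVAL_EPISODES)]
    return float(np.mean(returns))

def performance_band(per_agent_curves: np.ndarray):
    # per_agent_curves: shape (NUM_AGENTS, num_eval_points)
    mean = per_agent_curves.mean(axis=0)                                 # plotted line
    lo, hi = per_agent_curves.min(axis=0), per_agent_curves.max(axis=0)  # shaded region
    return mean, lo, hi
```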
# V. ASYNCHRONOUS DPG WITH VARIABLE REPLAY STEPS

In this section we study two methods for extending the DDPG algorithm and find that they can have a significant effect on data and computation efficiency, in some cases making the difference between finding a solution to a task or not.

a) Multiple mini-batch replay steps: Deep neural networks can require many steps of gradient descent to converge. In a supervised learning setting this purely affects computation time. In reinforcement learning, however, neural network training is interleaved with the acquisition of interaction experience, and the nature of the latter is affected by the state of the former, and vice versa, so the situation is more complicated. To gain a better understanding of this interaction we modified the original DDPG algorithm as described in [20] to perform a fixed but configurable number of mini-batch updates per step in the environment; in [20] one update was performed after each new interaction step.
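A minimal sketch of this modification, assuming a generic `env`/`agent`/`replay_buffer` API (all hypothetical names):

```python
# One environment step of DPG-R: collect one transition, then perform a
# configurable number of mini-batch updates from the replay buffer.
def dpg_r_step(env, agent, replay_buffer, state, replay_steps: int):
    action = agent.act(state)                      # policy output + exploration noise
    next_state, reward, done = env.step(action)
    replay_buffer.add(state, action, reward, next_state)
    for _ in range(replay_steps):                  # replay_steps = 1 recovers original DDPG
        batch = replay_buffer.sample()
        agent.update(batch)                        # one mini-batch critic/actor update
    return next_state, done
```

Setting `replay_steps = 1` recovers the update schedule of the original DDPG; larger values trade extra computation for data efficiency, as studied below.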
We refer to DDPG with a configurable number of update steps as DPG-R and tested the impact of this modification on the two primitive tasks Grasp and StackInHand. The results are shown in Fig. 2. It is evident that the number of update steps has a dramatic effect on the amount of experience data required for learning successful policies. After one million interactions the original version of DDPG with a single update step (blue traces) appears to have made no progress towards a successful policy for stacking, and only a small number of controllers have learned to grasp. Increasing the number of updates per interaction to 5 greatly improves the results (green traces), and with 40 updates (purple) the first successful policies for stacking and grasping are obtained after 200,000 and 300,000 interactions respectively (corresponding to 1,300 and 2,000 episodes). The effect is task dependent and the relationship between update steps and convergence is clearly not linear, but in both cases we continue to see a reduction in total environment interaction up to 40 update steps, the maximum used in the experiment.
One may speculate as to why changing the number of updates per environment step has such a pronounced effect. One hypothesis is that, loosely speaking and drawing an analogy to supervised learning, insufficient training leads to underfitting of the policy and value networks with respect to the already collected training data. Unlike in supervised learning, however, where the dataset is typically fixed, the quality of the policy directly feeds back into the data acquisition process, since the policy network is used for exploration, thus affecting the quality of the data used in future iterations of network training. We have observed in various experiments (not listed here) that other aspects of the network architecture and training process can have a similar effect on the extent of underfitting, for example the type of non-linearities used in the network layers, the size of the layers, and the learning rate. It is important to note that one cannot replicate the effect of multiple replay steps simply by increasing the learning rate; in practice we find that attempts to do so make training unstable.
Fig. 2: Mean episode return as a function of the number of transitions seen (in millions) for DPG-R (single worker) on the Grasp (left) and StackInHand (right) tasks, with 1 (blue), 5 (green), 10 (red), 20 (yellow) and 40 (purple) mini-batch updates per environment step.

b) Asynchronous DPG: While increasing the number of update steps relative to the number of environment interactions greatly improves the data efficiency of the algorithm, it can also strongly increase the computation time. In the extreme case, in simulation, when the overall run time is dominated by the network updates, run time may scale linearly with the number of replay steps. In this setting it is desirable to parallelize the update computations. In a real robotics setup, by contrast, the overall run time is typically dominated by the collection of robot interactions, and it is then desirable to collect experience from multiple robots simultaneously (e.g. as in [39, 5]).
We therefore develop an asynchronous version of DPG that allows parallelization of training and environment interaction by combining multiple instances of a DPG-R actor and critic that share their network parameters and can be configured to either share or have independent experience replay buffers. This is inspired by the A3C algorithm proposed in [23], and is also analogous to [5, 39]; we found this to be an effective way to share parameters for DPG. That is, we employ asynchronous updates whereby each worker has its own copy of the parameters and uses it to compute gradients, which are then applied to a shared parameter instance without any synchronization. We use the Adam optimizer [15] with local, non-shared first-order statistics and a single shared instance of second-order statistics. The pseudocode of asynchronous DPG-R is shown in Algorithm 1.
# Algorithm 1 (A)DPG-R algorithm

Initialize global shared critic and actor network parameters: θ^Q'' and θ^μ''
Pseudocode for each learner thread:
  Initialize critic network Q(s, a | θ^Q) and actor μ(s | θ^μ) with weights θ^Q and θ^μ
  Initialize target networks Q' and μ' with weights: θ^Q' ← θ^Q, θ^μ' ← θ^μ
  Initialize replay buffer R
  for episode = 1, M do
    Receive initial observation state s_1
    for t = 1, T do
      Select action a_t = μ(s_t | θ^μ) + N_t according to the current policy and exploration noise
      Perform action a_t, observe reward r_t and new state s_{t+1}
      Store transition (s_t, a_t, r_t, s_{t+1}) in R
      for update = 1, R do
        Sample a random minibatch of N transitions (s_i, a_i, r_i, s_{i+1}) from R
        Set y_i = r_i + γ Q'(s_{i+1}, μ'(s_{i+1} | θ^μ') | θ^Q')
        Perform asynchronous update of the shared critic parameters by minimizing the loss:
          L = (1/N) Σ_i (y_i - Q(s_i, a_i | θ^Q))^2
        Perform asynchronous update of the shared actor parameters using the sampled policy gradient:
          ∇_{θ^μ''} J ≈ (1/N) Σ_i ∇_a Q(s, a | θ^Q)|_{s=s_i, a=μ(s_i)} ∇_{θ^μ} μ(s | θ^μ)|_{s=s_i}
        Copy the shared parameters to the local ones: θ^Q ← θ^Q'', θ^μ ← θ^μ''
        Every S update steps, update the target networks: θ^Q' ← θ^Q, θ^μ' ← θ^μ
      end for
    end for
  end for
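For readers who prefer working code over pseudocode, below is a single-worker rendition of the inner update written with PyTorch. It is a sketch under assumptions: the network modules and optimizers are supplied by the caller, and the asynchronous variant would apply these gradients to the shared parameter instance rather than to local copies.

```python
# Single-worker version of the inner update of Algorithm 1 (DDPG update).
import torch
import torch.nn as nn

def ddpg_update(critic, actor, target_critic, target_actor,
                critic_opt, actor_opt, batch, gamma=0.99):
    s, a, r, s_next = batch  # tensors: states, actions, rewards, next states
    with torch.no_grad():
        # y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1}))
        y = r + gamma * target_critic(s_next, target_actor(s_next))
    # Critic: minimize L = (1/N) sum_i (y_i - Q(s_i, a_i))^2
    critic_loss = nn.functional.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: ascend grad_a Q(s, a)|_{a=mu(s)} * grad_theta mu(s)
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

def update_targets(net, target, tau=1.0):
    # Algorithm 1 copies parameters every S update steps (hard update, tau = 1);
    # soft updates (tau << 1) are the common DDPG alternative.
    with torch.no_grad():
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.mul_(1 - tau).add_(tau * p)
```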
Figure 3 compares the performance of ADPG-R for different numbers of update steps with 16 workers (all workers performing both data collection and update computation). Similar to Fig. 2 we find that increasing the ratio of update steps to environment steps improves data efficiency, although the effect appears somewhat less pronounced than for DPG-R.

Fig. 3: Mean episode return as a function of the number of transitions seen (in millions) for ADPG-R (16 workers) on the Grasp (left) and StackInHand (right) tasks. Colored traces indicate the number of replay steps as in Fig. 2.

Figure 4 (top row) directly compares the single-worker and asynchronous versions of DPG-R. In both cases we choose the best performing number of replay steps and learning rate. As we can see, the use of multiple workers does not affect overall
data efficiency for StackInHand, but roughly halves it for Grasp, with the caveat that the single worker has not quite converged. Figure 4 (bottom row) plots the same data as a function of environment steps per worker. This measure corresponds to the optimal wall-clock efficiency that we can achieve under the assumption that communication time between workers is negligible compared to environment interaction and gradient computation (this usually holds up to a certain degree of parallelization). This theoretical wall-clock time for running an experiment with 16 workers is about 16x lower for StackInHand and roughly 8x lower for Grasp.

Overall these results show that distributing neural network training and data collection across multiple computers and robots can be an extremely effective way of reducing the overall run time of experiments, making it feasible to run more challenging experiments. We make extensive use of asynchronous DPG for the remaining experiments.
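A Hogwild-style sketch of the worker setup, in the spirit of A3C [23]: parameters live in shared memory and workers update them without locking. This does not reproduce the paper's exact Adam variant (local first-order, shared second-order statistics); `make_agent` and `worker_fn` are hypothetical names.

```python
# Lock-free parameter sharing across worker processes (sketch).
import torch.multiprocessing as mp

def launch_workers(make_agent, worker_fn, num_workers=16):
    shared_agent = make_agent()
    for net in (shared_agent.actor, shared_agent.critic):
        net.share_memory()   # updates applied by any worker land in shared memory
    procs = [mp.Process(target=worker_fn, args=(shared_agent, rank))
             for rank in range(num_workers)]
    for p in procs: p.start()
    for p in procs: p.join()
```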
Fig. 4: Two panels: (a) Grasp; (b) StackInHand. 16 workers vs. a single worker, in data (total across all workers, top row) and "wall-clock" (per-worker, bottom row) time, in millions of transitions, with the best replay-step and learning-rate selection.

# VI. COMPOSITE SHAPING REWARDS
In the previous section we discussed how the ability of DDPG to exploit information available in the acquired interaction data affects learning speed. One important factor determining what information is available from this data is the nature of the reward function. The reward function in the previous section was a "sparse" or "pure" reward, in which a reward of 1 was given for states that correspond to successful task completion (brick 1 lifted above 3 cm for Grasp; bricks stacked for Stack) and 0 otherwise. For this reward to be useful for learning it is of course necessary that the agent is able to enter the goal region in state space with whatever exploration strategy is chosen. This was indeed the case for the two subtasks in isolation, but it is highly unlikely for the full task: without further guidance, naïve random exploration is very unlikely to lead to a successful grasp and stack, as we verify experimentally in Fig. 5. One commonly used solution to this problem is to provide informative shaping rewards that allow a learning signal to be obtained even with simple exploration strategies, e.g. by embedding information about the value function in the reward for every transition acquired from the environment. For instance, for a simple reaching problem with a robotic arm we could define a shaping reward that takes into account the distance between the end-effector and the target.
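As an illustration, a bounded, smoothly varying shaping reward for such a reaching problem might look as follows; this is a generic example, not the paper's exact function.

```python
# A minimal distance-based shaping reward for a reaching task.
import numpy as np

def reach_shaping(end_effector: np.ndarray, target: np.ndarray,
                  scale: float = 10.0) -> float:
    d = np.linalg.norm(end_effector - target)
    return float(np.exp(-scale * d))   # 1 at the target, decaying smoothly towards 0
```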
While this is a convenient way of embedding prior knowledge about the solution, and is a widely and successfully used approach for simple problems, it comes with several caveats, especially for complex sequential or compositional tasks such as the one we are interested in here. Firstly, while a suitable shaping reward may be easy to construct for simple problems, for more complex composite tasks such as the one considered in this paper a suitable reward function is often non-obvious and may require considerable effort and experimentation. Secondly, and related to the previous point, the use of a shaping reward typically alters the solution to the optimization problem. The effect of this can be benign, but especially for complex tasks a small mistake may lead to complete failure of learning, as we demonstrate below. Thirdly, in a robotics setup not all the information that would be desirable for defining a good shaping reward may be easily available. For instance, in the manipulation problem considered in this paper, determining the position of the Lego bricks requires extra instrumentation of the experimental setup.
In this section we propose and analyze several possible reward functions for the full Stack task, aiming to provide a recipe that can be applied to other tasks with similar compositional structure. Shaping rewards are typically defined based on some notion of distance from or progress towards a goal state. We attempt to transfer this idea to our compositional setup via what we call composite (shaping) rewards. These reward functions return an increasing reward as the agent completes components of the full task. They are either piecewise constant or smoothly varying across different regions of the state space that correspond to completed subtasks.

TABLE I: Composite reward function

Sparse reward components
  Subtask          Description                                                       Reward
  Reach brick 1    hypothetical pinch site position of the fingers is in a box
                   around the first brick position                                   0.125
  Grasp brick 1    the first brick is located at least 3 cm above the table
                   surface, which is only possible if the arm is holding the brick   0.25
  Stack brick 1    bricks stacked                                                    1.00

Smoothly varying reward components
  Reaching to brick 1   distance of the pinch site to the first brick
                        (non-linear, bounded)                                        [0, 0.125]
  Reaching to stack     while grasped: distance of the first brick to the stacking
                        site of the second brick (non-linear, bounded)               [0.25, 0.5]
In the case of Stack we use the reward components described in Table I. These components can be combined in different ways; we consider three different composite rewards in addition to the original sparse task reward:

Grasp shaping: Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.25 when brick 1 has been grasped and a reward of 1.0 after completion of the full task.

Reach and grasp shaping: Reach brick 1, Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.125 when close to brick 1, a reward of 0.25 when brick 1 has been grasped, and a reward of 1.0 after completion of the full task.

Full composite shaping: the sparse reward components as before, in combination with the distance-based smoothly varying components.
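A sketch of how the full composite shaping reward could be assembled from the components in Table I; the state keys and the linear rescaling of the smooth terms into the table's brackets are assumptions, not the paper's implementation.

```python
# Hypothetical "full composite shaping" reward built from Table I.
def composite_reward(state: dict) -> float:
    if state["bricks_stacked"]:
        return 1.0                                     # Stack brick 1
    if state["brick1_grasped"]:
        # 0.25 for the grasp, plus up to 0.25 for approaching the stacking site
        progress = 1.0 - min(state["dist_brick1_to_stack_site"], 1.0)
        return 0.25 + 0.25 * progress                  # in [0.25, 0.5]
    if state["pinch_in_box_around_brick1"]:
        return 0.125                                   # Reach brick 1
    # up to 0.125 for moving the pinch site towards brick 1
    progress = 1.0 - min(state["dist_pinch_to_brick1"], 1.0)
    return 0.125 * progress                            # in [0, 0.125]
```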
Figure 5 shows the results of learning with the above reward functions (blue traces). The figure makes clear that learning with the sparse reward alone does not succeed for the full task. Introducing an intermediate reward for grasping allows the agent to learn to grasp, but learning is very slow. The time to successful grasping can be substantially reduced by adding a distance-based reward component for reaching to the first brick, but learning does not progress beyond grasping. Only with the full composite shaping reward, which adds smoothly varying components for reaching, grasping, and stacking, can the full task be solved.

Although the above reward functions are specific to this particular task, we expect that the idea of a composite reward function can be applied to many other tasks, allowing learning to succeed even for challenging problems. Nevertheless, great care must be taken when defining the reward function. We encountered several unexpected failure cases while designing the reward function components: e.g. the reach and grasp components leading to a grasp unsuitable for stacking; the agent not stacking the bricks because it would stop receiving the grasping reward before it received the stacking reward; and the agent flipping the brick because the grasping reward was calculated with respect to the wrong reference point on the brick. We show examples of these in the video: https://www.youtube.com/watch?v=8QnD8ZM0YCo.
# VII. LEARNING FROM INSTRUCTIVE STATES

In the previous section we described a strategy for designing effective reward functions for complex compositional tasks that alleviates the burden of exploration. We also pointed out, however, that designing shaping rewards can be error prone and may rely on privileged information. In this section we describe a different strategy for embedding prior knowledge into the training process and improving exploration, one that reduces the reliance on carefully designed reward functions. Specifically, we propose to let the distribution of states at which the learning agent is initialized at the beginning of an episode reflect the compositional nature of the task. In our case, instead of always initializing the agent at the beginning of the full task with both bricks on the table, we can, for instance, occasionally initialize the agent with the brick already in its hand, prepared for stacking, in the same way as when learning the subtask StackInHand in Section V. Trajectories of policies solving the task will have to visit this region of state space before stacking the bricks, so we can think of this initialization strategy as initializing the agent closer to the goal.
More generally, we can choose to initialize episodes with states taken from anywhere along, or close to, successful trajectories. Suitable states can be manually defined (as in Section V), or they can be obtained from a human demonstrator or from a previously trained agent that can partially solve the task. This can be seen as a form of apprenticeship learning in which we provide teacher information by influencing the state visitation distribution.
We perform experiments with two alternative methods for generating the starting states. The first uses manually defined initial states and amounts to the possibility discussed above: we initialize the learning agent either in the original starting states, with both bricks located on the table, or in states where the first brick is already in the gripper, as if the agent had just performed a successful grasp and lifted the brick. These two sets of start states correspond to those used in section V.

The second method for generating instructive starting states can also be used on a real robot, provided a human demonstrator or a pre-trained policy is available. It aims at initializing the learning agent along solution-trajectory states in a more fine-grained fashion. For each episode we sample a random number of steps between one and the expected number of steps required to solve the task from the original starting states, and then run the demonstrator for this number of steps. The final state of this process is used as the starting state for the learning agent, which then acts in the environment for the remainder of the episode.

The results of these experiments are shown in Figure 5, which plots the four reward functions considered in the previous section when combined with the simple augmented start state distribution. While there is still no learning for the basic sparse reward case, results obtained with all other reward functions are improved.
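A minimal sketch of this second, trajectory-based scheme follows. It is our reconstruction of the procedure described above; `demonstrator.act`, the Gym-style `env.step` signature, and `expected_steps` are assumed interfaces, not the paper's code.

```python
import random

def start_state_from_demonstrator(env, demonstrator, expected_steps):
    """Initialize an episode part-way along a demonstration trajectory.

    Draws a prefix length uniformly between 1 and the expected number of
    steps needed to solve the task, rolls the demonstrator forward for
    that many steps, and hands the resulting state to the learner.
    """
    state = env.reset()  # original start: both bricks on the table
    k = random.randint(1, expected_steps)
    for _ in range(k):
        action = demonstrator.act(state)
        state, _, done, _ = env.step(action)
        if done:
            break  # demonstrator solved the task early; start here anyway
    return state
```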
Fig. 5: Four panels showing (a) no progress without extra shaping and (b, c, d) different shaping strategies for the composite task, with starting states with both bricks on the table (blue), manually defined initial states (green), and initial states sampled continuously along solution trajectories (red). On all plots, the x-axis is millions of transitions of total experience and the y-axis is mean episode return. Policies with mean return over 100 robustly perform the full Stack from different starting states.

In particular, even for the second simplest reward function (Grasp shaping), we now obtain some controllers that can solve the full task. Learning with the full composite shaping reward is faster and more robust than without the use of instructive states.
The top left plot of Figure 5 (red trace) shows results for the case where the episode is initialized anywhere along trajectories from a pre-trained controller. We use this start state distribution in combination with the basic sparse reward for the overall task (Stack without shaping). Episodes were configured to be 50 steps, shorter than in the previous experiments, to better suit this setup with assisted exploration. During testing we still used 150-step episodes as before, so the traces are comparable. We see a large improvement in performance in comparison to the two-state variant, even in the absence of any shaping rewards: we can learn a robust policy for all seeds within a total of 1 million environment transitions. This corresponds to less than 1 hour of interaction time on 16 simulated robots.

Overall these results suggest that an appropriate start state distribution not only greatly speeds up learning, it also allows simpler reward functions to be used. In our final experiment the simplest reward function, indicating only overall experimental success, was sufficient to solve the task. Considering the difficulties that can be associated with designing good shaping rewards, this is an encouraging result.
The robustness of the trained policies to variation in the starting state is also quite encouraging. Table II lists the success rate by task over 1000 trials. A video of trained policies performing the Grasp, StackInHand and Stack tasks from different initial states can be found in the supplementary material.

TABLE II: Robustness of learned policies.

Task          Success rate (1000 random starts)
Grasp         99.2%
StackInHand   98.2%
Stack         95.5%

# VIII. CONCLUSION

We have introduced two extensions to the DDPG algorithm which make it a powerful method for learning robust policies for complex continuous control tasks. Specifically, we have shown that by decoupling the frequency of network updates from the environment interaction we can substantially improve data-efficiency, in some cases making the difference between finding a solution or not. The asynchronous version of DDPG, which allows data collection and network training to be distributed over several computers and (simulated) robots, has provided us with a close-to-linear speedup in wall-clock time for 16 parallel workers.
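As an illustration of the first extension, the decoupled training loop can be sketched as below. This is our hedged sketch, not the paper's implementation; `agent`, `replay`, and the Gym-style `env.step` signature are assumptions.

```python
def train(env, agent, replay, total_steps, updates_per_step=4):
    """DDPG-style loop with update frequency decoupled from interaction.

    Performing several mini-batch updates per environment step reuses
    replayed off-policy data more aggressively, trading computation
    for data efficiency.
    """
    state = env.reset()
    for _ in range(total_steps):
        action = agent.act(state)  # exploration noise assumed inside act()
        next_state, reward, done, _ = env.step(action)
        replay.add(state, action, reward, next_state, done)
        state = env.reset() if done else next_state
        for _ in range(updates_per_step):
            batch = replay.sample()
            agent.update(batch)  # one critic + actor gradient step
```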
In addition, we presented two methods that help guide the learning process towards good solutions, thereby reducing the pressure on exploration strategies and speeding up learning. The first, composite rewards, is a recipe for constructing effective reward functions for tasks that consist of a sequence of subtasks. The second, instructive starting states, can be seen as a lightweight form of apprenticeship learning that facilitates learning of long-horizon tasks even with sparse rewards, a property of many real-world problems. Taken together, the algorithmic changes and exploration-shaping strategies have allowed us to learn robust policies for the Stack task within a number of transitions that is feasible to collect on a real-robot system within a few days, or in significantly less time if multiple robots were used for training.
It is, of course, a challenge to judge the transfer of results in simulation to the real world. We have taken care to design a physically realistic simulation, and in initial experiments, which we have performed both in simulation and on the physical robot, we generally find a good correspondence of performance and learning speed between simulation and real world. This makes us optimistic that our performance numbers will also hold when going to the real world. A second caveat of our simulated setup is that it currently uses information about the state of the environment which, although not impossible to obtain on a real robot, may require additional instrumentation of the experimental setup, e.g. to determine the position of the two bricks in the workspace. To address this second issue we are currently focusing on end-to-end learning directly from raw visual information. Here, we have first results showing the feasibility of learning policies for grasping with a success rate of about 80% across different starting conditions. We view the algorithms and techniques presented here as an important step towards applying versatile deep reinforcement learning methods to real-robot dexterous manipulation with perception.
# APPENDIX

A. Reward function

In this section we provide further details regarding the reward functions described in section VI. For our experiments we derived these from the state vector of the simulation, but they could also be obtained through instrumentation in hardware. The reward functions are defined in terms of the following quantities:

• $b^{(1)}_z$: height of brick 1 above the table
• $s^{B1}_{\{x,y,z\}}$: x, y, z positions of a site located roughly in the center of brick 1
• $s^{B2}_{\{x,y,z\}}$: x, y, z positions of a site located just above brick 2, at the position where $s^{B1}$ will be located when brick 1 is stacked on top of brick 2
• $s^{P}_{\{x,y,z\}}$: x, y, z positions of the pinch site of the hand – roughly the position where the fingertips would meet if the fingers are closed
1) Sparse reward components: Using the above we can define the following conditions for the successful completion of subtasks:

a) Reach Brick 1: The pinch site of the fingers is within a virtual box around the first brick position:

$$\text{reach} = \left(|s^{B1}_x - s^{P}_x| < \Delta^{\text{reach}}_x\right) \land \left(|s^{B1}_y - s^{P}_y| < \Delta^{\text{reach}}_y\right) \land \left(|s^{B1}_z - s^{P}_z| < \Delta^{\text{reach}}_z\right),$$

where $\Delta^{\text{reach}}_{\{x,y,z\}}$ denote the half-lengths of the sides of the virtual box for reaching.

b) Grasp Brick 1: Brick 1 is located above the table surface by more than a threshold $\theta$, which is possible only if the brick has been lifted:

$$\text{grasp} = b^{(1)}_z > \theta.$$
c) Stack: Brick 1 is stacked on brick 2. This is expressed as a box constraint on the displacement between brick 1 and brick 2, measured in the coordinate system of brick 2:

$$\text{stack} = \left(|C^{(2)}_x(s^{B1} - s^{B2})| < \Delta^{\text{stack}}_x\right) \land \left(|C^{(2)}_y(s^{B1} - s^{B2})| < \Delta^{\text{stack}}_y\right) \land \left(|C^{(2)}_z(s^{B1} - s^{B2})| < \Delta^{\text{stack}}_z\right),$$

where $\Delta^{\text{stack}}_{\{x,y,z\}}$ denote the half-lengths of the sides of the virtual box for stacking, and $C^{(2)}$ is the rotation matrix that projects a vector into the coordinate system of brick 2. This projection is necessary since brick 2 is allowed to move freely; it ensures that the box constraint is considered relative to the pose of brick 2. While this criterion for a successful stack is quite complicated to express in terms of sites, it could easily be implemented in hardware, e.g. via a contact sensor attached to brick 2.
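For concreteness, the three predicates can be transcribed directly from the definitions above. The sketch below is ours; the site positions are 3-vectors, `delta_reach` and `delta_stack` are the half-length vectors, `C2` is the rotation matrix of brick 2, and the threshold values themselves are placeholders.

```python
import numpy as np

def reach(s_b1, s_p, delta_reach):
    """Pinch site lies inside the virtual box around brick 1."""
    return bool(np.all(np.abs(s_b1 - s_p) < delta_reach))

def grasp(b1_z, theta):
    """Brick 1 has been lifted above the table by more than theta."""
    return b1_z > theta

def stack(s_b1, s_b2, C2, delta_stack):
    """Brick displacement, in brick 2's frame, lies inside the stack box."""
    return bool(np.all(np.abs(C2 @ (s_b1 - s_b2)) < delta_stack))
```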
2) Shaping components: The full composite reward also includes two distance-based shaping components that guide the hand to brick 1 and then brick 1 to brick 2. These could be approximate and would be relatively simple to implement with a hardware visual system that can only roughly identify the centroid of an object. The shaping components of the reward are given as follows:

a) Reaching to brick 1:

$$r_{S1}(s^{B1}, s^{P}) = 1 - \tanh^2\!\left(w_1 \lVert s^{B1} - s^{P} \rVert_2\right).$$

b) Reaching to brick 2 for stacking:

$$r_{S2}(s^{B1}, s^{B2}) = 1 - \tanh^2\!\left(w_2 \lVert s^{B1} - s^{B2} \rVert_2\right).$$

3) Full reward: Using the above components, the reward functions from section VI (Stack, Grasp shaping, Reach and grasp shaping, and Full composite shaping) can be expressed as in equations (3, 4, 5, 6) below. These make use of the predicates above to determine which subtasks have been completed and return a reward accordingly.

Stack:

$$r(b^{(1)}_z, s^P, s^{B1}, s^{B2}) = \begin{cases} 1 & \text{if stack} \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$
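The two shaping terms $r_{S1}$ and $r_{S2}$ defined above share one bounded functional form; a direct NumPy transcription (ours), with the weights $w_1$ and $w_2$ left as unspecified tuning constants:

```python
import numpy as np

def shaping_term(a, b, w):
    """1 - tanh^2(w * ||a - b||_2): equals 1 at zero distance, decays to 0."""
    return 1.0 - np.tanh(w * np.linalg.norm(a - b)) ** 2

# r_S1 = shaping_term(s_b1, s_p, w1);  r_S2 = shaping_term(s_b1, s_b2, w2)
```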
Grasp shaping:

$$r(b^{(1)}_z, s^P, s^{B1}, s^{B2}) = \begin{cases} 1 & \text{if stack} \\ 0.25 & \text{if } \lnot\text{stack} \land \text{grasp} \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$

Reach and grasp shaping:

$$r(b^{(1)}_z, s^P, s^{B1}, s^{B2}) = \begin{cases} 1 & \text{if stack} \\ 0.25 & \text{if } \lnot\text{stack} \land \text{grasp} \\ 0.125 & \text{if } \lnot(\text{stack} \lor \text{grasp}) \land \text{reach} \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$

where each predicate is evaluated on the arguments $(b^{(1)}_z, s^P, s^{B1}, s^{B2})$.
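The staged rewards (4) and (5) then reduce to a short cascade of predicate checks. A sketch reusing the helper predicates above, with all threshold arguments passed explicitly; the function names are ours:

```python
def reward_grasp_shaping(b1_z, s_p, s_b1, s_b2, C2, theta, d_reach, d_stack):
    """Equation (4): sparse Stack reward plus an intermediate grasp bonus."""
    if stack(s_b1, s_b2, C2, d_stack):
        return 1.0
    if grasp(b1_z, theta):
        return 0.25
    return 0.0

def reward_reach_grasp_shaping(b1_z, s_p, s_b1, s_b2, C2, theta, d_reach, d_stack):
    """Equation (5): additionally rewards reaching brick 1."""
    if stack(s_b1, s_b2, C2, d_stack):
        return 1.0
    if grasp(b1_z, theta):
        return 0.25
    if reach(s_b1, s_p, d_reach):
        return 0.125
    return 0.0
```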
arXiv:1704.01696v1 [cs.CL] 6 Apr 2017

# A Syntactic Neural Model for General-Purpose Code Generation

Pengcheng Yin
Language Technologies Institute
Carnegie Mellon University
[email protected]

Graham Neubig
Language Technologies Institute
Carnegie Mellon University
[email protected]

# Abstract

We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
# Introduction

Every programmer has experienced the situation where they know what they want to do, but do not have the ability to turn it into a concrete implementation. For example, a Python programmer may want to "sort my list in descending order," but not be able to come up with the proper syntax sorted(my_list, reverse=True) to realize his intention. To resolve this impasse, it is common for programmers to search the web in natural language (NL), find an answer, and modify it into the desired form (Brandt et al., 2009, 2010). However, this is time-consuming, and thus the software engineering literature is ripe with methods to directly generate code from NL descriptions, mostly with hand-engineered methods highly tailored to specific programming languages (Balzer, 1985; Little and Miller, 2009; Gvero and Kuncak, 2015).
In parallel, the NLP community has developed methods for data-driven semantic parsing, which attempt to map NL to structured logical forms executable by computers. These logical forms can be general-purpose meaning representations (Clark and Curran, 2007; Banarescu et al., 2013), formalisms for querying knowledge bases (Tang and Mooney, 2001; Zettlemoyer and Collins, 2005; Berant et al., 2013) and instructions for robots or personal assistants (Artzi and Zettlemoyer, 2013; Quirk et al., 2015), among others. While these methods have the advantage of being learnable from data, compared to the programming languages (PLs) in use by programmers, the domain-specific languages targeted by these works have a schema and syntax that is relatively simple.

Recently, Ling et al. (2016) have proposed a data-driven code generation method for high-level, general-purpose PLs like Python and Java. This work treats code generation as a sequence-to-sequence modeling problem, and introduces methods to generate words from character-level models and copy variable names from input descriptions. However, unlike most work in semantic parsing, it does not consider the fact that code has to be a well-defined program in the target syntax.
In this work, we propose a data-driven syntax-based neural network model tailored for generation of general-purpose PLs like Python. In order to capture the strong underlying syntax of the PL, we define a model that transduces an NL statement into an Abstract Syntax Tree (AST; Fig. 1(a), § 2) for the target PL. ASTs can be deterministically generated for all well-formed programs using standard parsers provided by the PL, and thus give us a way to obtain syntax information with minimal engineering. Once we generate an AST, we can use deterministic generation tools to convert the AST into surface code. We hypothesize that such a structured approach has two benefits.
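The determinism of this parse-and-unparse pipeline is easy to see with Python's own standard library (this is standard-library behavior, not the paper's tooling; `ast.unparse` requires Python 3.9+):

```python
import ast

tree = ast.parse("sorted(my_list, reverse=True)", mode="eval")
print(ast.dump(tree))     # the AST, built from Call, Name, and keyword nodes
print(ast.unparse(tree))  # deterministic surface code: sorted(my_list, reverse=True)
```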
| Production Rule | Role | Explanation |
| --- | --- | --- |
| Call ↦ expr[func] expr*[args] keyword*[keywords] | Function Call | func: the function to be invoked; args: arguments list; keywords: keyword arguments list |
| If ↦ expr[test] stmt*[body] stmt*[orelse] | If Statement | test: condition expression; body: statements inside the If clause; orelse: elif or else statements |
| For ↦ expr[target] expr*[iter] stmt*[body] stmt*[orelse] | For Loop | target: iteration variable; iter: enumerable to iterate over; body: loop body; orelse: else statements |
| FunctionDef ↦ identifier[name] arguments*[args] stmt*[body] | Function Def. | name: function name; args: function arguments; body: function body |

Table 1: Example production rules for common Python statements (Python Software Foundation, 2016)
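These productions mirror the `_fields` attribute of the corresponding node classes in Python's `ast` module; note that the exact fields vary slightly across Python versions and the table omits a few of them:

```python
import ast

for node_type in (ast.Call, ast.If, ast.For, ast.FunctionDef):
    print(node_type.__name__, node_type._fields)
# On CPython 3.8+ this prints:
#   Call ('func', 'args', 'keywords')
#   If ('test', 'body', 'orelse')
#   For ('target', 'iter', 'body', 'orelse', 'type_comment')
#   FunctionDef ('name', 'args', 'body', 'decorator_list', 'returns', 'type_comment')
```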
First, we hypothesize that structure can be used to constrain our search space, ensuring generation of well-formed code. To this end, we propose a syntax-driven neural code generation model. The backbone of our approach is a grammar model (§ 3) which formalizes the generation story of a derivation AST into sequential application of actions that either apply production rules (§ 3.1), or emit terminal tokens (§ 3.2). The underlying syntax of the PL is therefore encoded in the grammar model a priori as the set of possible actions. Our approach frees the model from recovering the underlying grammar from limited training data, and instead enables the system to focus on learning the compositionality among existing grammar rules. Xiao et al. (2016) have noted that this imposition of structure on neural models is useful for semantic parsing, and we expect this to be even more important for general-purpose PLs where the syntax trees are larger and more complex.
Second, we hypothesize that structural information helps to model information flow within the neural network, which naturally reflects the recursive structure of PLs. To test this, we extend a standard recurrent neural network (RNN) decoder to allow for additional neural connections which reflect the recursive structure of an AST (§ 4.2). As an example, when expanding a node in Fig. 1(a), we make use of the information from both its parent and left sibling (the dashed rectangle). This enables us to locally pass information of relevant code segments via neural network connections, resulting in more confident predictions. Experiments (§ 5) on two Python code generation tasks show 11.7% and 9.3% absolute improvements in accuracy against the state-of-the-art system (Ling et al., 2016). Our model also gives competitive performance on a standard semantic parsing benchmark.

2 The Code Generation Problem

Given an NL description x, our task is to generate the code snippet c in a modern PL based on the intent of x. We attack this problem by first generating the underlying AST. We define a probabilistic grammar model of generating an AST y given x: p(y|x). The best-possible AST ˆy is then given by
ˆy = arg max_y p(y|x).   (1)

ˆy is then deterministically converted to the corresponding surface code c.1 While this paper uses examples from Python code, our method is PL-agnostic.

Before detailing our approach, we first present a brief introduction of the Python AST and its underlying grammar. The Python abstract grammar contains a set of production rules, and an AST is generated by applying several production rules composed of a head node and multiple child nodes. For instance, the first rule in Tab. 1 is used to generate the function call sorted(·) in Fig. 1(a). It consists of a head node of type Call, and three child nodes of type expr, expr* and keyword*, respectively. Labels of each node are noted within brackets. In an AST, non-terminal nodes sketch the general structure of the target code, while terminal nodes can be categorized into two types: operation terminals and variable terminals. Operation terminals correspond to basic arithmetic operations like AddOp. Variable terminal nodes store values for variables and constants of built-in data types2. For instance, all terminal nodes in Fig. 1(a) are variable terminal nodes.
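To make the Call production concrete (an illustration we add here, using the same example as Fig. 1), Python's ast module exposes exactly the head/child structure of the first rule in Tab. 1: a Call head node with func, args, and keywords children.

```python
import ast

node = ast.parse("sorted(my_list, reverse=True)", mode="eval").body
assert isinstance(node, ast.Call)           # head node of type Call

print(ast.dump(node.func))                  # expr[func]: Name(id='sorted', ...)
print([ast.dump(a) for a in node.args])     # expr*[args]: [Name(id='my_list', ...)]
print([k.arg for k in node.keywords])       # keyword*[keywords]: ['reverse']
```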
3 Grammar Model

Before detailing our neural code generation method, we first introduce the grammar model at its core. Our probabilistic grammar model defines the generative story of a derivation AST. We factorize the generation process of an AST into sequential application of actions of two types:

• APPLYRULE[r] applies a production rule r to the current derivation tree;

1We use the astor library to convert ASTs into Python code.
2bool, float, int, str.

[Figure 1 appears here: input “sort my_list in descending order”, code sorted(my_list, reverse=True); the tree and action-flow diagram are not recoverable from the text. The caption follows below.]
Figure 1: (a) the Abstract Syntax Tree (AST) for the given example code. Dashed nodes denote terminals. Nodes are labeled with the time steps during which they are generated. (b) the action sequence (up to t14) used to generate the AST in (a)

• GENTOKEN[v] populates a variable terminal node by appending a terminal token v.

Fig. 1(b) shows the generation process of the target AST in Fig. 1(a). Each node in Fig. 1(b) indicates an action. Action nodes are connected by solid arrows which depict the chronological order of the action flow. The generation proceeds in depth-first, left-to-right order (dotted arrows represent parent feeding, explained in § 4.2.1). As an example, in Fig. 1(b), the rule Call ↦ expr... expands the frontier node Call at time step t4, and its three child nodes of type expr, expr* and keyword* are added to the derivation. APPLYRULE actions grow the derivation AST by appending nodes. When a variable terminal node (e.g., str) is added to the derivation and becomes the frontier node, the grammar model then switches to GENTOKEN actions to populate the variable terminal with tokens.
Formally, under our grammar model, the probability of generating an AST y is factorized as:

p(y|x) = ∏_{t=1}^{T} p(at|x, a<t),   (2)

where at is the action taken at time step t, and a<t is the sequence of actions before t. We will explain how to compute Eq. (2) in § 4. Put simply, the generation process begins from a root node at t0 and proceeds by the model choosing APPLYRULE actions to generate the overall program structure from a closed set of grammar rules; then, at leaves of the tree corresponding to variable terminals, the model switches to GENTOKEN actions to generate variables or constants from the open set. We describe this process in detail below.

3.1 APPLYRULE Actions

APPLYRULE actions generate program structure, expanding the current node (the frontier node at time step t: nft) in a depth-first, left-to-right traversal of the tree. Given a fixed set of production rules, APPLYRULE chooses a rule r from the subset that has a head matching the type of nft, and uses r to expand nft by appending all child nodes specified by the selected production.
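The factorization in Eq. (2) presupposes a canonical linearization of the AST. As an illustration (our simplified sketch, emitting one GENTOKEN per terminal value rather than per sub-token), a Python AST can be unrolled into APPLYRULE/GENTOKEN actions in depth-first, left-to-right order:

```python
import ast

def oracle_actions(code):
    """Linearize the AST of `code` into APPLYRULE/GENTOKEN actions,
    in the depth-first, left-to-right order used by the grammar model."""
    actions = []

    def visit(n):
        if isinstance(n, ast.AST):
            # APPLYRULE: the production is the head node type plus its child fields.
            actions.append(("APPLYRULE", type(n).__name__,
                            [f for f, _ in ast.iter_fields(n)]))
            for _, value in ast.iter_fields(n):
                visit(value)
        elif isinstance(n, list):
            for child in n:
                visit(child)
        elif n is not None:
            # GENTOKEN: fill a variable terminal token, then close it with </n>.
            actions.append(("GENTOKEN", str(n)))
            actions.append(("GENTOKEN", "</n>"))

    visit(ast.parse(code))
    return actions

for action in oracle_actions("sorted(my_list, reverse=True)")[:10]:
    print(action)
```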
Unary Closure Sometimes, generating an AST requires applying a chain of unary productions. For instance, it takes three time steps (t9–t11) to generate the sub-structure expr* ↦ expr ↦ Name ↦ str in Fig. 1(a). This can be effectively reduced to one APPLYRULE action by taking the closure of the chain of unary productions and merging them into a single rule: expr* ↦* str. Unary closures reduce the number of actions needed, but potentially increase the size of the grammar. In our experiments we tested our model both with and without unary closures (§ 5).
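A small sketch of the closure idea (ours, over a hypothetical rule representation that keeps one production per head symbol): chains of unary productions are pre-merged before training.

```python
def unary_closure(rules):
    """Merge chains of unary productions (head -> single child) into single rules.
    `rules` maps a head symbol to its child symbols; this single-production
    representation is a simplification for illustration only."""
    merged = {}
    for head, children in rules.items():
        while len(children) == 1 and children[0] in rules:
            children = rules[children[0]]   # follow the unary chain downward
        merged[head] = children
    return merged

rules = {"expr*": ["expr"], "expr": ["Name"], "Name": ["str"]}
print(unary_closure(rules))  # {'expr*': ['str'], 'expr': ['str'], 'Name': ['str']}
```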
# 3.2 GENTOKEN Actions

Once we reach a frontier node nft that corresponds to a variable type (e.g., str), GENTOKEN actions are used to fill this node with values. For general-purpose PLs like Python, variables and constants have values with one or multiple tokens. For instance, a node that stores the name of a function (e.g., sorted) has a single token, while a node that denotes a string constant (e.g., a=‘hello world’) could have multiple tokens. Our model copes with both scenarios by firing GENTOKEN actions at one or more time steps. At each time step, GENTOKEN appends one terminal token to the current frontier variable node. A special </n> token is used to “close” the node. The grammar model then proceeds to the new frontier node.

Terminal tokens can be generated from a predefined vocabulary, or be directly copied from the input NL. This is motivated by the observation that the input description often contains out-of-vocabulary (OOV) variable names or literal values that are directly used in the target code. For instance, in our running example the variable name my list can be directly copied from the input at t12. We give implementation details in § 4.2.2.

# 4 Estimating Action Probabilities

We estimate action probabilities in Eq. (2) using attentional neural encoder-decoder models with an information flow structured by the syntax trees.

# 4.1 Encoder

For an NL description x consisting of n words {wi}, the encoder computes a context-sensitive embedding hi for each wi using a bidirectional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997), similar to the setting in (Bahdanau et al., 2014). See supplementary materials for detailed equations.
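A minimal encoder sketch in PyTorch (our illustration; the paper specifies only a BiLSTM over word embeddings, with the dimensions reported later in § 5.2):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Bidirectional LSTM encoder: words -> context-sensitive embeddings h_i."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # hidden_dim // 2 units per direction, so each h_i has hidden_dim units total
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)

    def forward(self, word_ids):              # (batch, n)
        h, _ = self.lstm(self.embed(word_ids))
        return h                               # (batch, n, hidden_dim)

enc = Encoder(vocab_size=5000)
h = enc(torch.randint(0, 5000, (1, 5)))        # e.g., a 5-word NL description
print(h.shape)                                 # torch.Size([1, 5, 256])
```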
# 4.2 Decoder

The decoder uses an RNN to model the sequential generation process of an AST defined as Eq. (2). Each action step in the grammar model naturally grounds to a time step in the decoder RNN. Therefore, the action sequence in Fig. 1(b) can be interpreted as unrolling RNN time steps, with solid arrows indicating RNN connections. The RNN maintains an internal state to track the generation process (§ 4.2.1), which will then be used to compute action probabilities p(at|x, a<t) (§ 4.2.2).

4.2.1 Tracking Generation States

Our implementation of the decoder resembles a vanilla LSTM, with additional neural connections (parent feeding, Fig. 1(b)) to reflect the topological structure of an AST. The decoder’s internal hidden state at time step t, st, is given by:
st = fLSTM([at−1 : ct : pt : nft], st−1),   (3)

where fLSTM(·) is the LSTM update function and [:] denotes vector concatenation. st will then be used to compute the action probabilities p(at|x, a<t) in Eq. (2). Here, at−1 is the embedding of the previous action, and ct is a context vector retrieved from the input encodings {hi} via soft attention.

[Figure 2: Illustration of a decoder time step (t = 9). The diagram is not recoverable from the text; it depicted the ApplyRule/GenToken pathways, the embedding of the frontier node type expr*, and attention over the input “sort my_list in descending order”.]
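A sketch of the state update in Eq. (3) (ours; PyTorch, with placeholder vectors), showing the four concatenated inputs feeding an LSTM cell:

```python
import torch
import torch.nn as nn

embed_dim, ctx_dim, hidden_dim, type_dim = 128, 256, 256, 64

# Decoder cell input: [a_{t-1} : c_t : p_t : n_{f_t}]
cell = nn.LSTMCell(embed_dim + ctx_dim + (hidden_dim + embed_dim) + type_dim,
                   hidden_dim)

a_prev = torch.zeros(1, embed_dim)             # embedding of the previous action
c_t = torch.zeros(1, ctx_dim)                  # context vector from soft attention
p_t = torch.zeros(1, hidden_dim + embed_dim)   # parent state s_{p_t} and action a_{p_t}
n_ft = torch.zeros(1, type_dim)                # frontier node type embedding

s_prev = (torch.zeros(1, hidden_dim), torch.zeros(1, hidden_dim))
s_t = cell(torch.cat([a_prev, c_t, p_t, n_ft], dim=-1), s_prev)
print(s_t[0].shape)                            # torch.Size([1, 256])
```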
pt is a vector that encodes the information of the parent action, and nft denotes the node type embedding of the current frontier node nft.3 Intuitively, feeding the decoder the information of nft helps the model keep track of the frontier node to expand.

Action Embedding at We maintain two action embedding matrices, WR and WG. Each row in WR (WG) corresponds to the embedding vector of an action APPLYRULE[r] (GENTOKEN[v]).

Context Vector ct The decoder RNN uses soft attention to retrieve a context vector ct from the input encodings {hi} pertaining to the prediction of the current action. We follow Bahdanau et al. (2014) and use a Deep Neural Network (DNN) with a single hidden layer to compute attention weights.

Parent Feeding pt Our decoder RNN uses additional neural connections to directly pass information from parent actions. For instance, when computing s9, the information from its parent action step t4 will be used. Formally, we define the parent action step pt as the time step at which the frontier node nft is generated.
The parent information pt is modeled from two sources: (1) the hidden state of the parent action, spt, and (2) the embedding of the parent action, apt; pt is the concatenation of the two. The parent feeding schema enables the model to utilize the information of parent code segments to make more confident predictions. Similar approaches of injecting parent information were also explored in the SEQ2TREE model in Dong and Lapata (2016)4.

3We maintain an embedding for each node type.
4SEQ2TREE generates tree-structured outputs by conditioning on the hidden states of parent non-terminals, while our parent feeding uses the states of parent actions.

# 4.2.2 Calculating Action Probabilities

In this section we explain how the action probabilities p(at|x, a<t) are computed based on st.

APPLYRULE The probability of applying rule r as the current action at is given by a softmax5:

p(at = APPLYRULE[r]|x, a<t) = softmax(WR · g(st))⊤ · e(r)   (4)

where g(·) is a non-linearity tanh(W · st + b), and e(r) is the one-hot vector for rule r.

5We do not show bias terms for all softmax equations.
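A numpy sketch of Eq. (4) (ours; shapes are placeholders and weights are random):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

n_rules, hidden_dim, g_dim = 100, 256, 50
rng = np.random.default_rng(0)
W, b = rng.normal(size=(g_dim, hidden_dim)), np.zeros(g_dim)
W_R = rng.normal(size=(n_rules, g_dim))

s_t = rng.normal(size=hidden_dim)
g = np.tanh(W @ s_t + b)        # g(s_t)
p_rules = softmax(W_R @ g)      # distribution over all productions

r = 7                           # index of a candidate rule
p_apply_r = p_rules[r]          # softmax(W_R g(s_t))^T e(r)
print(p_apply_r)
```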
GENTOKEN As in § 3.2, a token v can be generated from a predefined vocabulary or copied from the input, defined as the marginal probability:

p(at = GENTOKEN[v]|x, a<t) = p(gen|x, a<t) p(v|gen, x, a<t) + p(copy|x, a<t) p(v|copy, x, a<t).

The selection probabilities p(gen|·) and p(copy|·) are given by softmax(WS · st). The probability of generating v from the vocabulary, p(v|gen, x, a<t), is defined similarly to Eq. (4), except that we use the GENTOKEN embedding matrix WG, and we concatenate the context vector ct with st as input. To model the copy probability, we follow recent advances in modeling copying mechanisms in neural networks (Gu et al., 2016; Jia and Liang, 2016; Ling et al., 2016), and use a pointer network (Vinyals et al., 2015) to compute the probability of copying the i-th word from the input by attending to the input representations {hi}:

p(wi|copy, x, a<t) = exp(ω(hi, st, ct)) / Σ_{i′=1}^{n} exp(ω(hi′, st, ct)),

where ω(·) is a DNN with a single hidden layer.
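Continuing the sketch (ours), the GENTOKEN marginal mixes a vocabulary softmax with the pointer distribution; for an OOV word like my_list the vocabulary term contributes zero:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

rng = np.random.default_rng(1)
vocab_size, n_words = 2000, 5

sel = softmax(rng.normal(size=2))              # [p(gen), p(copy)] = softmax(W_S s_t)
p_vocab = softmax(rng.normal(size=vocab_size)) # p(v | gen): softmax over the vocabulary
copy_scores = rng.normal(size=n_words)         # omega(h_i, s_t, c_t) per input word
p_copy = softmax(copy_scores)                  # pointer distribution over input positions

# Marginal probability that GENTOKEN emits "my_list", assuming it is
# input word 1 and out of vocabulary (so the vocabulary term is zero).
p_mylist = sel[0] * 0.0 + sel[1] * p_copy[1]
print(p_mylist)
```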
Specifically, if wi is an OOV word (e.g., my list, which is represented by a special <unk> token in encoding), we directly copy the actual word wi to the derivation.

4.3 Training and Inference

Given a dataset of pairs of NL descriptions xi and code snippets ci, we parse ci into its AST yi and decompose yi into a sequence of oracle actions under the grammar model. The model is then optimized by maximizing the log-likelihood of the oracle action sequence. At inference time, we use beam search to approximate the best AST ˆy in Eq. (1). See supplementary materials for the pseudo-code of the inference algorithm.
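The training objective then reduces to a sum of per-action log-losses; a minimal sketch (ours, with placeholder probabilities):

```python
import numpy as np

def example_loss(action_probs):
    """Negative log-likelihood of an oracle action sequence.
    `action_probs` holds the model probability p(a_t | x, a_<t) assigned
    to each gold action; the values below are placeholders for illustration."""
    return -np.sum(np.log(action_probs))

# e.g., probabilities the model assigned to the gold actions of one example
print(example_loss(np.array([0.9, 0.7, 0.8, 0.95])))
```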
| | HS | DJANGO | IFTTT |
|---|---|---|---|
| Train | 533 | 16,000 | 77,495 |
| Development | 66 | 1,000 | 5,171 |
| Test | 66 | 1,805 | 758 |
| Avg. tokens in description | 39.1 | 14.3 | 7.4 |
| Avg. characters in code | 360.3 | 41.1 | 62.2 |
| Avg. size of AST (# nodes) | 136.6 | 17.2 | 7.0 |
| w/o unary closure: # productions | 100 | 222 | 1009 |
| w/o unary closure: # node types | 61 | 96 | 828 |
| w/o unary closure: terminal vocabulary size | 1361 | 6733 | 0 |
| w/o unary closure: avg. # actions per example | 173.4 | 20.3 | 5.0 |
| w/ unary closure: # productions | 100 | 237 | – |
| w/ unary closure: # node types | 57 | 92 | – |
| w/ unary closure: avg. # actions per example | 141.7 | 16.4 | – |

Table 2: Statistics of datasets and associated grammars

# 5 Experimental Evaluation

# 5.1 Datasets and Metrics
HEARTHSTONE (HS) dataset (Ling et al., 2016) is a collection of Python classes that implement cards for the card game HearthStone. Each card comes with a set of fields (e.g., name, cost, and description), which we concatenate to create the input sequence. This dataset is relatively difficult: input descriptions are short, while the target code is in complex class structures, with each AST having 137 nodes on average.

DJANGO dataset (Oda et al., 2015) is a collection of lines of code from the Django web framework, each with a manually annotated NL description. Compared with the HS dataset where card implementations are somewhat homogenous, examples in DJANGO are more diverse, spanning a wide variety of real-world use cases like string manipulation, IO operations, and exception handling.

IFTTT dataset (Quirk et al., 2015) is a domain-specific benchmark that provides an interesting side comparison.
Different from HS and DJANGO, which are in a general-purpose PL, programs in IFTTT are written in a domain-specific language used by the IFTTT task automation App. Users of the App write simple instructions (e.g., If Instagram.AnyNewPhotoByYou Then Dropbox.AddFileFromURL) with NL descriptions (e.g., “Autosave your Instagram photos to Dropbox”). Each statement inside the If or Then clause consists of a channel (e.g., Dropbox) and a function (e.g., AddFileFromURL)6. This
6Like Beltagy and Quirk (2016), we strip function parameters since they are mostly specific to users.

simple structure results in much more concise ASTs (7 nodes on average). Because all examples are created by ordinary App users, the dataset is highly noisy, with input NL very loosely connected to target ASTs. The authors thus provide a high-quality filtered test set, where each example is verified by at least three annotators. We use this set for evaluation. Also note that IFTTT’s grammar has more productions (Tab. 2), but this does not imply that its grammar is more complex. This is because for HS and DJANGO terminal tokens are generated by GENTOKEN actions, but for IFTTT, all the code is generated directly by APPLYRULE actions.

Metrics As is standard in semantic parsing, we measure accuracy, the fraction of correctly generated examples. However, because generating an exact match for complex code structures is non-trivial, we follow Ling et al. (2016) and use token-level BLEU-4 as a secondary metric, defined as the averaged BLEU score over all examples.7

7These two metrics are not ideal: accuracy only measures exact match and thus lacks the ability to give credit to semantically correct code that is different from the reference, while it is not clear whether BLEU provides an appropriate proxy for measuring semantics in the code generation task. A more intriguing metric would directly measure semantic/functional code equivalence, for which we present a pilot study at the end of this section (cf. Error Analysis). We leave exploring more sophisticated metrics (e.g., based on static code analysis) as future work.
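A sketch of both metrics (ours, using NLTK's BLEU implementation with naive whitespace tokenization):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def evaluate(references, hypotheses):
    """Exact-match accuracy and averaged token-level BLEU-4 over code strings."""
    smooth = SmoothingFunction().method3
    acc = sum(r == h for r, h in zip(references, hypotheses)) / len(references)
    bleu = sum(sentence_bleu([r.split()], h.split(), smoothing_function=smooth)
               for r, h in zip(references, hypotheses)) / len(references)
    return acc, bleu

refs = ["sorted ( my_list , reverse = True )"]
hyps = ["sorted ( my_list )"]
print(evaluate(refs, hyps))   # accuracy 0.0, but partial BLEU credit
```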
5.2 Setup

Preprocessing All input descriptions are tokenized using NLTK. We perform simple canonicalization for DJANGO, such as replacing quoted strings in the inputs with placeholders (a sketch follows at the end of this subsection). See supplementary materials for details. We extract unary closures whose frequency is larger than a threshold k (k = 30 for HS and 50 for DJANGO).

Configuration The size of all embeddings is 128, except for node type embeddings, which is 64. The dimensions of RNN states and hidden layers are 256 and 50, respectively. Since our datasets are relatively small for a data-hungry neural model, we impose strong regularization using recurrent dropouts (Gal and Ghahramani, 2016), together with standard dropout layers added to the inputs and outputs of the decoder RNN. We validate the dropout probability from {0, 0.2, 0.3, 0.4}. For decoding, we use a beam size of 15.
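A sketch of the quoted-string canonicalization mentioned under Preprocessing (ours; the paper defers the exact details to its supplementary materials):

```python
import re

def canonicalize(description):
    """Replace quoted strings in an NL description with indexed placeholders,
    returning the canonical text and a map for restoring the originals."""
    slot_map = {}

    def repl(match):
        slot = f"_STR_{len(slot_map)}_"
        slot_map[slot] = match.group(0)
        return slot

    canonical = re.sub(r"'[^']*'|\"[^\"]*\"", repl, description)
    return canonical, slot_map

text = "substitute 'locale' for localedir"
print(canonicalize(text))  # ("substitute _STR_0_ for localedir", {'_STR_0_': "'locale'"})
```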
5.3 Results

Evaluation results for Python code generation tasks are listed in Tab. 3. Numbers for our systems are averaged over three runs.
| | HS ACC | HS BLEU | DJANGO ACC | DJANGO BLEU |
|---|---|---|---|---|
| Retrieval System† | 0.0 | 62.5 | 14.7 | 18.6 |
| Phrasal Statistical MT† | 0.0 | 34.1 | 31.5 | 47.6 |
| Hierarchical Statistical MT† | 0.0 | 43.2 | 9.5 | 35.9 |
| NMT | 1.5 | 60.4 | 45.1 | 63.4 |
| SEQ2TREE | 1.5 | 53.4 | 28.9 | 44.6 |
| SEQ2TREE–UNK | 13.6 | 62.8 | 39.4 | 58.2 |
| LPN† | 4.5 | 65.6 | 62.3 | 77.6 |
| Our system | 16.2 | 75.8 | 71.6 | 84.5 |
| Ablation Study | | | | |
| – frontier embed. | 16.7 | 75.8 | 70.7 | 83.8 |
| – parent feed. | 10.6 | 75.7 | 71.5 | 84.3 |
| – copy terminals | 3.0 | 65.7 | 32.3 | 61.7 |
| + unary closure | – | – | 70.3 | 83.3 |
| – unary closure | 10.1 | 74.8 | – | – |

Table 3: Results on two Python code generation tasks. †Results previously reported in Ling et al. (2016).
We compare primarily with two approaches: (1) the Latent Predictor Network (LPN), a state-of-the-art sequence-to-sequence code generation model (Ling et al., 2016), and (2) SEQ2TREE, a neural semantic parsing model (Dong and Lapata, 2016). SEQ2TREE generates trees one node at a time, and the target grammar is not explicitly modeled a priori, but implicitly learned from data. We test both the original SEQ2TREE model released by the authors and our revised one (SEQ2TREE–UNK) that uses unknown word replacement to handle rare words (Luong et al., 2015). For completeness, we also compare with a strong neural machine translation (NMT) system (Neubig, 2015) using a standard encoder-decoder architecture with attention and unknown word replacement8, and include numbers from other baselines used in Ling et al. (2016). On the HS dataset, which has relatively large ASTs, we use unary closure for our model and SEQ2TREE, and for DJANGO we do not.
System Comparison As in Tab. 3, our model registers 11.7% and 9.3% absolute improvements over LPN in accuracy on HS and DJANGO. This boost in performance strongly indicates the importance of modeling grammar in code generation. Among the baselines, we find LPN outperforms the others in most cases. We also note that SEQ2TREE achieves a decent accuracy of 13.6% on HS, which is due to the effect of unknown word replacement, since we only achieved 1.5% without it. A closer
8For NMT, we also attempted to find the best-scoring syntactically correct predictions in the size-5 beam, but this did not yield a significant improvement over the NMT results in Tab. 3.

[Figure 3: Performance w.r.t. reference AST size on DJANGO. The plot (BLEU and accuracy curves over reference AST size, # nodes) is not recoverable from the text.]

[Figure 4: Performance w.r.t. reference AST size on HS. The plot (BLEU and accuracy curves over reference AST size, # nodes) is not recoverable from the text.]
comparison with SEQ2TREE is insightful for understanding the advantage of our syntax-driven approach, since both SEQ2TREE and our system output ASTs: (1) SEQ2TREE predicts one node at each time step, and requires additional “dummy” nodes to mark the boundary of a subtree. The sheer number of nodes in target ASTs makes the prediction process error-prone. In contrast, the APPLYRULE actions of our grammar model allow for generating multiple nodes at a single time step. Empirically, we found that in HS, SEQ2TREE takes more than 300 time steps on average to generate a target AST, while our model takes only 170 steps. (2) SEQ2TREE does not directly use productions in the grammar, which possibly leads to grammatically incorrect ASTs and thus empty code outputs. We observe that the ratios of grammatically incorrect ASTs predicted by SEQ2TREE on HS and DJANGO are 21.2% and 10.9%, respectively, while our system guarantees grammaticality.
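As an aside (our sketch, a code-level proxy for the AST-level check): such grammaticality ratios can be measured by attempting to parse each prediction with the target PL's own parser.

```python
import ast

def ungrammatical_ratio(predictions):
    """Fraction of predicted code strings that fail to parse as Python."""
    failures = 0
    for code in predictions:
        try:
            ast.parse(code)
        except SyntaxError:
            failures += 1
    return failures / len(predictions)

preds = ["sorted(my_list, reverse=True)", "sorted(my_list,,)"]
print(ungrammatical_ratio(preds))   # 0.5
```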
Ablation Study We also ablated our best-performing models to analyze the contribution of each component. “–frontier embed.” removes the frontier node embedding nft from the decoder RNN inputs (Eq. (3)). This yields worse results on DJANGO while giving slight improvements in accuracy on HS. This is probably because the grammar of HS has fewer node types, and thus the RNN is able to keep track of nft without depending on its embedding. Next, “–parent feed.” removes the parent feeding mechanism. The accuracy drops significantly on HS, with a marginal deterioration on DJANGO. This result is interesting because it suggests that parent feeding is more important when the ASTs are larger, which will be the case when handling more complicated code generation tasks like HS. Finally, removing the pointer network (“–copy terminals”) in GENTOKEN

| | CHANNEL | FULL TREE |
|---|---|---|
| Classical Methods | | |
| posclass (Quirk et al., 2015) | 81.4 | 71.0 |
| LR (Beltagy and Quirk, 2016) | 88.8 | 82.5 |
| Neural Network Methods | | |
| NMT | … | … |
The results with and without unary closure demonstrate that, interestingly, it is effective on HS but not on DJANGO. We conjecture that this is because on HS it significantly reduces the number of actions from 173 to 142 (cf. Tab. 2), with the number of productions in the grammar remaining unchanged. In contrast, DJANGO has a broader domain, and thus unary closure results in more productions in the grammar (237 for DJANGO vs. 100 for HS), increasing sparsity.

Performance by the size of AST We further investigate our model’s performance w.r.t. the size of the gold-standard ASTs in Figs. 3 and 4. Not surprisingly, the performance drops when the size of the reference ASTs increases. Additionally, on the HS dataset, the BLEU score still remains at around 50 even when the size of ASTs grows to 200, indicating that our proposed syntax-driven approach is robust for long code segments.

Domain Specific Code Generation Although this is not the focus of our work, evaluation on IFTTT
brings us closer to a standard semantic parsing setting, which helps to investigate similarities and differences between generation of more complicated general-purpose code and more limited-domain, simpler code. Tab. 4 shows the results, following the evaluation protocol in (Beltagy and Quirk, 2016) for accuracies at both channel and full parse tree (channel + function) levels. Our full model performs on par with existing neural network-based methods, while outperforming other neural models in full tree accuracy (82.0%). This score is close to the best classical method (LR), which is based on a logistic regression
Table 5 (example outputs):

input: <name> Brawl </name> <cost> 5 </cost> <desc> Destroy all minions except one (chosen randomly) </desc> <rarity> Epic </rarity> ...

pred.:
    class Brawl(SpellCard):
        def __init__(self):
            super().__init__('Brawl', 5, CHARACTER_CLASS.WARRIOR, CARD_RARITY.EPIC)
        def use(self, player, game):
            super().use(player, game)
            targets = copy.copy(game.other_player.minions)
            targets.extend(player.minions)
            for minion in targets:          # code block A
                minion.die(self)

ref.:
            minions = copy.copy(player.minions)
            minions.extend(game.other_player.minions)
            if len(minions) > 1:            # code block B
                survivor = game.random_choice(minions)
                for minion in minions:
                    if minion is not survivor:
                        minion.die(self)

input: join app_config.path and string 'locale' into a file path, substitute it for localedir.
pred.: localedir = os.path.join(app_config.path, 'locale') ✓

input: self.plural is a lambda function with an argument n, which returns result of boolean expression n not equal to integer 1
model with rich hand-engineered features (e.g., brown clusters and paraphrase). Also note that the gap between NMT and the other neural models is much smaller compared with the results in Tab. 3. This suggests that general-purpose code generation is more challenging than the simpler IFTTT setting, and therefore modeling structural information is more helpful.

Case Studies We present output examples in Tab. 5. On HS, we observe that most of the time our model gives correct predictions by filling learned code templates from training data with arguments (e.g., cost) copied from the input. However, we do find interesting examples indicating that the model learns to generalize beyond trivial copying. For instance, the first example is one that our model predicted wrong: it generated code block A instead of the gold B (it also missed a function definition not shown here). However, we find that block A actually conveys part of the input intent by destroying all, not some, of the minions. Since we are unable to find code block A in the training data, it is clear that the model has learned to generalize to some extent from multiple training card examples
how to populate the arguments by copying from inputs. The second example illustrates the difficulty of generating code with complex nested structures like lambda functions, a scenario worth further investigation in future studies. More examples are attached in the supplementary materials.

Error Analysis To understand the sources of errors and how good our evaluation metric (exact match) is, we randomly sampled and labeled 100 and 50 failed examples (with accuracy = 0) from DJANGO and HS, respectively. We found that around 2% of these examples in the two datasets are actually semantically equivalent. These examples include: (1) using different parameter names when defining a function; (2) omitting (or adding) default values of parameters in function calls. While the rarity of such examples suggests that our exact match metric is reasonable, more advanced evaluation metrics based on statistical code analysis are an intriguing direction for future work.
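To make the two equivalence cases concrete, here is a constructed illustration (the function and variable names are hypothetical, not drawn from the datasets) of predictions that exact match scores as 0 despite identical behavior:

```python
# Case (1): the parameter is renamed, so the strings differ,
# but the two definitions behave identically.
def blankout(src, char):        # reference
    return char * len(src)

def blankout(text, ch):         # prediction: exact match = 0, same semantics
    return ch * len(text)

# Case (2): a default value is written out in one call and omitted
# in the other; both calls compute the same result.
print(sorted([3, 1, 2], reverse=False))  # reference
print(sorted([3, 1, 2]))                 # prediction
```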
1704.01696#40
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
41
For DJANGO, we found that 30% of failed cases were due to errors where the pointer network failed to appropriately copy a variable name into the correct position. 25% were because the generated code only partially implemented the required functionality. 10% and 5% of errors were due to malformed English inputs and preprocessing errors, respectively. The remaining 30% of examples were errors stemming from multiple sources, or errors that could not be easily categorized into the above. For HS, we found that all failed card examples were due to partial implementation errors, such as the one shown in Table 5.

# 6 Related Work
1704.01696#41
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
42
# 6 Related Work

Code Generation and Analysis Most existing works on code generation focus on generating code for domain-specific languages (DSLs) (Kushman and Barzilay, 2013; Raza et al., 2015; Manshadi et al., 2013), with neural network-based approaches recently explored (Parisotto et al., 2016; Balog et al., 2016). For general-purpose code generation, besides the general framework of Ling et al. (2016), existing methods often use language- and task-specific rules and strategies (Lei et al., 2013; Raghothaman et al., 2016). A similar line of work uses NL queries for code retrieval (Wei et al., 2015; Allamanis et al., 2015). The reverse task of generating NL summaries from source code has also been explored (Oda et al., 2015; Iyer et al., 2016). Finally, there are probabilistic models of
1704.01696#42
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
43
source code (Maddison and Tarlow, 2014; Nguyen et al., 2013). The most relevant work is Allamanis et al. (2015), which uses a factorized model to measure semantic relatedness between NL and ASTs for code retrieval, while our model tackles the more challenging generation task.

Semantic Parsing Our work is related to the general topic of semantic parsing, where the target logical forms can be viewed as DSLs. The parsing process is often guided by grammatical formalisms like combinatory categorial grammars (Kwiatkowski et al., 2013; Artzi et al., 2015), dependency-based syntax (Liang et al., 2011; Pasupat and Liang, 2015) or task-specific formalisms (Clarke et al., 2010; Yih et al., 2015; Krishnamurthy et al., 2016; Misra et al., 2015; Mei et al., 2016). Recently, there have been efforts in designing neural network-based semantic parsers (Misra and Artzi, 2016; Dong and Lapata, 2016; Neelakantan et al., 2016; Yin et al., 2016). Several approaches have been proposed to utilize grammar knowledge in a
1704.01696#43
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
44
Lapata, 2016; Neelakantan et al., 2016; Yin et al., 2016). Several approaches have been proposed to utilize grammar knowledge in a neural parser, such as augmenting the training data by generating examples guided by the grammar (Kočiský et al., 2016; Jia and Liang, 2016). Liang et al. (2016) used a neural decoder which constrains the space of next valid tokens in the query language for question answering. Finally, the structured prediction approach proposed by Xiao et al. (2016) is closely related to our model in using the underlying grammar as prior knowledge to constrain the generation process of derivation trees, while our method is based on a unified grammar model which jointly captures production rule application and terminal symbol generation, and scales to general-purpose code generation tasks.
1704.01696#44
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
45
# 7 Conclusion

This paper proposes a syntax-driven neural code generation approach that generates an abstract syntax tree by sequentially applying actions from a grammar model. Experiments on both code generation and semantic parsing tasks demonstrate the effectiveness of our proposed approach.

# Acknowledgment

We are grateful to Wang Ling for his generous help with LPN and setting up the benchmark. We also thank Li Dong for helping with SEQ2TREE and insightful discussions.

# References

Miltiadis Allamanis, Daniel Tarlow, Andrew D. Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In Proceedings of ICML, volume 37.

David Alvarez-Melis and Tommi S. Jaakkola. 2017. Tree-structured decoding with doubly recurrent neural networks. In Proceedings of ICLR.

Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of EMNLP.

Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the ACL 1(1).
1704.01696#45
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
46
Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the ACL 1(1).

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473.

Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. 2016. DeepCoder: Learning to write programs. CoRR abs/1611.01989.

Robert Balzer. 1985. A 15 year perspective on automatic programming. IEEE Trans. Software Eng. 11(11).

Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, LAW-ID@ACL.

I. Beltagy and Chris Quirk. 2016. Improved semantic parsers for if-then statements. In Proceedings of ACL.
1704.01696#46
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
47
I. Beltagy and Chris Quirk. 2016. Improved semantic parsers for if-then statements. In Proceedings of ACL.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP.

Joel Brandt, Mira Dontcheva, Marcos Weskamp, and Scott R. Klemmer. 2010. Example-centric programming: integrating web search into the development environment. In Proceedings of CHI.

Joel Brandt, Philip J. Guo, Joel Lewenstein, Mira Dontcheva, and Scott R. Klemmer. 2009. Two studies of opportunistic programming: interleaving web foraging, learning, and writing code. In Proceedings of CHI.

Stephen Clark and James R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics 33(4).

James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In Proceedings of CoNLL.

Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL.
1704.01696#47
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
48
Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL.

Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Proceedings of NIPS.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL.

Tihomir Gvero and Viktor Kuncak. 2015. Interactive synthesis using free-form queries. In Proceedings of ICSE.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8).

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of ACL.

Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of ACL.

Tomáš Kočiský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of EMNLP.
1704.01696#48
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
49
Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi. 2016. Semantic parsing to probabilistic programs for situated question answering. In Proceedings of EMNLP.

Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceedings of NAACL.

Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke S. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of EMNLP.

Tao Lei, Fan Long, Regina Barzilay, and Martin C. Rinard. 2013. From natural language specifications to program input parsers. In Proceedings of ACL.

Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. CoRR abs/1611.00020.

Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of ACL.
1704.01696#49
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
50
Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of ACL.

Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of ACL.

Greg Little and Robert C. Miller. 2009. Keyword programming in Java. Autom. Softw. Eng. 16(1).

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL.

Chris J. Maddison and Daniel Tarlow. 2014. Structured generative models of natural source code. In Proceedings of ICML, volume 32.

Mehdi Hafezi Manshadi, Daniel Gildea, and James F. Allen. 2013. Integrating programming by example and natural language programming. In Proceedings of AAAI.

Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of AAAI.
1704.01696#50
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
51
Dipendra K. Misra and Yoav Artzi. 2016. Neural shift-reduce CCG semantic parsing. In Proceedings of EMNLP.

Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of ACL.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of ICLR.

Graham Neubig. 2015. lamtram: A toolkit for language and translation modeling using neural networks. http://www.github.com/neubig/lamtram.

Tung Thanh Nguyen, Anh Tuan Nguyen, Hoan Anh Nguyen, and Tien N. Nguyen. 2013. A statistical semantic language model for source code. In Proceedings of ACM SIGSOFT.

Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation (T). In Proceedings of ASE.
1704.01696#51
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
52
Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, and Pushmeet Kohli. 2016. Neuro-symbolic program synthesis. CoRR abs/1611.01855.

Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of ACL.

Python Software Foundation. 2016. Python abstract grammar. https://docs.python.org/2/library/ast.html.

Chris Quirk, Raymond J. Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of ACL.

Mukund Raghothaman, Yi Wei, and Youssef Hamadi. 2016. SWIM: synthesizing what I mean: code search and idiomatic snippet synthesis. In Proceedings of ICSE.

Mohammad Raza, Sumit Gulwani, and Natasa Milic-Frayling. 2015. Compositional program synthesis from natural language and examples. In Proceedings of IJCAI.

Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proceedings of ECML.
1704.01696#52
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
53
Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proceedings of ECML.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of NIPS.

Yi Wei, Nirupama Chandrasekaran, Sumit Gulwani, and Youssef Hamadi. 2015. Building Bing developer assistant. Technical report. https://www.microsoft.com/en-us/research/publication/building-bing-developer-assistant/.

Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of ACL.

Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of ACL.

Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2016. Neural enquirer: Learning to query tables in natural language. In Proceedings of IJCAI.
1704.01696#53
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
54
Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of UAI.

# Supplementary Materials

# A Encoder LSTM Equations

Suppose the input natural language description $x$ consists of $n$ words $\{w_i\}_{i=1}^{n}$. Let $\mathbf{w}_i$ denote the embedding of $w_i$. We use two LSTMs to process $x$ in forward and backward order, and get the sequences of hidden states $\{\overrightarrow{\mathbf{h}}_i\}_{i=1}^{n}$ and $\{\overleftarrow{\mathbf{h}}_i\}_{i=1}^{n}$ in the two directions:

$$\overrightarrow{\mathbf{h}}_i = f_{\mathrm{LSTM}}^{\rightarrow}(\mathbf{w}_i, \overrightarrow{\mathbf{h}}_{i-1}), \qquad \overleftarrow{\mathbf{h}}_i = f_{\mathrm{LSTM}}^{\leftarrow}(\mathbf{w}_i, \overleftarrow{\mathbf{h}}_{i+1}),$$

where $f_{\mathrm{LSTM}}^{\rightarrow}$ and $f_{\mathrm{LSTM}}^{\leftarrow}$ are standard LSTM update functions. The representation of the $i$-th word, $\mathbf{h}_i$, is given by concatenating $\overrightarrow{\mathbf{h}}_i$ and $\overleftarrow{\mathbf{h}}_i$.

# B Inference Algorithm

Given an NL description, we approximate the best AST ŷ in Eq. 1 using beam search. The inference procedure is listed in Algorithm 1.
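A minimal sketch of such a bidirectional encoder, written in PyTorch for illustration (the class name, dimensions, and toy input below are our own choices, not the paper's implementation):

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Maps word ids to per-word representations h_i, obtained by
    concatenating forward and backward LSTM hidden states."""

    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True runs forward and backward LSTMs and
        # concatenates their states along the feature dimension.
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, word_ids):
        embeddings = self.embedding(word_ids)   # (batch, n, embed_dim)
        outputs, _ = self.lstm(embeddings)      # (batch, n, 2 * hidden_dim)
        return outputs                          # outputs[:, i] is h_i

# Toy usage: one 5-word description from a 1000-word vocabulary.
encoder = BiLSTMEncoder(vocab_size=1000, embed_dim=128, hidden_dim=256)
h = encoder(torch.randint(0, 1000, (1, 5)))     # shape: (1, 5, 512)
```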
1704.01696#54
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
55
# B Inference Algorithm

Given an NL description, we approximate the best AST ŷ in Eq. 1 using beam search. The inference procedure is listed in Algorithm 1. We maintain a beam of size K. The beam is initialized with one hypothesis AST with a single root node (line 2). At each time step, the decoder enumerates over all hypotheses in the beam. For each hypothesis AST, we first find its frontier node n_ft (line 6). If n_ft is a non-terminal node, we collect all syntax rules r with n_ft as the head node into the actions set (line 10). If n_ft is a variable terminal node, we add all terminal tokens in the vocabulary and the input description as candidate actions (line 13). We apply each candidate action on the current hypothesis AST to generate a new hypothesis (line 15). We then rank all newly generated hypotheses and keep the top-K scored ones in the beam. A complete hypothesis AST is generated when it has no frontier node. We then convert the top-scored complete AST into the surface code (lines 18-19).

We remark that our inference algorithm can be implemented efficiently by expanding multiple hypotheses (lines 5-16) simultaneously using mini-batching on GPU.

# C Dataset Preprocessing
1704.01696#55
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
57
quoted string literals (e.g., verbose name is a string 'cache entry'). We therefore replace quoted strings with indexed placeholders using regular expressions. After decoding, we run a post-processing step to replace all placeholders with their actual values. (2) For descriptions with cascading variable references (e.g., call method self.makekey), we additionally append the tokens of the whole variable name, separated by '.' (e.g., append self and makekey after self.makekey). This gives the pointer network the flexibility to copy either partial or whole variable names.

Generate Oracle Action Sequence To train our model, we generate the gold-standard action sequence from reference code. For IFTTT, we simply parse the officially provided ASTs into sequences of APPLYRULE actions. For HS and DJANGO, we first convert the Python code into ASTs using the standard ast module. Values inside variable terminal nodes are tokenized by space and camel case (e.g., ClassName is tokenized to Class and Name). We then traverse the AST in pre-order to generate the reference action sequence according to the grammar model; a sketch of this conversion is given below.

# D Additional Decoding Examples
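The following sketch illustrates the oracle generation described in Appendix C, under simplifying assumptions: the two-action inventory, the '</n>' end-of-tokens marker, and the tokenizer regex are our own stand-ins for the paper's richer grammar model.

```python
import ast
import re

def tokenize_value(value):
    """Split a terminal value by space and camel case,
    e.g. 'ClassName' -> ['Class', 'Name']."""
    tokens = []
    for piece in str(value).split():
        tokens.extend(re.findall(r'[A-Z][a-z0-9]*|[a-z0-9]+', piece) or [piece])
    return tokens

def oracle_actions(node):
    """Pre-order traversal of a Python AST: emit an APPLYRULE action for
    each non-terminal, and GENTOKEN actions for terminal values."""
    actions = [('APPLYRULE', type(node).__name__)]
    for _, child in ast.iter_fields(node):
        children = child if isinstance(child, list) else [child]
        for c in children:
            if isinstance(c, ast.AST):
                actions.extend(oracle_actions(c))        # recurse pre-order
            elif c is not None:                          # terminal value
                for token in tokenize_value(c):
                    actions.append(('GENTOKEN', token))
                actions.append(('GENTOKEN', '</n>'))     # hypothetical end marker
    return actions

tree = ast.parse("localedir = os.path.join(app_config.path, 'locale')")
for action in oracle_actions(tree):
    print(action)
```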
1704.01696#57
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
58
# D Additional Decoding Examples

We provide extra decoding examples from the DJANGO and HS datasets, listed in Table 6 and Table 7, respectively. The model relies heavily on the pointer network to copy variable names and constants from input descriptions. We find that the sources of errors in DJANGO are more diverse, with most incorrect examples resulting from missing arguments and incorrect words copied by the pointer network. Errors in HS are mostly due to partially or incorrectly implemented effects. Also note that the first example in Table 6 is semantically correct, although it was considered incorrect under our exact-match metric. This again suggests that a more advanced evaluation metric that takes execution results into account would be worthwhile in future studies.
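The copying behavior referred to above boils down to an attention distribution over input positions. The sketch below shows one common way such copy probabilities can be computed (additive pointer-style attention with illustrative names and toy dimensions; this is not the paper's exact parameterization):

```python
import numpy as np

def copy_distribution(decoder_state, encoder_states, W_dec, W_enc, v):
    """Pointer-style copy probabilities: score each input position with
    additive attention, then normalize with a softmax. The position with
    the highest probability is the token copied into the output."""
    scores = np.array([
        v @ np.tanh(W_dec @ decoder_state + W_enc @ h)  # one score per word
        for h in encoder_states
    ])
    exp = np.exp(scores - scores.max())                 # numerically stable softmax
    return exp / exp.sum()

# Toy usage with random parameters: 5 input words, hidden size 8.
rng = np.random.default_rng(0)
d = 8
probs = copy_distribution(
    rng.standard_normal(d),           # current decoder state
    rng.standard_normal((5, d)),      # encoder states h_1 .. h_5
    rng.standard_normal((d, d)),      # W_dec
    rng.standard_normal((d, d)),      # W_enc
    rng.standard_normal(d),           # v
)
print(probs, probs.argmax())          # distribution and the copied position
```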
1704.01696#58
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
59
Algorithm 1: Inference Algorithm
Input: NL description x
Output: code snippet c
1:  call Encoder to encode x
2:  Q = {y_0(root)}                                ▷ Initialize a beam of size K
3:  for time step t do
4:      Q' = ∅
5:      foreach hypothesis y_t ∈ Q do
6:          n_ft = FrontierNode(y_t)
7:          A = ∅                                  ▷ Initialize the set of candidate actions
8:          if n_ft is non-terminal then
9:              foreach production rule r with n_ft as the head node do
10:                 A = A ∪ {APPLYRULE[r]}          ▷ APPLYRULE actions for non-terminal nodes
11:         else
12:             foreach terminal token v do
13:                 A = A ∪ {GENTOKEN[v]}           ▷ GENTOKEN actions for variable terminal nodes
14:         foreach action a_t ∈ A do
15:             y'_t = ApplyAction(y_t, a_t)
16:             Q' = Q' ∪ {y'_t}
17:     Q = top-K scored hypotheses in Q'
18: ŷ = top-scored complete hypothesis AST
19: convert ŷ to surface code c
20: return c
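A minimal Python sketch of this procedure, under stated assumptions: the hypothesis objects (with frontier(), apply(), and to_source() methods), the score function, and the grammar_rules mapping are placeholders of our own design, not the paper's implementation.

```python
import heapq

def beam_search_decode(encode, score, grammar_rules, terminal_vocab,
                       root_hypothesis, nl_description, K=15, max_steps=100):
    """Beam search mirroring Algorithm 1. `score(hyp)` returns the model's
    log-probability of a (partial) AST; `hyp.frontier()` returns the next
    node to expand, or None when the hypothesis is complete."""
    encode(nl_description)                               # line 1
    beam = [root_hypothesis]                             # line 2: beam of size K
    completed = []
    for _ in range(max_steps):                           # line 3
        candidates = []                                  # line 4: Q'
        for hyp in beam:                                 # line 5
            frontier = hyp.frontier()                    # line 6
            if frontier is None:                         # no frontier node: complete
                completed.append(hyp)
                continue
            if frontier.is_nonterminal:                  # lines 8-10: APPLYRULE
                actions = [('APPLYRULE', r)
                           for r in grammar_rules[frontier.type]]
            else:                                        # lines 11-13: GENTOKEN
                actions = [('GENTOKEN', v)
                           for v in terminal_vocab + nl_description.split()]
            for action in actions:                       # lines 14-16
                candidates.append(hyp.apply(action))
        if not candidates:
            break
        beam = heapq.nlargest(K, candidates, key=score)  # line 17: keep top-K
    best = max(completed, key=score) if completed else max(beam, key=score)
    return best.to_source()                              # lines 18-19
```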
1704.01696#59
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
60
input  for every i in range of integers from 0 to length of result, not included
pred.  for i in range(0, len(result)): ✓
ref.   for i in range(len(result)):

input  call the function blankout with 2 arguments: t.contents and 'B', write the result to out.
pred.  out.write(blankout(t.contents, 'B')) ✓
ref.   out.write(blankout(t.contents, 'B'))

pred.  code_list.append(foreground[v]) ✓
ref.   code_list.append(foreground[v])

input  zip elements of inner_result and inner_args into a list of tuples, for every i_item and i_args in the result
pred.  for i_item, i_args in zip(inner_result, inner_args): ✓
ref.   for i_item, i_args in zip(inner_result, inner_args):

input  activate is a lambda function which returns None for any argument x.
pred.  activate = lambda x: None ✓
ref.   activate = lambda x: None

input  if elt is an instance of Choice or NonCapture classes
pred.  if isinstance(elt, Choice): ✗
ref.   if isinstance(elt, (Choice, NonCapture)):

input  get translation function attribute of the object t, call the result with
1704.01696#60
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
61
pred.  if isinstance(elt, Choice): ✗
ref.   if isinstance(elt, (Choice, NonCapture)):

input  get translation function attribute of the object t, call the result with an argument eol_message, substitute the result for result.
pred.  translation_function = getattr(t, translation_function) ✗
ref.   result = getattr(t, translation_function)(eol_message)

input  for every s in strings, call the function force_text with an argument s, join the results in a string, return the result.
pred.  return ''.join(force_text(s)) ✗
ref.   return ''.join(force_text(s) for s in strings)

input  for every p in parts without the first element
pred.  for p in p[1:]: ✗
ref.   for p in parts[1:]:

input  call the function get_language, split the result by '-', substitute the first element of the result for base_lang.
pred.  base_lang = get_language().split()[0] ✗
ref.   base_lang = get_language().split('-')[0]
1704.01696#61
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
62
Table 6: Predicted examples from the DJANGO dataset. Copied contents (copy probability > 0.9) are highlighted.

input  <name> Burly Rockjaw Trogg </name> <cost> 5 </cost> <attack> 3 </attack> <defense> 5 </defense> <desc> Whenever your opponent casts a spell, gain 2 Attack. </desc> <rarity> Common </rarity> ...
pred.  class BurlyRockjawTrogg(MinionCard):
           def __init__(self):
               super().__init__('Burly Rockjaw Trogg', 4, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON)
           def create_minion(self, player):
               return Minion(3, 5, effects=[Effect(SpellCast(player=EnemyPlayer()), ActionTag(Give(ChangeAttack(2)), SelfSelector()))]) ✓

input  <name> Maexxna </name> <cost> 6 </cost> <attack> 2 </attack> <defense> 8 </defense> <desc> Destroy any minion damaged by this minion. </desc> <rarity> Legendary </rarity> ...
pred.  class Maexxna(MinionCard):
           def __init__(self):
1704.01696#62
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
63
pred.  class Maexxna(MinionCard):
           def __init__(self):
               super().__init__('Maexxna', 6, CHARACTER_CLASS.ALL, CARD_RARITY.LEGENDARY, minion_type=MINION_TYPE.BEAST)
           def create_minion(self, player):
               return Minion(2, 8, effects=[Effect(DidDamage(), ActionTag(Kill(), TargetSelector(IsMinion())))]) ✓

input  <name> Hellfire </name> <cost> 4 </cost> <attack> -1 </attack> <defense> -1 </defense> <desc> Deal 3
1704.01696#63
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]
1704.01696
64
input  <name> Hellfire </name> <cost> 4 </cost> <attack> -1 </attack> <defense> -1 </defense> <desc> Deal 3 damage to ALL characters. </desc> <rarity> Free </rarity> ...
pred.  class Hellfire(SpellCard):
           def __init__(self):
               super().__init__('Hellfire', 4, CHARACTER_CLASS.WARLOCK, CARD_RARITY.FREE)
           def use(self, player, game):
               super().use(player, game)
               for minion in copy.copy(game.other_player.minions):
                   minion.damage(player.effective_spell_damage(3), self) ✗
ref.   class Hellfire(SpellCard):
           def __init__(self):
               super().__init__('Hellfire', 4, CHARACTER_CLASS.WARLOCK, CARD_RARITY.FREE)
           def use(self, player, game):
               super().use(player, game)
               targets = copy.copy(game.other_player.minions)
               targets.extend(game.current_player.minions)
               targets.append(game.other_player.hero)
               targets.append(game.current_player.hero)
               for minion in targets:
                   minion.damage(player.effective_spell_damage(3), self)
1704.01696#64
A Syntactic Neural Model for General-Purpose Code Generation
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
http://arxiv.org/pdf/1704.01696
Pengcheng Yin, Graham Neubig
cs.CL, cs.PL, cs.SE
To appear in ACL 2017
null
cs.CL
20170406
20170406
[]