# Learning to Optimize
Because each hyperparameter setting corresponds to a particular instantiation of an optimization algorithm, these methods can be viewed as a way to search over different instantiations of the same optimization algorithm. The proposed method, on the other hand, can search over the space of all possible optimization algorithms. In addition, when presented with a new objective function, hyperparameter optimization needs to conduct multiple trials with different hyperparameter settings to find the optimal hyperparameters. In contrast, once training is complete, the autonomous algorithm knows how to choose hyperparameters on-the-fly without needing to try different hyperparameter settings, even when presented with an objective function that it has not seen during training. To the best of our knowledge, the proposed method represents the first attempt to learn a better algorithm automatically.

# 3 Method

# 3.1 Preliminaries
In the reinforcement learning setting, the learner is given a choice of actions to take in each time step, which changes the state of the environment in an unknown fashion, and receives feedback based on the consequence of the action. The feedback is typically given in the form of a reward or cost, and the objective of the learner is to choose a sequence of actions based on observations of the current environment that maximizes cumulative reward or minimizes cumulative cost over all time steps. A reinforcement learning problem is typically formally represented as a Markov decision process (MDP).
We consider a finite-horizon MDP with continuous state and action spaces defined by the tuple (S, A, p0, p, c, γ), where S is the set of states, A is the set of actions, p0 : S → R+ is the probability density over initial states, p : S × A × S → R+ is the transition probability density, that is, the conditional probability density over successor states given the current state and action, c : S → R is a function that maps state to cost and γ ∈ (0, 1] is the discount factor. The objective is to learn a stochastic policy π
: S × A → R+, which is a conditional probability density over actions given the current state, such that the expected cumulative cost is minimized. That is,

π* = argmin_π E_{s0, a0, s1, ..., sT} [ Σ_{t=0}^{T} γ^t c(st) ],

where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density

q(s0, a0, s1, ..., sT) = p0(s0) ∏_{t=0}^{T-1} π(at | st) p(st+1 | st, at).

This problem of
finding the cost-minimizing policy is known as the policy search problem. To enable generalization to unseen states, the policy is typically parameterized and minimization is performed over representable policies. Solving this problem exactly is intractable in all but selected special cases. Therefore, policy search methods generally tackle this problem by solving it approximately. In many practical settings, p, which characterizes the dynamics, is unknown and must therefore be estimated. Additionally, because it is often equally important to minimize cost at earlier and later time steps, we will henceforth focus on the undiscounted setting, i.e. the setting where γ = 1. Guided policy search [17] is a method for performing policy search in continuous state and action spaces under possibly unknown dynamics. It works by alternating between computing a target distribution over trajectories that is encouraged to minimize cost and agree with the current policy, and learning parameters of the policy in a standard supervised fashion so that sample trajectories from executing the policy are close to sample trajectories drawn from the target distribution. The target trajectory distribution is computed by iteratively fitting local time-varying linear and quadratic approximations to the (estimated) dynamics and cost respectively and optimizing over a restricted class of linear-Gaussian policies subject to a trust region constraint, which can be solved efficiently in closed form using a dynamic programming algorithm known as linear-quadratic-Gaussian (LQG).
We refer interested readers to [17] for details.

# 3.2 Formulation

Consider the general structure of an algorithm for unconstrained continuous optimization, which is outlined in Algorithm 1. Starting from a random location in the domain of the objective function, the algorithm iteratively updates the current location by a step vector computed from some functional π of the objective function, the current location and past locations.

# Algorithm 1 General structure of optimization algorithms

Require: Objective function f
x(0) ← random point in the domain of f
for i = 1, 2, . . . do
    Δx ← π(f, {x(0), . . . , x(i-1)})
    if stopping condition is met then
        return x(i-1)
    end if
    x(i) ← x(i-1) + Δx
end for
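To make the loop concrete, the following is a minimal Python sketch of Algorithm 1. The interface (a callable `pi` that receives the objective, its gradient and the history of iterates, plus the norm-based stopping test) is an illustrative assumption, not part of the paper.

```python
import numpy as np

def run_optimizer(f, grad_f, pi, x0, max_iters=100, tol=1e-8):
    """Generic first-order optimizer loop in the spirit of Algorithm 1.

    f      : objective function, maps x -> scalar
    grad_f : gradient of f, maps x -> vector
    pi     : update functional; maps (f, grad_f, history of iterates)
             to the next step vector delta_x
    x0     : starting point in the domain of f
    """
    history = [np.asarray(x0, dtype=float)]
    for _ in range(max_iters):
        delta_x = pi(f, grad_f, history)
        if np.linalg.norm(delta_x) < tol:   # illustrative stopping condition
            return history[-1]
        history.append(history[-1] + delta_x)
    return history[-1]
```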
This framework subsumes all existing optimization algorithms. Different optimization algorithms differ in the choice of π. First-order methods use a π that depends only on the gradient of the objective function, whereas second-order methods use a π that depends on both the gradient and the Hessian of the objective function. In particular, the following choice of π yields the gradient descent method:

π(f, {x(0), . . . , x(i-1)}) = -γ∇f(x(i-1)),
where γ denotes the step size or learning rate. Similarly, the following choice of π yields the gradient descent method with momentum:

π(f, {x(0), . . . , x(i-1)}) = -γ Σ_{j=0}^{i-1} α^{i-1-j} ∇f(x(j)),

where γ again denotes the step size and α denotes the momentum decay factor. Therefore, if we can learn π, we will be able to learn an optimization algorithm. Since it is difficult to model general functionals, in practice, we restrict the dependence of π on the objective function f to objective values and gradients evaluated at current and past locations. Hence, π can be simply modelled as a function from the objective values and gradients along the trajectory taken by the optimizer so far to the next step vector. We observe that the execution of an optimization algorithm can be viewed as the execution of a fixed policy in an MDP: the state consists of the current location and the objective values and gradients evaluated at the current and past locations, the action is the step vector that is used to update the current location, and the transition probability is partially characterized by the location update formula, x(i) ← x(i-1) + Δx. The policy that is executed corresponds precisely to the choice of π used by the optimization algorithm. For this reason, we will also use π to denote the policy at hand. Under this formulation, searching over policies corresponds to searching over all possible first-order optimization algorithms.
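As a sketch under the loop interface assumed earlier, the two hand-engineered choices of π above can be written as plug-in update rules; the step size and decay values are placeholders, not settings from the paper.

```python
import numpy as np

def pi_gradient_descent(gamma=0.1):
    """pi(f, {x(0), ..., x(i-1)}) = -gamma * grad f(x(i-1))."""
    def pi(f, grad_f, history):
        return -gamma * grad_f(history[-1])
    return pi

def pi_momentum(gamma=0.1, alpha=0.9):
    """pi(f, {x(0), ..., x(i-1)}) = -gamma * sum_j alpha^(i-1-j) grad f(x(j))."""
    def pi(f, grad_f, history):
        i = len(history)
        step = np.zeros_like(history[-1])
        for j, x in enumerate(history):
            step += alpha ** (i - 1 - j) * grad_f(x)
        return -gamma * step
    return pi
```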
We can use reinforcement learning to learn the policy π. To do so, we need to define the cost function, which should penalize policies that exhibit undesirable behaviours during their execution. Since the performance metric of interest for optimization algorithms is the speed of convergence, the cost function should penalize policies that converge slowly. To this end, assuming the goal is to minimize the objective function, we define the cost at a state to be the objective value at the current location. This encourages the policy to reach the minimum of the objective function as quickly as possible. Since the policy π
may be stochastic in general, we model each dimension of the action conditional on the state as an independent Gaussian whose mean is given by a regression model and whose variance is some learned constant. We choose to parameterize the mean of π using a neural net, due to its appealing properties as a universal function approximator and its strong empirical performance in a variety of applications. We use guided policy search to learn the parameters of the policy. We use a training set consisting of different randomly generated objective functions. We evaluate the resulting autonomous algorithm on different objective functions drawn from the same distribution.
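A minimal sketch of this policy parameterization; `mean_net` stands in for the regression model and `log_var` for the learned per-dimension variance, both assumed names rather than the authors' code.

```python
import numpy as np

def sample_step(mean_net, log_var, state, rng=None):
    """Sample an update step from the Gaussian policy.

    Each action dimension is an independent Gaussian whose mean is produced
    by a regression model (mean_net) and whose variance is a learned constant
    per dimension (exp(log_var)).
    """
    rng = rng or np.random.default_rng()
    mean = np.asarray(mean_net(state))
    std = np.exp(0.5 * np.asarray(log_var))
    return mean + std * rng.standard_normal(mean.shape)
```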
# 3.3 Discussion

An autonomous optimization algorithm offers several advantages over hand-engineered algorithms. First, an autonomous optimizer is trained on real algorithm execution data, whereas hand-engineered optimizers are typically derived by analyzing objective functions with properties that may or may not be satisfied by objective functions that arise in practice. Hence, an autonomous optimizer minimizes the amount of a priori assumptions made about objective functions and can instead take full advantage of the information about the actual objective functions of interest. Second, an autonomous optimizer has no hyperparameters that need to be tuned by the user.
Instead of just computing a step direction which must then be combined with a user-specified step size, an autonomous optimizer predicts the step direction and size jointly. This allows the autonomous optimizer to dynamically adjust the step size based on the information it has acquired about the objective function while performing the optimization. Finally, when an autonomous optimizer is trained on a particular class of objective functions, it may be able to discover hidden structure in the geometry of the class of objective functions. At test time, it can then exploit this knowledge to perform optimization faster.
# Implementation Details

We store the current location, previous gradients and improvements in the objective value from previous iterations in the state. We keep track of only the information pertaining to the previous H time steps and use H = 25 in our experiments. More specifically, the dimensions of the state space encode the following information:

- Current location in the domain
- Change in the objective value at the current location relative to the objective value at the ith most recent location for all i ∈ {2, . . . , H + 1}
- Gradient of the objective function evaluated at the ith most recent location for all i ∈ {2, . . . , H + 1}

Initially, we set the dimensions corresponding to historical information to zero. The current location is only used to compute the cost; because the policy should not depend on the absolute coordinates of the current location, we exclude it from the input that is fed into the neural net. We use a small neural net to model the policy. Its architecture consists of a single hidden layer with 50 hidden units. Softplus activation units are used in the hidden layer and linear activation units are used in the output layer.
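The following sketch assembles the observation vector and the small policy network described above (H = 25, one hidden layer of 50 softplus units); the exact packing order of the features is an assumption.

```python
import numpy as np

H = 25  # history length used in the experiments

def build_observation(obj_deltas, past_grads, dim):
    """Pack H objective-value changes and H past gradients; zero-pad missing
    history. The current location is deliberately excluded from the input."""
    deltas = np.zeros(H)
    k = min(len(obj_deltas), H)
    deltas[:k] = obj_deltas[:k]
    grads = np.zeros((H, dim))
    k = min(len(past_grads), H)
    if k:
        grads[:k] = np.asarray(past_grads[:k]).reshape(k, dim)
    return np.concatenate([deltas, grads.ravel()])

def init_policy_net(input_dim, output_dim, hidden=50, seed=0):
    """Single hidden layer of 50 softplus units, linear output layer."""
    rng = np.random.default_rng(seed)
    return {
        "W1": 0.1 * rng.standard_normal((hidden, input_dim)),
        "b1": np.zeros(hidden),
        "W2": 0.1 * rng.standard_normal((output_dim, hidden)),
        "b2": np.zeros(output_dim),
    }

def policy_mean(params, obs):
    hidden = np.logaddexp(0.0, params["W1"] @ obs + params["b1"])  # softplus
    return params["W2"] @ hidden + params["b2"]                    # linear output
```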
The training objective imposed by guided policy search takes the form of the squared Mahalanobis distance between mean predicted and target actions, along with other terms dependent on the variance of the policy. We also regularize the entropy of the policy to encourage deterministic actions conditioned on the state. The coefficient on the regularizer increases gradually in later iterations of guided policy search. We initialize the weights of the neural net randomly and do not regularize the magnitude of weights. Initially, we set the target trajectory distribution so that the mean action given state at each time step matches the step vector used by the gradient descent method with momentum. We choose the best settings of the step size and momentum decay factor for each objective function in the training set by performing a grid search over hyperparameters and running noiseless gradient descent with momentum for each hyperparameter setting. For training, we sample 20 trajectories with a length of 40 time steps for each objective function in the training set. After each iteration of guided policy search, we sample new trajectories from the new distribution and discard the trajectories from the preceding iteration.
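A sketch of the per-function grid search used to pick the initialization targets; the candidate grids are placeholders, not values from the paper.

```python
import numpy as np
from itertools import product

def best_momentum_setting(f, grad_f, x0, steps=40,
                          step_sizes=(1e-3, 1e-2, 1e-1),
                          decays=(0.5, 0.9, 0.95)):
    """Return the (step size, momentum decay) pair that reaches the lowest
    objective value after running noiseless gradient descent with momentum."""
    best = None
    for gamma, alpha in product(step_sizes, decays):
        x = np.asarray(x0, dtype=float)
        v = np.zeros_like(x)
        for _ in range(steps):
            v = alpha * v + grad_f(x)   # accumulate decayed gradients
            x = x - gamma * v
        score = f(x)
        if best is None or score < best[0]:
            best = (score, gamma, alpha)
    return best[1], best[2]
```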
# 4 Experiments

We learn autonomous optimization algorithms for various convex and non-convex classes of objective functions that correspond to loss functions for different machine learning models. We first learn an autonomous optimizer for logistic regression, which induces a convex loss function. We then learn an autonomous optimizer for robust linear regression using the Geman-McClure M-estimator, whose loss function is non-convex. Finally, we learn an autonomous optimizer for a two-layer neural net classifier with ReLU activation units, whose error surface has even more complex geometry.

# 4.1 Logistic Regression

We consider a logistic regression model with an ℓ2 regularizer on the weight vector. Training the model requires optimizing the following objective:
min_{w,b}  -(1/n) Σ_{i=1}^{n} [ y_i log σ(w^T x_i + b) + (1 - y_i) log(1 - σ(w^T x_i + b)) ] + (λ/2) ||w||_2^2,

where w ∈ R^d and b ∈ R denote the weight vector and bias respectively, x_i ∈ R^d and y_i ∈ {0, 1} denote the feature vector and label of the ith instance, λ denotes the coefficient on the regularizer and σ(z) := 1/(1 + e^{-z}). For our experiments, we choose λ = 0.0005 and d = 3. This objective is convex in w and b. We train an autonomous algorithm that learns to optimize objectives of this form. The training set consists of examples of such objective functions whose free variables, which in this case are x_i and y_i, are all assigned concrete values. Hence, each objective function in the training set corresponds to a logistic regression problem on a different dataset. To construct the training set, we randomly generate a dataset of 100 instances for each function in the training set. The instances are drawn randomly from two multivariate Gaussians with random means and covariances, with half drawn from each. Instances from the same Gaussian are assigned the same label and instances from different Gaussians are assigned different labels.
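A sketch of one such training problem: a two-Gaussian dataset and its regularized logistic-regression objective. The way random covariances are drawn here is an assumption; the paper only states that means and covariances are random.

```python
import numpy as np

def make_logreg_problem(n=100, d=3, lam=5e-4, seed=0):
    """Return (X, y, objective) for one randomly generated problem."""
    rng = np.random.default_rng(seed)
    means = 3.0 * rng.standard_normal((2, d))
    covs = []
    for _ in range(2):
        A = rng.standard_normal((d, d))
        covs.append(A @ A.T + np.eye(d))          # random PSD covariance
    X = np.vstack([rng.multivariate_normal(means[k], covs[k], n // 2)
                   for k in range(2)])
    y = np.repeat([0.0, 1.0], n // 2)              # one label per Gaussian

    def objective(params):                         # params = [w (d dims), b]
        w, b = params[:d], params[d]
        z = X @ w + b
        log_sig = -np.logaddexp(0.0, -z)           # log sigma(z)
        log_one_minus = -np.logaddexp(0.0, z)      # log(1 - sigma(z))
        nll = -np.mean(y * log_sig + (1.0 - y) * log_one_minus)
        return nll + 0.5 * lam * np.dot(w, w)

    return X, y, objective
```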
We train the autonomous algorithm on a set of 90 objective functions. We evaluate it on a test set of 100 random objective functions generated using the same procedure and compare to popular hand-engineered algorithms, such as gradient descent, momentum, conjugate gradient and L-BFGS. All baselines are run with the best hyperparameter settings tuned on the training set. For each algorithm and objective function in the test set, we compute the difference between the objective value achieved by a given algorithm and that achieved by the best of the competing
algorithms at every iteration, a quantity we will refer to as "the margin of victory". This quantity is positive when the current algorithm is better than all other algorithms and negative otherwise. In Figure 1a, we plot the mean margin of victory of each algorithm at each iteration averaged over all objective functions in the test set.

Figure 1: (a) Mean margin of victory of each algorithm for optimizing the logistic regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.
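A sketch of the margin-of-victory computation just defined; it assumes every algorithm's trace has the same number of iterations.

```python
import numpy as np

def margin_of_victory(traces):
    """traces: dict mapping algorithm name -> per-iteration objective values,
    all recorded on the same objective function and of equal length.

    Returns, per algorithm, the best competing objective value minus its own
    value at each iteration (positive means the algorithm is currently best).
    """
    names = list(traces)
    stacked = np.stack([np.asarray(traces[n], dtype=float) for n in names])
    margins = {}
    for k, name in enumerate(names):
        others = np.delete(stacked, k, axis=0)
        margins[name] = others.min(axis=0) - stacked[k]
    return margins
```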
We find that conjugate gradient and L-BFGS diverge or oscillate in rare cases (on 6% of the objective functions in the test set), even though the autonomous algorithm, gradient descent and momentum do not. To reflect the performance of these baselines in the majority of cases, we exclude the offending objective functions when computing the mean margin of victory. As shown, the autonomous algorithm outperforms gradient descent, momentum and conjugate gradient at almost every iteration. The margin of victory of the autonomous algorithm is quite high in early iterations, indicating that the autonomous algorithm converges much faster than other algorithms. It is interesting to note that despite having seen only trajectories of length 40 at training time, the autonomous algorithm is able to generalize to much longer time horizons at test time. L-BFGS converges to slightly better optima than the autonomous algorithm and the momentum method. This is not surprising, as the objective functions are convex and L-BFGS is known to be a very good optimizer for convex optimization problems. We show the performance of each algorithm on two objective functions from the test set in Figures 1b and 1c. In Figure 1b, the autonomous algorithm converges faster than all other algorithms. In Figure 1c, the autonomous algorithm initially converges faster than all other algorithms but is later overtaken by L-BFGS, while remaining faster than all other optimizers. However, it eventually achieves the same objective value as L-BFGS, while the objective values achieved by gradient descent and momentum remain much higher.

# 4.2 Robust Linear Regression

Next, we consider the problem of linear regression using a robust loss function. One way to ensure robustness is to use an M-estimator for parameter estimation. A popular choice is the Geman-McClure estimator, which induces the following objective:
min_{w,b}  Σ_{i=1}^{n} (y_i - w^T x_i - b)^2 / (c^2 + (y_i - w^T x_i - b)^2),

where w ∈ R^d and b ∈ R denote the weight vector and bias respectively, x_i ∈ R^d and y_i ∈ R denote the feature vector and label of the ith instance and c ∈ R is a constant that modulates the shape of the loss function. For our experiments, we use c = 1 and d = 3. This loss function is not convex in either w or b. As with the preceding section, each objective function in the training set is a function of the above form with realized values for x_i and y_i. The dataset for each objective function is generated by drawing 25 random samples from each one of four multivariate Gaussians, each of which has a random mean and the identity covariance matrix. For all points drawn from the same Gaussian, their labels are generated by projecting them along the same random vector, adding the same randomly generated bias and perturbing them with i.i.d. Gaussian noise.
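A sketch of the Geman-McClure objective for a fixed dataset (c = 1 by default, matching the experiments):

```python
import numpy as np

def geman_mcclure_objective(X, y, c=1.0):
    """Robust linear regression loss: sum_i r_i^2 / (c^2 + r_i^2),
    with residuals r_i = y_i - w^T x_i - b."""
    def objective(params):                 # params = [w (d dims), b]
        w, b = params[:-1], params[-1]
        r = y - X @ w - b
        return np.sum(r ** 2 / (c ** 2 + r ** 2))
    return objective
```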
Figure 2: (a) Mean margin of victory of each algorithm for optimizing the robust linear regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.

The autonomous algorithm is trained on a set of 120 objective functions. We evaluate it on 100 randomly generated objective functions using the same metric as above. As shown in Figure 2a, the autonomous algorithm outperforms all hand-engineered algorithms except at early iterations. While it dominates gradient descent, conjugate gradient and L-BFGS at all times, it does not make progress as quickly as the momentum method initially. However, after around 30 iterations, it is able to close the gap and surpass the momentum method. On this optimization problem, both conjugate gradient and L-BFGS diverge quickly. Interestingly, unlike in the previous experiment, L-BFGS no longer performs well, which could be caused by the non-convexity of the objective functions. Figures 2b and 2c show performance on objective functions from the test set. In Figure 2b, the autonomous optimizer not only converges the fastest, but also reaches a better optimum than all other algorithms. In Figure 2c, the autonomous algorithm converges the fastest and is able to avoid most of the oscillations that hamper gradient descent and momentum after reaching the optimum.

# 4.3 Neural Net Classifier
Finally, we train an autonomous algorithm to train a small neural net classifier. We consider a two-layer neural net with ReLU activation on the hidden units and softmax activation on the output units. We use the cross-entropy loss combined with ℓ2 regularization on the weights. To train the model, we need to optimize the following objective:

min_{W,U,b,c}  -(1/n) Σ_{i=1}^{n} log [ exp((U max(Wx_i + b, 0) + c)_{y_i}) / Σ_j exp((U max(Wx_i + b, 0) + c)_j) ] + (λ/2) (||W||_F^2 + ||U||_F^2),
Rhà d, b â Rh, U â Rpà h, c â Rp denote the ï¬ rst-layer and second-layer weights and biases, xi â Rd and yi â {1, . . . , p} denote the input and target class label of the ith instance, λ denotes the coefï¬ cient on regularizers and (v)j denotes the jth component of v. For our experiments, we use λ = 0.0005 and d = h = p = 2.
The error surface is known to have complex geometry and multiple local optima, making this a challenging optimization problem. The training set consists of 80 objective functions, each of which corresponds to the objective for training a neural net on a different dataset. Each dataset is generated by generating four multivariate Gaussians with random means and covariances and sampling 25 points from each. The points from the same Gaussian are assigned the same random label of either 0 or 1. We make sure not all of the points in the dataset are assigned the same label. We evaluate the autonomous algorithm in the same manner as above. As shown in Figure 3a, the autonomous algorithm significantly outperforms all other algorithms. In particular, as evidenced by the sizeable and sustained gap between the margin of victory of the autonomous optimizer and the momentum method, the autonomous optimizer is able to reach much better optima and is less prone to getting trapped in local optima compared to other methods. This gap is also larger compared to that exhibited in previous sections, suggesting that hand-engineered algorithms are more sub-optimal on
challenging optimization problems, and so the potential for improvement from learning the algorithm is greater in such settings. Due to non-convexity, conjugate gradient and L-BFGS often diverge. Performance on examples of objective functions from the test set is shown in Figures 3b and 3c. As shown, the autonomous optimizer is able to reach better optima than all other methods and largely avoids the oscillations that other methods suffer from.

Figure 3: (a) Mean margin of victory of each algorithm for training neural net classifiers. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.
# 5 Conclusion

We presented a method for learning a better optimization algorithm. We formulated this as a reinforcement learning problem, in which any optimization algorithm can be represented as a policy. Learning an optimization algorithm then reduces to finding the optimal policy. We used guided policy search for this purpose and trained autonomous optimizers for different classes of convex and non-convex objective functions. We demonstrated that the autonomous optimizer converges faster and/or reaches better optima than hand-engineered optimizers. We hope autonomous optimizers learned using the proposed approach can be used to solve various common classes of optimization problems more quickly and help accelerate the pace of innovation in science and engineering.
# References

[1] Jonathan Baxter, Rich Caruana, Tom Mitchell, Lorien Y Pratt, Daniel L Silver, and Sebastian Thrun. NIPS 1995 workshop on learning to learn: Knowledge consolidation and transfer in inductive systems. https://web.archive.org/web/20000618135816/http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transfer.html, 1995. Accessed: 2015-12-05.

[2] Yoshua Bengio.
Gradient-based optimization of hyperparameters. Neural Computation, 12(8):1889-1900, 2000.

[3] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281-305, 2012.

[4] James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546-2554, 2011.

[5] Pavel Brazdil, Christophe Giraud Carrier, Carlos Soares, and Ricardo Vilalta.
Metalearning: Applications to Data Mining. Springer Science & Business Media, 2008.

[6] Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.

[7] Allen Cypher and Daniel Conrad Halbert. Watch what I do: programming by demonstration.
MIT Press, 1993.

[8] Justin Domke. Generic methods for optimization-based modeling. In AISTATS, volume 22, pages 318-326, 2012.

[9] Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Initializing Bayesian hyperparameter optimization via meta-learning. In AAAI, pages 1128-1135, 2015.

[10] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel.
Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015.

[11] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

[12] Weiqiao Han, Sergey Levine, and Pieter Abbeel. Learning compound multi-step controllers under unknown dynamics. In International Conference on Intelligent Robots and Systems, 2015.

[13] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration.
In Learning and Intelligent Optimization, pages 507-523. Springer, 2011.

[14] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190-198, 2015.

[15] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.

[16] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.

[17] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071-1079, 2014.

[18] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.

[19] Sergey Levine, Nolan Wagener, and Pieter Abbeel. Learning contact-rich manipulation skills with guided policy search. arXiv preprint arXiv:1501.05611, 2015.

[20] Percy Liang, Michael I Jordan, and Dan Klein. Learning programs:
A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 639-646, 2010.

[21] Dougal Maclaurin, David Duvenaud, and Ryan P Adams. Gradient-based hyperparameter optimization through reversible learning. arXiv preprint arXiv:1502.03492, 2015.

[22] Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117-129):2, 1978.
[23] Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.

[24] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951-2959, 2012.

[25] Kevin Swersky, Jasper Snoek, and Ryan P Adams.
Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004-2012, 2013.

[26] Sebastian Thrun and Lorien Pratt. Learning to Learn. Springer Science & Business Media, 2012.

[27] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77-95, 2002.

[28] Greg Yang.
Lie access neural Turing machine. arXiv preprint arXiv:1602.08671, 2016.

[29] Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.

[30] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.
# Deep Reinforcement Learning for Dialogue Generation

Jiwei Li1, Will Monroe1, Alan Ritter2, Michel Galley3, Jianfeng Gao3 and Dan Jurafsky1
1Stanford University, Stanford, CA, USA
2Ohio State University, OH, USA
3Microsoft Research, Redmond, WA, USA
{jiweil,wmonroe4,jurafsky}@stanford.edu, [email protected] {mgalley,jfgao}@microsoft.com

# Abstract

Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be short-sighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity, coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.

# Introduction

Neural response generation (Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016a; Wen et al., 2015; Yao et al., 2015; Luan et al., 2016; Xu et al., 2016; Wen et al., 2016; Li et al., 2016b; Su et al., 2016) is of growing interest. The LSTM sequence-to-sequence (SEQ2SEQ) model (Sutskever et al., 2014) is one type of neural generation model that maximizes the probability of generating a response given the previous dialogue turn. This approach enables the incorporation of rich
context when mapping between consecutive dialogue turns (Sordoni et al., 2015) in a way not possible, for example, with MT-based dialogue models (Ritter et al., 2011). Despite the success of SEQ2SEQ models in dialogue generation, two problems emerge: First, SEQ2SEQ models are trained by predicting the next dialogue turn in a given conversational context using the maximum-likelihood estimation (MLE) objective function. However, it is not clear how well MLE approximates the real-world goal of chatbot development: teaching a machine to converse with humans, while providing interesting, diverse, and informative feedback that keeps users engaged. One concrete example is that SEQ2SEQ models tend to generate highly generic responses such as
"I don't know" regardless of the input (Sordoni et al., 2015; Serban et al., 2016; Li et al., 2016a). This can be ascribed to the high frequency of generic responses found in the training set and their compatibility with a diverse range of conversational contexts. Yet "I don't know" is apparently not a good action to take, since it closes the conversation down. Another common problem, illustrated in the two sample conversations on the left of Table 1, is that the system becomes stuck in an infinite loop of repetitive responses. This is due to MLE-based SEQ2SEQ models' inability to account for repetition. In example 2 (bottom left), the dialogue falls into an infinite loop after three turns, with both agents generating dull, generic utterances like i don't know what you are talking about and you don't know what you are saying. Looking at the entire conversation, utterance (4) turns out to be a bad action to take because it offers no way of continuing the conversation.[1]

[1] A similar rule is often suggested in improvisational comedy: https://en.wikipedia.org/wiki/Yes,_and...

Baseline mutual information model (Li et al. 2015)
A: Where are you going? (1)
B: I'm going to the restroom. (2)
A: See you later. (3)
B: See you later. (4)
A: See you later. (5)
B: See you later. (6)
...

A: how old are you? (1)
B: I'm 16. (2)
A: 16? (3)
B: I don't know what you are talking about. (4)
A: You don't know what you are saying. (5)
B: I don't know what you are talking about. (6)
A: You don't know what you are saying. (7)
...
Proposed reinforcement learning model

A: Where are you going? (1)
B: I'm going to the police station. (2)
A: I'll come with you. (3)
B: No, no, no, no, you're not going anywhere. (4)
A: Why? (5)
B: I need you to stay here. (6)
A: I don't know what you are talking about. (7)
...

A: How old are you? (1)
B: I'm 16. Why are you asking? (2)
A: I thought you were 12. (3)
B: What made you think so? (4)
A: I don't know what you are talking about. (5)
B: You don't know what you are saying. (6)
...
Table 1: Left Column: Dialogue simulation between two agents using a 4-layer LSTM encoder-decoder trained on the OpenSubtitles dataset. The first turn (index 1) is input by the authors. Then the two agents take turns conversing, taking as input the other agent's prior generated turn. The output is generated using the mutual information model (Li et al., 2015) in which an N-best list is first obtained using beam search based on p(t|s) and reranked by linearly combining the backward probability p(s|t), where t and s respectively denote targets and sources. Right Column: Dialogue simulated using the proposed reinforcement learning model. The new model has more forward-looking utterances (questions like "Why are you asking?" and offers like "I'll come with you"
) and lasts longer before it falls into conversational black holes.

These challenges suggest we need a conversation framework that has the ability to (1) integrate developer-defined rewards that better mimic the true goal of chatbot development and (2) model the long-term influence of a generated response in an ongoing dialogue. To achieve these goals, we draw on the insights of reinforcement learning, which have been widely applied in MDP and POMDP dialogue systems (see Related Work section for details). We introduce a neural reinforcement learning (RL) generation method, which can optimize long-term rewards designed by system developers. Our model uses the encoder-decoder architecture as its backbone, and simulates conversation between two virtual agents to explore the space of possible actions while learning to maximize expected reward.
We define simple heuristic approximations to rewards that characterize good conversations: good conversations are forward-looking (Allwood et al., 1992) or interactive (a turn suggests a following turn), informative, and coherent. The parameters of an encoder-decoder RNN define a policy over an infinite action space consisting of all possible utterances. The agent learns a policy by optimizing the long-term developer-defined reward from ongoing dialogue simulations using policy gradient methods (Williams, 1992), rather than the MLE objective defined in standard SEQ2SEQ models. Our model thus integrates the power of SEQ2SEQ systems to learn compositional semantic meanings of utterances with the strengths of reinforcement learning in optimizing for long-term goals across a conversation. Experimental results (sampled results at the right panel of Table 1) demonstrate that our approach fosters a more sustained dialogue and manages to produce more interactive responses than standard SEQ2SEQ models trained using the MLE objective.

# 2 Related Work

Efforts to build statistical dialog systems fall into two major categories.
The first treats dialogue generation as a source-to-target transduction problem and learns mapping rules between input messages and responses from a massive amount of training data. Ritter et al. (2011) frames the response generation problem as a statistical machine translation (SMT) problem. Sordoni et al. (2015) improved Ritter et al.'s system by rescoring the outputs of a phrasal SMT-based conversation system with a neural model that incorporates prior context. Recent progress in SEQ2SEQ models inspires several efforts (Vinyals and Le, 2015) to build end-to-end conversational systems which first apply an encoder to map a message to a distributed vector representing its semantics and generate a response from the message vector. Serban et al. (2016) propose a hierarchical neural model that captures dependencies over an extended conversation history. Li et al. (2016a) propose mutual information between message and response as an alternative objective function in order to reduce the proportion of generic responses produced by SEQ2SEQ systems. The other line of statistical research focuses on building task-oriented dialogue systems to solve domain-specific tasks. Efforts include statistical models such as Markov Decision Processes (MDPs) (Levin et al., 1997; Levin et al., 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP (Young et al., 2010; Young et al., 2013; Gašić et al., 2013a; Gašić et al., 2014) models, and models that statistically learn generation rules (Oh and Rudnicky, 2000; Ratnaparkhi, 2002; Banchs and Li, 2012; Nio et al., 2014). This dialogue literature thus widely applies reinforcement learning (Walker, 2000; Schatzmann et al., 2006; Gasic et al., 2013b; Singh et al., 1999; Singh et al., 2000; Singh et al., 2002) to train dialogue policies.
But task-oriented RL dialogue systems often rely on carefully limited dialogue parameters, or hand-built templates with state, action and reward signals designed by humans for each new domain, making the paradigm difficult to extend to open-domain scenarios. Also relevant is prior work on reinforcement learning for language understanding, including learning from delayed reward signals by playing text-based games (Narasimhan et al., 2015; He et al., 2016), executing instructions for Windows help (Branavan et al., 2011), or understanding dialogues that give navigation directions (Vogel and Jurafsky, 2010). Our goal is to integrate the SEQ2SEQ and reinforcement learning paradigms, drawing on the advantages of both. We are thus particularly inspired by recent work that attempts to merge these paradigms, including Wen et al. (2016), who train an end-to-end task-oriented dialogue system that links input representations to slot-value pairs in a database, and Su et al. (2016), who combine reinforcement learning with neural generation on tasks with real users, showing that reinforcement learning improves dialogue performance.

# 3 Reinforcement Learning for Open-Domain Dialogue

In this section, we describe in detail the components of the proposed RL model. The learning system consists of two agents. We use p to denote sentences generated from the first agent and q to denote sentences from the second. The two agents take turns talking with each other. A dialogue can be represented as an alternating sequence of sentences generated by the two agents: p1, q1, p2, q2, ..., pi, qi. We view the generated sentences as actions that are taken according to a policy defined by an encoder-decoder recurrent neural network language model. The parameters of the network are optimized to maximize the expected future reward using policy search, as described in Section 4.3. Policy gradient methods are more appropriate for our scenario than Q-learning (Mnih et al., 2013), because we can initialize the encoder-decoder RNN using MLE parameters that already produce plausible responses, before changing the objective and tuning towards a policy that maximizes long-term reward.
Q-learning, on the other hand, directly estimates the future expected reward of each action, which can differ from the MLE objective by orders of magnitude, thus making MLE parameters inappropriate for initialization. The components (states, actions, reward, etc.) of our sequential decision problem are summarized in the following sub-sections.

# 3.1 Action

An action a is the dialogue utterance to generate. The action space is infinite since arbitrary-length sequences can be generated.

# 3.2 State

A state is denoted by the previous two dialogue turns [pi, qi]. The dialogue history is further transformed to a vector representation by feeding the concatenation of pi and qi into an LSTM encoder model as described in Li et al. (2016a).
# 3.3 Policy

A policy takes the form of an LSTM encoder-decoder (i.e., pRL(pi+1|pi, qi)) and is defined by its parameters. Note that we use a stochastic representation of the policy (a probability distribution over actions given states). A deterministic policy would result in a discontinuous objective that is difficult to optimize using gradient-based methods.

# 3.4 Reward

r denotes the reward obtained for each action. In this subsection, we discuss major factors that contribute to the success of a dialogue and describe how approximations to these factors can be operationalized in computable reward functions.

Ease of answering A turn generated by a machine should be easy to respond to. This aspect of a turn is related to its forward-looking function: the constraints a turn places on the next turn (Schegloff and Sacks, 1973; Allwood et al., 1992). We propose to measure the ease of answering a generated turn by using the negative log likelihood of responding to that utterance with a dull response. We manually constructed a list of dull responses S consisting of 8 turns such as
"I don't know what you are talking about", "I have no idea", etc., that we and others have found occur very frequently in SEQ2SEQ models of conversations. The reward function is given as follows:

r1 = -(1/NS) Σ_{s∈S} (1/Ns) log pseq2seq(s|a)   (1)

where NS denotes the cardinality of S and Ns denotes the number of tokens in the dull response s. Although of course there are more ways to generate dull responses than the list can cover, many of these responses are likely to fall into similar regions in the vector space computed by the model. A system less likely to generate utterances in the list is thus also less likely to generate other dull responses. pseq2seq represents the likelihood output by SEQ2SEQ models. It is worth noting that pseq2seq is different from the stochastic policy function pRL(pi+1|pi, qi), since the former is learned based on the MLE objective of the SEQ2SEQ model while the latter is the policy optimized for long-term future reward in the RL setting. r1 is further scaled by the length of target S.

Information Flow We want each agent to contribute new information at each turn to keep the dialogue moving and avoid repetitive sequences. We therefore propose penalizing semantic similarity between consecutive turns from the same agent. Let hpi and hpi+1 denote representations obtained from the encoder for two consecutive turns pi and pi+1. The reward is given by the negative log of the cosine similarity between them:

r2 = -log cos(hpi, hpi+1) = -log ( (hpi · hpi+1) / (||hpi|| ||hpi+1||) )   (2)
Semantic Coherence We also need to measure the adequacy of responses to avoid situations in which the generated replies are highly rewarded but are ungrammatical or not coherent. We therefore consider the mutual information between the action a and previous turns in the history to ensure the generated responses are coherent and appropriate:

r3 = (1/Na) log pseq2seq(a|qi, pi) + (1/Nqi) log pbackward_seq2seq(qi|a)   (3)

pseq2seq(a|pi, qi) denotes the probability of generating response a given the previous dialogue utterances [pi, qi]. pbackward_seq2seq(qi|a) denotes the backward probability of generating the previous dialogue utterance qi based on response a. pbackward_seq2seq is trained in a similar way as standard SEQ2SEQ models with sources and targets swapped.
Again, to control the influence of target length, both log pseq2seq(a|qi, pi) and log pbackward_seq2seq(qi|a) are scaled by the length of targets. The final reward for action a is a weighted sum of the rewards discussed above:

r(a, [pi, qi]) = λ1 r1 + λ2 r2 + λ3 r3   (4)

where λ1 + λ2 + λ3 = 1. We set λ1 = 0.25, λ2 = 0.25 and λ3 = 0.5.
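A sketch of how the three reward terms and their weighted combination could be computed. The helpers `log_p_forward(target, source)` and `log_p_backward(source_turn, response)` stand for log-likelihoods under the forward and backward SEQ2SEQ models; they are assumed interfaces, not the authors' code.

```python
import numpy as np

def r1_ease_of_answering(action, dull_set, log_p_forward):
    """Equation (1): negative average length-normalized log-likelihood of the
    dull responses given the generated turn."""
    scores = [log_p_forward(s, action) / max(len(s.split()), 1) for s in dull_set]
    return -float(np.mean(scores))

def r2_information_flow(h_prev, h_curr, eps=1e-8):
    """Equation (2): negative log cosine similarity of consecutive encodings."""
    cos = np.dot(h_prev, h_curr) / (np.linalg.norm(h_prev) * np.linalg.norm(h_curr))
    return -float(np.log(max(cos, eps)))   # guard against non-positive cosine

def r3_semantic_coherence(action, p_i, q_i, log_p_forward, log_p_backward):
    """Equation (3): length-normalized forward and backward log-likelihoods."""
    n_a = max(len(action.split()), 1)
    n_q = max(len(q_i.split()), 1)
    return (log_p_forward(action, (p_i, q_i)) / n_a
            + log_p_backward(q_i, action) / n_q)

def total_reward(r1, r2, r3, lambdas=(0.25, 0.25, 0.5)):
    """Equation (4): weighted sum with lambda1 + lambda2 + lambda3 = 1."""
    return lambdas[0] * r1 + lambdas[1] * r2 + lambdas[2] * r3
```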
A reward is observed after the agent reaches the end of each sentence.

# 4 Simulation

The central idea behind our approach is to simulate the process of two virtual agents taking turns talking with each other, through which we can explore the state-action space and learn a policy pRL(pi+1|pi, qi) that leads to the optimal expected reward. We adopt an AlphaGo-style strategy (Silver et al., 2016) by initializing the RL system using a general response generation policy which is learned from a fully supervised setting.

# 4.1 Supervised Learning
For the first stage of training, we build on prior work of predicting a generated target sequence given dialogue history using the supervised SEQ2SEQ model (Vinyals and Le, 2015). Results from supervised models will be later used for initialization. We trained a SEQ2SEQ model with attention (Bahdanau et al., 2015) on the OpenSubtitles dataset, which consists of roughly 80 million source-target pairs. We treated each turn in the dataset as a target and the concatenation of two previous sentences as source inputs.
# 4.2 Mutual Information

Samples from SEQ2SEQ models are often times dull and generic, e.g., "i don't know" (Li et al., 2016a). We thus do not want to initialize the policy model using the pre-trained SEQ2SEQ models because this will lead to a lack of diversity in the RL models' experiences. Li et al. (2016a) showed that modeling mutual information between sources and targets will significantly decrease the chance of generating dull responses and improve general response quality. We now show how we can obtain an encoder-decoder model which generates maximum mutual information responses. As illustrated in Li et al. (2016a), direct decoding from Eq 3 is infeasible since the second term requires the target sentence to be completely generated. Inspired by recent work on sequence level learning (Ranzato et al., 2015), we treat the problem of generating maximum mutual information responses as a reinforcement learning problem in which a reward of mutual information value is observed when the model arrives at the end of a sequence. Similar to Ranzato et al. (2015), we use policy gradient methods (Sutton et al., 1999; Williams, 1992) for optimization. We initialize the policy model pRL using a pre-trained pSEQ2SEQ(a|pi, qi) model. Given an input source [pi, qi], we generate a candidate list A = {â | â ∼ pRL}. For each generated candidate â, we will obtain the mutual information score
a, [pi, qi])â log pRL(Ë a|[pi, qi]) We update the parameters in the encoder-decoder model using stochastic gradient descent. A curricu- lum learning strategy is adopted (Bengio et al., 2009) as in Ranzato et al. (2015) such that, for every se- quence of length T we use the MLE loss for the ï¬ rst L tokens and the reinforcement algorithm for the remaining T â L tokens. We gradually anneal the value of L to zero. A baseline strategy is employed to decrease the learning variance: an additional neural model takes as inputs the generated target and the initial source and outputs a baseline value, similar to the strategy adopted by Zaremba and Sutskever (2015).
The ï¬ nal gradient is thus: â J(θ) = â log pRL(Ë a|[pi, qi])[m(Ë a, [pi, qi]) â b] (7) # 4.3 Dialogue Simulation between Two Agents We simulate conversations between the two virtual agents and have them take turns talking with each other. The simulation proceeds as follows: at the initial step, a message from the training set is fed to the ï¬ rst agent. The agent encodes the input message to a vector representation and starts decoding to gen- erate a response output. Combining the immediate output from the ï¬ rst agent with the dialogue history, the second agent updates the state by encoding the dialogue history into a representation and uses the decoder RNN to generate responses, which are sub- sequently fed back to the ï¬ rst agent, and the process is repeated. v XY 4 \ Input Message ON nd â e 4 Turn 2 & Sim 4 Tom n to Ue Dis 3 8 Dis = = = encode decode encode decode 1 encode | decode 1 m > > Di > > â 11 â â Png 1 â ~__, "Ce ) â ~__, pho How old are . fa you? : : P12 2 2 22> â Ss i â > Pra 2 2 â â fiz â â Pn2 p a 3 1,3: > > 11 â Paar 3 â _, 3 'm 16, why are â Le Cte) you Pn youasking? J, were Cte) : Figure 1: Dialogue simulation between the two agents. Optimization We initialize the policy model pRL with parameters from the mutual information model described in the previous subsection. We then use policy gradient methods to ï¬ nd parameters that lead to a larger expected reward. The objective to maxi- mize is the expected future reward: Tri(0) = i=T PRL(41:7) > R(ai, [pis al)] (8) i=l where R(ai, [pi, qi]) denotes the reward resulting from action ai. We use the likelihood ratio trick (Williams, 1992; Glynn, 1990; Aleksandrov et al., 1968) for gradient updates:
# 4.4 Curriculum Learning

A curriculum learning strategy is again employed in which we begin by simulating the dialogue for 2 turns, and gradually increase the number of simulated turns. We generate 5 turns at most, as the number of candidates to examine grows exponentially in the size of the candidate list. Five candidate responses are generated at each step of the simulation.

# 5 Experimental Results

In this section, we describe experimental results along with qualitative analysis. We evaluate dialogue generation systems using both human judgments and two automatic metrics: conversation length (number of turns in the entire session) and diversity.

# 5.1 Dataset

The dialogue simulation requires high-quality initial inputs fed to the agent. For example, an initial input of "why ?" is undesirable since it is unclear how the dialogue could proceed. We take a subset of 10 million messages from the OpenSubtitles dataset and extract 0.8 million sequences with the lowest likelihood of generating the response "i don't know what you are taking about"
to ensure initial inputs are easy to respond to.

# 5.2 Automatic Evaluation

Evaluating dialogue systems is difficult.
Metrics such as BLEU (Papineni et al., 2002) and perplexity have been widely used for dialogue quality evaluation (Li et al., 2016a; Vinyals and Le, 2015; Sordoni et al., 2015), but it is widely debated how well these automatic metrics are correlated with true response quality (Liu et al., 2016; Galley et al., 2015). Since the goal of the proposed system is not to predict the highest probability response, but rather the long-term success of the dialogue, we do not employ BLEU or perplexity for evaluation.[2]

[2] We found the RL model performs worse on BLEU score. On a random sample of 2,500 conversational pairs, single-reference BLEU scores for RL models, mutual information models and vanilla SEQ2SEQ models are respectively 1.28, 1.44 and 1.17. BLEU is highly correlated with perplexity in generation tasks. Since the RL model is trained based on future reward rather than MLE, it is not surprising that the RL-based models achieve lower BLEU score.
1606.01541#21
1606.01541#23
1606.01541
[ "1506.08941" ]
1606.01541#23
Deep Reinforcement Learning for Dialogue Generation
"i don't know"[3] or two consecutive utterances from the same user are highly overlapping.[4] The test set consists of 1,000 input messages. To reduce the risk of circular dialogues, we limit the number of simulated turns to be less than 8. Results are shown in Table 2. As can be seen, using mutual information leads to more sustained conversations between the two agents. The proposed RL model is first trained with the mutual information objective and thus benefits from it in addition to the RL training. We observe that the RL model with dialogue simulation achieves the best evaluation score.

Diversity We report the degree of diversity by calculating the number of distinct unigrams and bigrams in generated responses. The value is scaled by the total number of generated tokens to avoid favoring long sentences, as described in Li et al. (2016a). The resulting metric is thus a type-token ratio for unigrams and bigrams. For both the standard SEQ2SEQ model and the proposed RL model, we use beam search with a beam size of 10 to generate a response to a given input message. For the mutual information model, we first generate n-best lists using pSEQ2SEQ(t|s) and then linearly re-rank them using pSEQ2SEQ(s|t). Results are presented in Table 4.
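Both automatic measures are easy to state in code. The sketch below is one plausible reading of the rules in the text and footnotes (a fixed dull-response list, an 80% word-overlap test, and a distinct-n type-token ratio); the exact phrase list, overlap definition and tokenization used by the authors are not specified, so these are assumptions.

```python
# Sketch of the two automatic metrics; phrase list and tokenization are assumed.
DULL_RESPONSES = {"i don't know", "i don't know what you are talking about"}  # illustrative subset

def is_dull(utterance: str) -> bool:
    return utterance.strip().lower() in DULL_RESPONSES

def highly_overlapping(u1: str, u2: str, threshold: float = 0.8) -> bool:
    """Two utterances count as repetitive if they share more than 80% of their words."""
    w1, w2 = set(u1.lower().split()), set(u2.lower().split())
    if not w1 or not w2:
        return False
    return len(w1 & w2) / min(len(w1), len(w2)) > threshold

def dialogue_length(turns: list) -> int:
    """Number of turns before a dull or repetitive utterance ends the dialogue."""
    for i, turn in enumerate(turns):
        if is_dull(turn):
            return i
        # consecutive utterances from the *same* speaker are two positions apart
        if i >= 2 and highly_overlapping(turn, turns[i - 2]):
            return i
    return len(turns)

def distinct_n(responses: list, n: int) -> float:
    """Type-token ratio: distinct n-grams divided by total generated tokens."""
    ngrams, total_tokens = set(), 0
    for r in responses:
        tokens = r.split()
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total_tokens, 1)
```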
1606.01541#22
1606.01541#24
1606.01541
[ "1506.08941" ]
1606.01541#24
Deep Reinforcement Learning for Dialogue Generation
We find that the proposed RL model generates more diverse outputs when compared against both the vanilla SEQ2SEQ model and the mutual information model.

Model              | Unigram | Bigram
SEQ2SEQ            | 0.0062  | 0.015
mutual information | 0.011   | 0.031
RL                 | 0.017   | 0.041

Table 4: Diversity scores (type-token ratios) for the standard SEQ2SEQ model, the mutual information model and the proposed RL model.

Human Evaluation We explore three settings for human evaluation. The first setting is similar to what was described in Li et al. (2016a), where we employ crowdsourced judges to evaluate a random sample of 500 items. We present both an input message and the generated outputs to 3 judges and ask them to decide which of the two outputs is better (denoted as single-turn general quality).

[2] (continued) Since the RL model is trained based on future reward rather than MLE, it is not surprising that the RL-based models achieve lower BLEU scores.
[3] We use a simple rule-matching method, with a list of 8 phrases that count as dull responses. Although this can lead to both false positives and false negatives, it works well in practice.
[4] Two utterances are considered repetitive if they share more than 80 percent of their words.
1606.01541#23
1606.01541#25
1606.01541
[ "1506.08941" ]
1606.01541#25
Deep Reinforcement Learning for Dialogue Generation
Ties are permitted. Identical strings are assigned the same score. We measure the improvement achieved by the RL model over the mutual information model by the mean difference in scores between the models.

For the second setting, judges are again presented with input messages and system outputs, but are asked to decide which of the two outputs is easier to respond to (denoted as single-turn ease to answer). Again we evaluate a random sample of 500 items, each being assigned to 3 judges.

For the third setting, judges are presented with simulated conversations between the two agents (denoted as multi-turn general quality). Each conversation consists of 5 turns. We evaluate 200 simulated conversations, each being assigned to 3 judges, who are asked to decide which of the simulated conversations is of higher quality.

Setting                     | RL-win | RL-lose | Tie
single-turn general quality | 0.40   | 0.36    | 0.24
single-turn ease to answer  | 0.52   | 0.23    | 0.25
multi-turn general quality  | 0.72   | 0.12    | 0.16

Table 5: RL gains over the mutual information system based on pairwise human judgments.

Results for human evaluation are shown in Table 5. The proposed RL system does not introduce a significant boost in single-turn response quality (winning 40 percent of the time and losing 36 percent of the time).
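Turning the raw pairwise judgments into the proportions in Table 5 takes only a few lines. The sketch below assumes each item carries three judge labels in {'rl', 'baseline', 'tie'} and reports the fraction of individual judgments per category; whether the authors aggregated per judgment or by per-item majority vote is not stated, so this is one plausible reading.

```python
# Hypothetical aggregation of three-judge pairwise preferences (illustrative only).
from collections import Counter

def pairwise_rates(judgments):
    """judgments: list of per-item label triples, each label in {'rl', 'baseline', 'tie'}."""
    counts = Counter(label for item in judgments for label in item)
    total = sum(counts.values())
    return {"RL-win": counts["rl"] / total,
            "RL-lose": counts["baseline"] / total,
            "Tie": counts["tie"] / total}
```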
1606.01541#24
1606.01541#26
1606.01541
[ "1506.08941" ]
1606.01541#26
Deep Reinforcement Learning for Dialogue Generation
Input Message (only the input column of Table 3 is reproduced here):
How old are you?
What's your full name?
I don't want to go home tonight.
Do you have any feelings for me?
How much time do you have here?
Not long enough. Sorry, sir.
Shall we get started?
Do you play football?
We'd have to talk to him.
How come you never say it?

Table 3: Sampled responses generated from the mutual information models and the proposed RL model.

A: What's your name?
B: Daniel.
A: How old are you?
B: Twelve. What's your name?
A: Daniel.
B: How old are you?
A: Twelve. What's your name?
B: Daniel.
A: How old are you?
B: ...

Table 6: A simulated dialogue with a cycle longer than one.

Qualitative Analysis and Discussion We show a random sample of generated responses in Table 3 and simulated conversations in Table 1 at the beginning of the paper. From Table 3, we can see that the RL-based agent indeed generates more interactive responses than the other baselines.
1606.01541#25
1606.01541#27
1606.01541
[ "1506.08941" ]
1606.01541#27
Deep Reinforcement Learning for Dialogue Generation
We also find that the RL model has a tendency to end a sentence with another question and hand the conversation over to the user. From Table 1, we observe that the RL model manages to produce more interactive and sustained conversations than the mutual information model.

During error analysis, we found that although we penalize repetitive utterances in consecutive turns, the dialogue sometimes enters a cycle with length greater than one, as shown in Table 6. This can be ascribed to the limited amount of conversational history we consider. Another issue observed is that the model sometimes starts a less relevant topic during the conversation. There is a tradeoff between relevance and less repetitiveness, as manifested in the reward function we define in Eq. 4.

The fundamental problem, of course, is that the manually defined reward function can't possibly cover the crucial aspects that define an ideal conversation. While the heuristic rewards that we defined are amenable to automatic calculation, and do capture some aspects of what makes a good conversation, ideally the system would instead receive real rewards from humans. Another problem with the current model is that we can only afford to explore a very small number of candidates and simulated turns, since the number of cases to consider grows exponentially.

# 6 Conclusion

We introduce a reinforcement learning framework for neural response generation by simulating dialogues between two agents, integrating the strengths of neural SEQ2SEQ systems and reinforcement learning for dialogue. Like earlier neural SEQ2SEQ models, our framework captures the compositional models of the meaning of a dialogue turn and generates semantically appropriate responses. Like reinforcement learning dialogue systems, our framework is able to generate utterances that optimize future reward, successfully capturing global properties of a good conversation. Despite the fact that our model uses very simple, operational heuristics for capturing these global properties, the framework generates more diverse, interactive responses that foster a more sustained conversation.

# Acknowledgement

We would like to thank Chris Brockett, Bill Dolan and other members of the NLP group at Microsoft Research for insightful comments and suggestions. We also want to thank Kelvin Guu, Percy Liang, Chris Manning, Sida Wang, Ziang Xie and other members of the Stanford NLP group for useful discussions.
1606.01541#26
1606.01541#28
1606.01541
[ "1506.08941" ]
1606.01541#28
Deep Reinforcement Learning for Dialogue Generation
Jiwei Li is supported by the Facebook Fellowship, which we gratefully acknowledge. This work is partially supported by the NSF via Awards IIS-1514268 and IIS-1464128, and by the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, DARPA, or Facebook.

# References

V. M. Aleksandrov, V. I. Sysoyev, and V. V. Shemeneva. 1968.
1606.01541#27
1606.01541#29
1606.01541
[ "1506.08941" ]
1606.01541#29
Deep Reinforcement Learning for Dialogue Generation
Stochastic optimization. Engineering Cybernetics, 5:11–16.

Jens Allwood, Joakim Nivre, and Elisabeth Ahlsén. 1992. On the semantics and pragmatics of linguistic feedback. Journal of Semantics, 9:1–26.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.

Rafael E. Banchs and Haizhou Li. 2012. IRIS: a chat-oriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demonstrations, pages 37–42.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48. ACM.

S.R.K. Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a Monte-Carlo framework. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 268–277.

Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proc. of ACL-IJCNLP, pages 445–450, Beijing, China, July.
1606.01541#28
1606.01541#30
1606.01541
[ "1506.08941" ]
1606.01541#30
Deep Reinforcement Learning for Dialogue Generation
Milica Gašić, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2013a. POMDP-based dialogue manager adaptation to extended domains. In Proceedings of SIGDIAL.

Milica Gašić, Catherine Breslin, Mike Henderson, Dongkyu Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2013b. On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In Proceedings of ICASSP 2013, pages 8367–8371. IEEE.
1606.01541#29
1606.01541#31
1606.01541
[ "1506.08941" ]
1606.01541#31
Deep Reinforcement Learning for Dialogue Generation
Milica Gašić, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. 2014. Incremental on-line adaptation of POMDP-based dialogue managers to extended domains. In Proceedings of InterSpeech.

Peter W. Glynn. 1990. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84.

Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with a natural language action space. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1621–
1606.01541#30
1606.01541#32
1606.01541
[ "1506.08941" ]
1606.01541#32
Deep Reinforcement Learning for Dialogue Generation
1630, Berlin, Germany, August.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the Markov decision process framework. In Automatic Speech Recognition and Understanding, 1997 IEEE Workshop on, pages 72–79. IEEE.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proc. of NAACL-HLT.

Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994–1003, Berlin, Germany, August.

Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016.
1606.01541#31
1606.01541#33
1606.01541
[ "1506.08941" ]
1606.01541#33
Deep Reinforcement Learning for Dialogue Generation
How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.

2016. LSTM based conversation models. arXiv preprint arXiv:1603.09457.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with deep reinforcement learning. NIPS Deep Learning Workshop.

Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941.

Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, Mirna Adriani, and Satoshi Nakamura. 2014. Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355–361. Springer.
1606.01541#32
1606.01541#34
1606.01541
[ "1506.08941" ]
1606.01541#34
Deep Reinforcement Learning for Dialogue Generation
Alice H. Oh and Alexander I. Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational Systems, Volume 3, pages 27–32.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.

Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. 2009.
1606.01541#33
1606.01541#35
1606.01541
[ "1506.08941" ]
1606.01541#35
Deep Reinforcement Learning for Dialogue Generation
Are we there yet? Research in commercial spoken dialog systems. In Text, Speech and Dialogue, pages 3–13. Springer.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.

Adwait Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435–455.

Alan Ritter, Colin Cherry, and William B. Dolan. 2011.
1606.01541#34
1606.01541#36
1606.01541
[ "1506.08941" ]
1606.01541#36
Deep Reinforcement Learning for Dialogue Generation
Data-driven response generation in social media. In Proceedings of EMNLP 2011, pages 583–593.

Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(02):97–126.

Emanuel A. Schegloff and Harvey Sacks. 1973. Opening up closings. Semiotica, 8(4):289–327.

Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of AAAI, February.

Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL-IJCNLP, pages 1577–
1606.01541#35
1606.01541#37
1606.01541
[ "1506.08941" ]
1606.01541#37
Deep Reinforcement Learning for Dialogue Generation
1586.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489.

Satinder P. Singh, Michael J. Kearns, Diane J. Litman, and Marilyn A. Walker. 1999.
1606.01541#36
1606.01541#38
1606.01541
[ "1506.08941" ]
1606.01541#38
Deep Reinforcement Learning for Dialogue Generation
Reinforcement learning for spoken dialogue systems. In NIPS, pages 956–962.

Satinder Singh, Michael Kearns, Diane J. Litman, Marilyn A. Walker, et al. 2000. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pages 645–651.

Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system.
1606.01541#37
1606.01541#39
1606.01541
[ "1506.08941" ]
1606.01541#39
Deep Reinforcement Learning for Dialogue Generation
Journal of Artificial Intelligence Research, pages 105–133.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of NAACL-HLT.

Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Continuously learning neural dialogue management. arXiv.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.

Richard S. Sutton, David A. McAllester, Satinder P. Singh, Yishay Mansour, et al. 1999.
1606.01541#38
1606.01541#40
1606.01541
[ "1506.08941" ]
1606.01541#40
Deep Reinforcement Learning for Dialogue Generation
Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057–1063.

Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of ICML Deep Learning Workshop.

Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of ACL 2010, pages 806–814.

Marilyn A. Walker, Rashmi Prasad, and Amanda Stent. 2003.
1606.01541#39
1606.01541#41
1606.01541
[ "1506.08941" ]
1606.01541#41
Deep Reinforcement Learning for Dialogue Generation
A trainable generator for recommendations in multimodal dialog. In Proceedings of INTERSPEECH 2003.

Marilyn A. Walker. 2000. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, pages 387–416.

Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of EMNLP, pages 1711–
1606.01541#40
1606.01541#42
1606.01541
[ "1506.08941" ]
1606.01541#42
Deep Reinforcement Learning for Dialogue Generation
1721, Lisbon, Portugal.

Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562.

Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256.

Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loose-structured knowledge into LSTM with recall gate for conversation modeling. arXiv preprint arXiv:1605.05110.

Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversation model. In NIPS Workshop on Machine Learning for Spoken Language Understanding and Interaction.
1606.01541#41
1606.01541#43
1606.01541
[ "1506.08941" ]
1606.01541#43
Deep Reinforcement Learning for Dialogue Generation
Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150–174.

Steve Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems:
1606.01541#42
1606.01541#44
1606.01541
[ "1506.08941" ]
1606.01541#44
Deep Reinforcement Learning for Dialogue Generation
A review. Proceedings of the IEEE, 101(5):1160–1179.

Wojciech Zaremba and Ilya Sutskever. 2015. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521.
1606.01541#43
1606.01541
[ "1506.08941" ]
1606.01540#0
OpenAI Gym
# OpenAI Gym

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba
OpenAI

# Abstract

OpenAI Gym[1] is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.

# 1 Introduction
1606.01540#1
1606.01540
[ "1602.01783" ]
1606.01540#1
OpenAI Gym
Reinforcement learning (RL) is the branch of machine learning that is concerned with making sequences of decisions. RL has a rich mathematical theory and has found a variety of practical applications [1]. Recent advances that combine deep learning with reinforcement learning have led to a great deal of excitement in the field, as it has become evident that general algorithms such as policy gradients and Q-learning can achieve good performance on difficult problems, without problem-specific engineering [2, 3, 4].
1606.01540#0
1606.01540#2
1606.01540
[ "1602.01783" ]
1606.01540#2
OpenAI Gym
To build on recent progress in reinforcement learning, the research community needs good benchmarks on which to compare algorithms. A variety of benchmarks have been released, such as the Arcade Learning Environment (ALE) [5], which exposed a collection of Atari 2600 games as reinforcement learning problems, and recently the RLLab benchmark for continuous control [6], to which we refer the reader for a survey on other RL benchmarks, including [7, 8, 9, 10, 11]. OpenAI Gym aims to combine the best elements of these previous benchmark collections, in a software package that is maximally convenient and accessible. It includes a diverse collection of tasks (called environments) with a common interface, and this collection will grow over time. The environments are versioned in a way that will ensure that results remain meaningful and reproducible as the software is updated. Alongside the software library, OpenAI Gym has a website (gym.openai.com) where one can find scoreboards for all of the environments, showcasing results submitted by users. Users are encouraged to provide links to source code and detailed instructions on how to reproduce their results.

# 2 Background
1606.01540#1
1606.01540#3
1606.01540
[ "1602.01783" ]
1606.01540#3
OpenAI Gym
Reinforcement learning assumes that there is an agent that is situated in an environment. Each step, the agent takes an action, and it receives an observation and reward from the environment. An RL algorithm seeks to maximize some measure of the agent's total reward, as the agent interacts with the environment. In the RL literature, the environment is formalized as a partially observable Markov decision process (POMDP) [12]. OpenAI Gym focuses on the episodic setting of reinforcement learning, where the agent's experience is broken down into a series of episodes. In each episode, the agent's initial state is randomly sampled from a distribution, and the interaction proceeds until the environment reaches a terminal state. The goal in episodic reinforcement learning is to maximize the expectation of total reward per episode, and to achieve a high level of performance in as few episodes as possible. The following code snippet shows a single episode with 100 timesteps. It assumes that there is an object called agent, which takes in the observation at each timestep, and an object called env, which is the environment.

[1] gym.openai.com
1606.01540#2
1606.01540#4
1606.01540
[ "1602.01783" ]
1606.01540#4
OpenAI Gym
OpenAI Gym does not include an agent class or specify what interface the agent should use; we just include an agent here for demonstration purposes.

```python
ob0 = env.reset()                        # sample environment state, return first observation
a0 = agent.act(ob0)                      # agent chooses first action
ob1, rew0, done0, info0 = env.step(a0)   # environment returns observation, reward, and a
                                         # boolean flag indicating if the episode is complete
a1 = agent.act(ob1)
ob2, rew1, done1, info1 = env.step(a1)
...
a99 = agent.act(ob99)
ob100, rew99, done99, info99 = env.step(a99)   # done99 == True => terminal
```
1606.01540#3
1606.01540#5
1606.01540
[ "1602.01783" ]
1606.01540#5
OpenAI Gym
# 3 Design Decisions

The design of OpenAI Gym is based on the authors' experience developing and comparing reinforcement learning algorithms, and our experience using previous benchmark collections. Below, we will summarize some of our design decisions.

Environments, not agents. Two core concepts are the agent and the environment. We have chosen to only provide an abstraction for the environment, not for the agent. This choice was to maximize convenience for users and allow them to implement different styles of agent interface. First, one could imagine an "online learning" style, where the agent takes (observation, reward, done) as an input at each timestep and performs learning updates incrementally. In an alternative "batch update" style, an agent is called with observation as input, and the reward information is collected separately by the RL algorithm and later used to compute an update. By only specifying the environment interface, we allow users to write their agents with either of these styles.

Emphasize sample complexity, not just final performance.
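The two agent styles mentioned above can be sketched as follows. These class shapes are illustrative assumptions (Gym itself deliberately does not define an agent interface) rather than part of the library; the only Gym API used is `action_space.sample()`.

```python
# Two possible agent styles against Gym's environment interface (illustrative only).
class OnlineAgent:
    """'Online learning' style: consumes (observation, reward, done) every step."""
    def __init__(self, action_space):
        self.action_space = action_space
    def act(self, observation, reward=0.0, done=False):
        # An incremental learning update would go here; this sketch just acts randomly.
        return self.action_space.sample()

class BatchAgent:
    """'Batch update' style: maps observations to actions; rewards handled outside."""
    def __init__(self, action_space):
        self.action_space = action_space
    def act(self, observation):
        return self.action_space.sample()
    def update(self, trajectory):
        # The RL algorithm passes collected (observation, action, reward) tuples here.
        pass
```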
1606.01540#4
1606.01540#6
1606.01540
[ "1602.01783" ]
1606.01540#6
OpenAI Gym
The performance of an RL algorithm on an environment can be measured along two axes: first, the final performance; second, the amount of time it takes to learn, i.e. the sample complexity. To be more specific, final performance refers to the average reward per episode after learning is complete. Learning time can be measured in multiple ways; one simple scheme is to count the number of episodes before a threshold level of average performance is exceeded. This threshold is chosen per-environment in an ad-hoc way, for example, as 90% of the maximum performance achievable by a very heavily trained agent.
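As a concrete reading of this counting scheme, the helper below reports how many episodes elapse before a running average of per-episode reward first exceeds a threshold; the window size and the trailing-average choice are assumptions, since the text leaves them unspecified.

```python
# Sketch of an episodes-to-threshold sample-complexity measure (window size assumed).
def episodes_to_threshold(episode_rewards, threshold, window=100):
    """Return the 1-based index of the first episode whose trailing-window average
    reward reaches `threshold`, or None if the threshold is never reached."""
    running = []
    for i, reward in enumerate(episode_rewards):
        running.append(reward)
        if len(running) > window:
            running.pop(0)
        if sum(running) / len(running) >= threshold:
            return i + 1   # number of episodes needed
    return None
```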
1606.01540#5
1606.01540#7
1606.01540
[ "1602.01783" ]
1606.01540#7
OpenAI Gym
Both final performance and sample complexity are very interesting; however, arbitrary amounts of computation can be used to boost final performance, making it a comparison of computational resources rather than algorithm quality.

Encourage peer review, not competition. The OpenAI Gym website allows users to compare the performance of their algorithms. One of its inspirations is Kaggle, which hosts a set of machine learning contests with leaderboards. However, the aim of the OpenAI Gym scoreboards is not to create a competition, but rather to stimulate the sharing of code and ideas, and to be a meaningful benchmark for assessing different methods. RL presents new challenges for benchmarking. In the supervised learning setting, performance is measured by prediction accuracy on a test set, where the correct outputs are hidden from contestants. In RL, it's less straightforward to measure generalization performance, except by running the users' code on a collection of unseen environments, which would be computationally expensive. Without a hidden test set, one must check that an algorithm did not "overfit"
1606.01540#6
1606.01540#8
1606.01540
[ "1602.01783" ]
1606.01540#8
OpenAI Gym
on the problems it was tested on (for example, through parameter tuning). We would like to encourage a peer review process for interpreting results submitted by users. Thus, OpenAI Gym asks users to create a Writeup describing their algorithm and the parameters used, and linking to code. Writeups should allow other users to reproduce the results. With the source code available, it is possible to make a nuanced judgement about whether the algorithm "overfit" to the task at hand.

Strict versioning for environments. If an environment changes, results before and after the change would be incomparable. To avoid this problem, we guarantee that any changes to an environment will be accompanied by an increase in version number. For example, the initial version of the CartPole task is named CartPole-v0, and if its functionality changes, the name will be updated to CartPole-v1.
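The version suffix is part of the environment ID passed to gym.make, so a pinned ID fully determines the task. A minimal usage sketch (assuming the gym package is installed):

```python
import gym

env = gym.make("CartPole-v0")   # the "-v0" suffix pins the environment version
ob = env.reset()
ob, reward, done, info = env.step(env.action_space.sample())
env.close()
```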
1606.01540#7
1606.01540#9
1606.01540
[ "1602.01783" ]
1606.01540#9
OpenAI Gym
Figure 1: Images of some environments that are currently part of OpenAI Gym.

Monitoring by default. By default, environments are instrumented with a Monitor, which keeps track of every time that step (one step of simulation) and reset (sampling a new initial state) are called. The Monitor's behavior is configurable, and it can record a video periodically. It is also sufficient to produce learning curves. The videos and learning curve data can be easily posted to the OpenAI Gym website.
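A usage sketch is shown below. The exact monitoring interface has changed across Gym releases, so the wrapper call here is an assumption about one common version rather than the definitive API.

```python
import gym
from gym import wrappers

env = gym.make("CartPole-v0")
env = wrappers.Monitor(env, "/tmp/cartpole-experiment-1")  # records stats and periodic videos

ob = env.reset()
done = False
while not done:
    ob, reward, done, info = env.step(env.action_space.sample())
env.close()  # finalizes the recorded results for upload
```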
1606.01540#8
1606.01540#10
1606.01540
[ "1602.01783" ]
1606.01540#10
OpenAI Gym
# 4 Environments

OpenAI Gym contains a collection of Environments (POMDPs), which will grow over time. See Figure 1 for examples. At the time of Gym's initial beta release, the following environments were included:

• Classic control and toy text: small-scale tasks from the RL literature.

• Algorithmic: perform computations such as adding multi-digit numbers and reversing sequences. Most of these tasks require memory, and their difficulty can be chosen by varying the sequence length.

• Atari: classic Atari games, with screen images or RAM as input, using the Arcade Learning Environment [5].
1606.01540#9
1606.01540#11
1606.01540
[ "1602.01783" ]
1606.01540#11
OpenAI Gym
• Board games: currently, we have included the game of Go on 9x9 and 19x19 boards, where the Pachi engine [13] serves as an opponent.

• 2D and 3D robots: control a robot in simulation. These tasks use the MuJoCo physics engine, which was designed for fast and accurate robot simulation [14]. A few of the tasks are adapted from RLLab [6].

Since the initial release, more environments have been created, including ones based on the open source physics engine Box2D or the Doom game engine via VizDoom [15].

# 5 Future Directions

In the future, we hope to extend OpenAI Gym in several ways.
1606.01540#10
1606.01540#12
1606.01540
[ "1602.01783" ]
1606.01540#12
OpenAI Gym
• Multi-agent setting. It will be interesting to eventually include tasks in which agents must collaborate or compete with other agents.

• Curriculum and transfer learning. Right now, the tasks are meant to be solved from scratch. Later, it will be more interesting to consider sequences of tasks, so that the algorithm is trained on one task after the other. Here, we will create sequences of increasingly difficult tasks, which are meant to be solved in order.

• Real-world operation. Eventually, we would like to integrate the Gym API with robotic hardware, validating reinforcement learning algorithms in the real world.
1606.01540#11
1606.01540#13
1606.01540
[ "1602.01783" ]
1606.01540#13
OpenAI Gym
# References

[1] Dimitri P. Bertsekas. Dynamic programming and optimal control. Athena Scientific, Belmont, MA, 1995.

[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning.
1606.01540#12
1606.01540#14
1606.01540
[ "1602.01783" ]
1606.01540#14
OpenAI Gym
Nature, 518(7540):529–533, 2015.

[3] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.

[4] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

[5] M. G. Bellemare, Y. Naddaf, J. Veness, and M.
1606.01540#13
1606.01540#15
1606.01540
[ "1602.01783" ]
1606.01540#15
OpenAI Gym
Bowling. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res., 47:253–279, 2013.

[6] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.

[7] A. Geramifard, C. Dann, R. H. Klein, W. Dabney, and J. P. How. RLPy: A value-function-based reinforcement learning framework for education and research.
1606.01540#14
1606.01540#16
1606.01540
[ "1602.01783" ]
1606.01540#16
OpenAI Gym
J. Mach. Learn. Res., 16:1573–1578, 2015.

[8] B. Tanner and A. White. RL-Glue: Language-independent software for reinforcement-learning experiments. J. Mach. Learn. Res., 10:2133–2136, 2009.

[9] T. Schaul, J. Bayer, D. Wierstra, Y. Sun, M. Felder, F. Sehnke, T. Rückstieß, and J. Schmidhuber. PyBrain. J. Mach. Learn. Res., 11:743–746, 2010.

[10] S. Abeyruwan.
1606.01540#15
1606.01540#17
1606.01540
[ "1602.01783" ]