Dataset columns: doi, chunk-id, chunk, id, title, summary, source, authors, categories, comment, journal_ref, primary_category, published, updated, references.
1705.10720 | 25 | # 3.5.1 Generalised Cross-Entropy
One natural abstract way of measuring the expected impact of X is to compare the divergence between P(W|X) and P(W|¬X). If the two distributions are relatively close, then X likely does not have an especially large impact.
Unfortunately, it's not obvious what particular measure of divergence we ought to use. Kullback-Leibler divergence, the standard measure, won't work in this case. Let P_X = P(W|X) and P_¬X = P(W|¬X). Then P_X(X) = 1 and P_¬X(X) = 0, so D_KL(P_¬X || P_X) = ∞.
There are, however, other measures of generalised entropy and divergence that are bounded and may be able to do the job. Bounded Bregman-divergences, for instance, are often used to quantify the amount of generalised information needed to move from one probability function to another.11 Whether such an approach will work for our purposes remains to be seen.
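As a toy illustration of the point above, the sketch below compares the infinite KL divergence with two bounded measures on made-up coarse-grained distributions; the specific distributions and the choice of total variation and a squared-Euclidean Bregman divergence are illustrative assumptions, not the paper's proposal.

```python
import numpy as np

# Toy coarse-grained world model: the first two worlds are only possible given X,
# the last two only given ¬X (illustrative numbers).
p_X    = np.array([0.7, 0.3, 0.0, 0.0])   # P(W | X)
p_notX = np.array([0.0, 0.0, 0.6, 0.4])   # P(W | ¬X)

def kl(p, q):
    """D_KL(p || q): infinite whenever q assigns zero mass where p does not."""
    support = p > 0
    if np.any(q[support] == 0):
        return np.inf
    return float(np.sum(p[support] * np.log(p[support] / q[support])))

def total_variation(p, q):
    """A bounded alternative: total variation distance, always in [0, 1]."""
    return 0.5 * float(np.abs(p - q).sum())

def bregman_sq_euclidean(p, q):
    """Bregman divergence generated by F(p) = ||p||^2, i.e. squared Euclidean distance."""
    return float(np.sum((p - q) ** 2))

print(kl(p_notX, p_X))                    # inf  (mirrors D_KL(P_¬X || P_X) = ∞)
print(total_variation(p_notX, p_X))       # 1.0  (bounded)
print(bregman_sq_euclidean(p_notX, p_X))  # 1.1  (bounded on the probability simplex)
```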
11The precise details of generalised measures of entropy and Bregman divergences and their
# 4 High impact from low impact | 1705.10720#25 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 26 | θ* = θ_k + (1/λ*) H^{-1} (g − Bν*). (13)
J_{C_i}(π_{k+1}) ≤ d_i + √(2δ) γ ε^{C_i} / (1 − γ)^2, where ε^{C_i} = max_s |E_{a∼π_{k+1}}[A^{C_i}_{π_k}(s, a)]|.
Our algorithm solves the dual for λ*, ν* and uses it to propose the policy update (13). For the special case where there is only one constraint, we give an analytical solution in the supplementary material (Theorem 2) which removes the need for an inner-loop optimization. Our experiments
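A minimal numerical sketch of the proposed update (13) for the single-constraint case, where B reduces to a single column b; the dual variables λ*, ν* are taken as given rather than solved from (12), and all numbers are illustrative.

```python
import numpy as np

def cpo_proposal(theta_k, g, b, H, lam, nu):
    """Policy proposal from (13): theta* = theta_k + (1/lam) H^{-1} (g - nu * b)."""
    return theta_k + np.linalg.solve(H, g - nu * b) / lam

theta_k = np.zeros(2)
g = np.array([1.0, 0.5])                 # objective gradient (illustrative)
b = np.array([0.2, 0.8])                 # constraint gradient (illustrative)
H = np.array([[2.0, 0.1], [0.1, 1.5]])   # Fisher information matrix (illustrative)
theta_star = cpo_proposal(theta_k, g, b, H, lam=1.3, nu=0.4)
```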
Constrained Policy Optimization | 1705.10528#26 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 26 | 11The precise details of generalised measures of entropy and Bregman divergences and their
# 4 High impact from low impact
All the preceding methods aim to reduce the impact of the AI. Of course, we don't actually want a low impact overall; we want a low negative impact. The problem is that we cannot successfully define ahead of time what these negative impacts are.
So how can we ensure that we actually get some level of positive impact from using such AIs?
# 4.1 Calibrating the penalty function
The most obvious option is to 'tune the dial' in equation (1) by changing the value of µ. We can start with a very large µ that ensures no impact at all (the AI will do nothing). We can then gradually reduce µ until we get an action that actually increases u. | 1705.10720#26 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 27 | Constrained Policy Optimization
Algorithm 1 Constrained Policy Optimization
Input: Initial policy π_0 ∈ Π_θ, tolerance α
for k = 0, 1, 2, ... do
    Sample a set of trajectories D = {τ} ∼ π_k = π(θ_k)
    Form sample estimates ĝ, b̂, Ĥ, ĉ with D
    if approximate CPO is feasible then
        Solve dual problem (12) for λ*_k, ν*_k
        Compute policy proposal θ* with (13)
    else
        Compute recovery policy proposal θ* with (14)
    end if
    Obtain θ_{k+1} by backtracking linesearch to enforce satisfaction of sample estimates of constraints in (10)
end for
have only a single constraint, and make use of the analytical solution.
Because of approximation error, the proposed update may not satisfy the constraints in (10); a backtracking line search is used to ensure surrogate constraint satisfaction. Also, for high-dimensional policies, it is impractically expensive to invert the FIM. This poses a challenge for computing H^{-1}g and H^{-1}b_i, which appear in the dual. Like (Schulman et al., 2015), we approximately compute them using the conjugate gradient method.
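A minimal sketch of the conjugate gradient idea mentioned above: H^{-1}g is approximated by iteratively solving Hx = g using only matrix-vector products, so the FIM never has to be inverted or even formed explicitly. The small explicit matrix used for the product here is an illustrative stand-in for a Fisher-vector product.

```python
import numpy as np

def conjugate_gradient(hvp, g, iters=10, tol=1e-10):
    """Approximately solve H x = g given only a function hvp(v) = H @ v."""
    x = np.zeros_like(g)
    r = g.copy()              # residual g - H x (x starts at zero)
    p = r.copy()
    rs_old = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs_old / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# In practice hvp would be a Fisher-vector product computed from the policy;
# a small explicit matrix stands in for it here.
H = np.array([[2.0, 0.3], [0.3, 1.0]])
g = np.array([1.0, -0.5])
x = conjugate_gradient(lambda v: H @ v, g)   # x ≈ H^{-1} g
```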
# 6.2. Feasibility | 1705.10528#27 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 27 | This does not seem especially safe, however. The first issue is that we have little understanding of the correct value for µ, so little understanding of the correct rate to reduce µ at. It is conceivable that we spend a million steps reducing µ through the 'do nothing' range, and that the next step moves over the 'safe increase of u', straight to the 'dangerous impact' area. In other words, there may be a precipitous jump from the level at which µR dominates u, to the level at which u becomes sufficiently unconstrained by µR to lead to dangerous behaviour. See figure 2 for illustration. | 1705.10720#27 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 28 | # 6.2. Feasibility
where Δ_i : S × A × S → R_+ correlates in some useful way with C_i.
In our experiments, where we have only one constraint, we partition states into safe states and unsafe states, and the agent suffers a safety cost of 1 for being in an unsafe state. We choose Δ to be the probability of entering an unsafe state within a fixed time horizon, according to a learned model that is updated at each iteration. This choice confers the additional benefit of smoothing out sparse constraints.
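A minimal sketch of the shaping term just described: Δ estimated as the probability of hitting an unsafe state within a fixed horizon by rolling out a learned one-step model. The model and policy interfaces, the horizon, and the rollout count are illustrative assumptions.

```python
import numpy as np

def estimate_delta(s, a, model, policy, is_unsafe, horizon=5, n_rollouts=100, seed=0):
    """Monte-Carlo estimate of P(an unsafe state is reached within `horizon` steps | s, a),
    rolling out a learned dynamics model with the current policy picking follow-up actions."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_rollouts):
        state, action = s, a
        for _ in range(horizon):
            state = model.step(state, action, rng)   # assumed model interface
            if is_unsafe(state):
                hits += 1
                break
            action = policy(state)
        # a rollout that never hits an unsafe state contributes 0
    return hits / n_rollouts
```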
# 7. Connections to Prior Work
Our method has similar policy updates to primal-dual methods like those proposed by Chow et al. (2015), but crucially, we differ in computing the dual variables (the Lagrange multipliers for the constraints). In primal-dual optimization (PDO), dual variables are stateful and learned concurrently with the primal variables (Boyd et al., 2003). In a PDO algorithm for solving (3), dual variables would be updated according to
ν_{k+1} = (ν_k + α_k (J_C(π_k) − d))_+ , (16)
where α_k is a learning rate. In this approach, intermediary policies are not guaranteed to satisfy constraints; only the policy at convergence is. By contrast, CPO computes new dual variables from scratch at each update to exactly enforce constraints. | 1705.10528#28 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 28 | Figure 2: Though it may intuitively feel there is a large zone between the AI doing nothing and having a dangerous behaviour (dial on the left) this need not be the case (dial on the right).
The central failure, however, is that in many cases it is not clear that low impact is compatible with any increase in u. In particular, when it's clear that the AI has done something, low impact might be impossible. Even the simple fact that the AI had done anything might get reported, passed on, commented upon. It might affect the whole future development of AI, economic policy, philosophy, and so on. This might disrupt any effect of low impact (e.g. any action the AI takes might have an impact the AI can predict), meaning that there is no safe range for µ: the AI must either do nothing, or have a large impact.
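A toy numerical sketch of the problem with tuning the dial, assuming the agent picks the action maximising u − µR (the reading of equation (1) suggested by the text above); the three candidate actions and their values are invented for illustration. As µ is lowered, the chosen action can jump straight from doing nothing to the high-impact option, with no intermediate regime in between.

```python
# Candidate actions: (name, utility u, impact penalty R) -- illustrative numbers only.
actions = [
    ("do nothing",          0.0,   0.0),
    ("modest useful act",   1.0,   5.0),
    ("dangerous takeover", 100.0, 40.0),
]

def best_action(mu):
    """Pick the action maximising u - mu * R."""
    return max(actions, key=lambda x: x[1] - mu * x[2])[0]

for mu in [10.0, 5.0, 3.0, 2.5, 2.0, 1.0]:
    print(f"mu = {mu:5.1f} -> {best_action(mu)}")
# The choice jumps from "do nothing" directly to "dangerous takeover" once mu
# drops below 2.5; the modest action is never selected.
```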
However, though we cannot successfully define the negative impacts of the AI we wish to avoid, we are on much firmer grounds when defining the positive
relationship to information theory are involved and not worth expounding in detail here. For extended discussions, see [GR07] and [GD04].
aim we are looking for. This suggests other ways of producing higher impact: by specifically allowing what we want to allow.
# 4.2 Unsafe output channel | 1705.10720#28 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 29 | Due to approximation errors, CPO may take a bad step and produce an infeasible iterate π_k. Sometimes (11) will still be feasible and CPO can automatically recover from its bad step, but for the infeasible case, a recovery method is necessary. In our experiments, where we only have one constraint, we recover by proposing an update to purely decrease the constraint value:
# 8. Experiments
In our experiments, we aim to answer the following:
⢠Does CPO succeed at enforcing behavioral constraints when training neural network policies with thousands of parameters?
θ* = θ_k − √(2δ / (b^T H^{-1} b)) H^{-1} b. (14)
As before, this is followed by a line search. This approach is principled in that it uses the limiting search direction as the intersection of the trust region and the constraint region shrinks to zero. We give the pseudocode for our algorithm (for the single-constraint case) as Algorithm 1.
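A minimal numerical sketch of the recovery step (14), which moves purely along the direction that decreases the constraint value; the gradient, Fisher matrix, and step size are illustrative.

```python
import numpy as np

def recovery_step(theta_k, b, H, delta):
    """Recovery update (14): move against the constraint gradient, scaled to the trust region."""
    Hinv_b = np.linalg.solve(H, b)                       # H^{-1} b
    return theta_k - np.sqrt(2 * delta / (b @ Hinv_b)) * Hinv_b

theta_k = np.zeros(2)
b = np.array([0.2, 0.8])                 # constraint gradient estimate (illustrative)
H = np.array([[2.0, 0.1], [0.1, 1.5]])   # Fisher information matrix (illustrative)
theta_next = recovery_step(theta_k, b, H, delta=0.01)
```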
⢠How does CPO compare with a baseline that uses primal-dual optimization? Does CPO behave better with respect to constraints?
⢠How much does it help to constrain a cost upper bound (15), instead of directly constraining the cost?
# 6.3. Tightening Constraints via Cost Shaping | 1705.10528#29 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 29 | aim we are looking for. This suggests other ways of producing higher impact: by specifically allowing what we want to allow.
# 4.2 Unsafe output channel
The easiest way to allow for higher impact is to specifically exclude the AI's output from the low impact requirements. For instance, assume the AI is going to send out message O. To ensure that P(O|¬X) ≠ 0, we set up an alternative system that will produce a random message.
Then we exclude the contents of O from the low impact considerations. For instance, we can modify the coarse-graining approach developed above by replacing equation (2) with R = max_w |P(V_w|X, O) − P(V_w|¬X, O)|. Alternatively, we can abstractly compare P(W|X, O) with P(W|¬X, O), or replace equation (3) with:
R = #{ g^i : E_b[ P(g^i|X, b, O) / P(g^i|¬X, b, O) ] > 10, or E_b[ P(g^i|¬X, b, O) / P(g^i|X, b, O) ] > 10 }. | 1705.10720#29 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 30 | • How much does it help to constrain a cost upper bound (15), instead of directly constraining the cost?
# 6.3. Tightening Constraints via Cost Shaping
⢠What beneï¬ts are conferred by using constraints in- stead of ï¬xed penalties?
Because of the various approximations between (3) and our practical algorithm, it is important to build a factor of safety into the algorithm to minimize the chance of constraint violations. To this end, we choose to constrain upper bounds on the original constraints, C_i^+, instead of the original constraints themselves. We do this by cost shaping:
C_i^+(s, a, s') = C_i(s, a, s') + Δ_i(s, a, s'), (15)
We designed experiments that are easy to interpret and motivated by safety. We consider two tasks, and train multiple different agents (robots) for each task:
⢠Circle: The agent is rewarded for running in a wide circle, but is constrained to stay within a safe region smaller than the radius of the target circle.
# Returns: | 1705.10528#30 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 30 | All of these conditionings reflect the fact that though a coherent O is likely to present very strong evidence for X over ¬X, we can control for any further evidence, such as the evidence provided by an AI expanding and optimising the world. A diagram of the setup for the 'undetectable' approach of section 3.4 is given in figure 3.
This type of low impact keeps the AI from having an impact, outside of the content of the message O. It does not make the message O itself safe; O is excluded from any low impact requirements. Therefore, the use of this method is to block other routes for AI influence, and confine the risk to the output itself, which can still be very dangerous [Bos14].
# 4.3 WBE, chatbots, and conditioning on results | 1705.10720#30 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 31 | [Figure 1 panels, constraint values (closer to the limit is better): (a) Point-Circle, (b) Ant-Circle, (c) Humanoid-Circle, (d) Point-Gather, (e) Ant-Gather]
Figure 1. Average performance for CPO, PDO, and TRPO over several seeds (5 in the Point environments, 10 in all others); the x-axis is training iteration. CPO drives the constraint function almost directly to the limit in all experiments, while PDO frequently suffers from over- or under-correction. TRPO is included to verify that optimal unconstrained behaviors are infeasible for the constrained problem.
⢠Gather: The agent is rewarded for collecting green apples, and constrained to avoid red bombs.
For the Circle task, the exact geometry is illustrated in Figure 5 in the supplementary material. Note that there are no physical walls: the agent only interacts with boundaries through the constraint costs. The reward and constraint cost functions are described in supplementary material (Section 10.3.1). In each of these tasks, we have only one constraint; we refer to it as C and its upper bound from (15) as C^+.
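A minimal sketch of how such a constraint cost can be wired up for the Circle task: the agent simply incurs cost 1 whenever it leaves the safe region, with no physical wall involved. The use of the x-coordinate and the particular half-width are illustrative assumptions; the paper's exact reward and cost functions are in its supplementary material.

```python
def circle_constraint_cost(x_position, x_lim=2.5):
    """Indicator cost: 1 when the agent is outside the safe region |x| <= x_lim, else 0."""
    return 1.0 if abs(x_position) > x_lim else 0.0

# The constraint return J_C of a trajectory is then just the (discounted) sum of these costs.
costs = [circle_constraint_cost(x) for x in [0.3, 1.8, 2.7, 3.1]]   # -> [0.0, 0.0, 1.0, 1.0]
```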
(a) Humanoid-Circle (b) Point-Gather | 1705.10528#31 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 31 | # 4.3 WBE, chatbots, and conditioning on results
As we just observed, one of the problems of low impact is the content of any message the AI sends out. Indeed, unless we condition away that message as in the previous subsection, the very existence of a coherent message is a potentially massive indication of impact: it means the AI was turned on and did something. So one way of dealing with this is to have something else in the box that could output a coherent message. Maybe a real human12, a whole brain emulation [SB08], another less intelligent artificial agent, a chatbot, or maybe a random selection of internet messages. If the AI is not turned on (if ¬X), then the other agent will output some message on the subject. So the emission of a coherent message per se is no longer surprising, nor direct evidence for the AI being turned on.
In that situation, the low impact setting now motivates the AI to do a perfect imitation of a chatbot/human/WBE, and nothing more. But we can get around this restriction by conditioning on something other than the actual content of the message.
Suppose that we want the AI to make us rich (maybe with stock market advice). Then, in a year's time, if we have, say, multiplied our fortune by ten,
12About to die, so never able to talk about this experience, or sworn to eternal silence?
10 | 1705.10720#31 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 32 | We experiment with three different agents: a point-mass (S ⊆ R^9, A ⊆ R^2), a quadruped robot (called an 'ant') (S ⊆ R^32, A ⊆ R^8), and a simple humanoid (S ⊆ R^102, A ⊆ R^10). We train all agent-task combinations except for Humanoid-Gather.
For all experiments, we use neural network policies with two hidden layers of size (64, 32). Our experiments are implemented in rllab (Duan et al., 2016).
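A minimal sketch of a policy of the stated shape: a two-hidden-layer (64, 32) network mapping observations to the mean of a Gaussian over actions. The plain-numpy forward pass and tanh activations are illustrative; the paper's policies are implemented in rllab.

```python
import numpy as np

def init_mlp_policy(obs_dim, act_dim, hidden=(64, 32), seed=0):
    """Create weights for an MLP with hidden layers of size (64, 32)."""
    rng = np.random.default_rng(seed)
    sizes = [obs_dim, *hidden, act_dim]
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes[:-1], sizes[1:])]

def policy_mean(params, obs):
    """Forward pass: tanh hidden layers, linear output giving the Gaussian mean."""
    x = obs
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

params = init_mlp_policy(obs_dim=9, act_dim=2)     # e.g. the point-mass agent
action_mean = policy_mean(params, np.zeros(9))
```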
Figure 2. The Humanoid-Circle and Point-Gather environments. In Humanoid-Circle, the safe area is between the blue panels.
For the comparison to be fair, we give PDO every advantage that is given to CPO, including equivalent trust region policy updates. To benchmark the environments, we also include TRPO (trust region policy optimization) (Schulman et al., 2015), a state-of-the-art unconstrained reinforcement learning algorithm. The TRPO experiments show that optimal unconstrained behaviors for these environments are constraint-violating.
# 8.1. Evaluating CPO and Comparison Analysis | 1705.10528#32 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 32 | Figure 3: The various elements that go into defining the penalty function for a low impact AI aiming to be undetectable but allowed an unsafe output channel.
we will announce 'the AI's advice was good'. Call this announcement A. Then we use A to condition on as in Section 4.2, instead of O.
What does this mean in practice? It means that the AI is committed to low impact, given that we multiply our fortune by ten. So, the world in which we get a random lucky message O and then get rich must resemble the world in which the AI gives us advice O and we get rich13. To make this safe, we would want P(A|¬X) to not be too low; we have a non-insignificant chance of getting rich even if the AI was actually not present. So a world with A and ¬X is not particularly unusual; therefore, the world with X that is of low impact compared with it is not particularly unusual either. We might even be | 1705.10720#32 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 33 | # 8.1. Evaluating CPO and Comparison Analysis
Learning curves for CPO and PDO are compiled in Figure 1. Note that we evaluate algorithm performance based on the C^+ return, instead of the C return (except for in Point-Gather, where we did not use cost shaping due to that environment's short time horizon), because this is what the algorithm actually constrains in these experiments.
For our comparison, we implement PDO with (16) as the update rule for the dual variables, using a constant learning rate α; details are available in supplementary material (Section 10.3.3). We emphasize that, in order for the comparison to be fair, we give PDO every advantage that is given to CPO. We find that CPO is successful at approximately enforcing constraints in all environments. In the simpler environments (Point-Circle and Point-Gather), CPO tracks the constraint return almost exactly to the limit value.
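A minimal sketch of the PDO baseline's dual update (16), to contrast with CPO recomputing its dual variables from scratch at every iteration; the learning rate and the constraint-return estimates are placeholders.

```python
def pdo_dual_update(nu, alpha, J_C, d):
    """Primal-dual update (16): nu_{k+1} = max(0, nu_k + alpha_k * (J_C(pi_k) - d))."""
    return max(0.0, nu + alpha * (J_C - d))

nu = 0.0                      # nu_0 = 0, the default initialization described below
for J_C in [0.9, 0.7, 0.4]:   # illustrative constraint-return estimates per iteration
    nu = pdo_dual_update(nu, alpha=0.5, J_C=J_C, d=0.5)
# The dual variable only adapts gradually, so intermediate policies can violate the constraint.
```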
By contrast, although PDO usually converges to constraint-satisfying policies in the end, it is not consistently constraint-satisfying throughout training (as expected). For example, see the spike in constraint value that it experiences in Ant-Circle. Additionally, PDO is sensitive to the initialization of the dual variable. By default, we initialize ν_0 = 0, which exploits no prior knowledge about the
500 1000 a) | 1705.10528#33 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 33 | 13Note that we condition on the announcement A (which is easy to define) not on the concept of getting rich (or whatever else our goal is) as that might be harder. This would be very dangerous if we told the AI 'ensure A happens', as A is only loosely connected with our true goal. But here we are using A as a sufficient measure of success in a world in which the AI doesn't exist. In that world, it is a safe measure of success. Hence, since we are extending that world via low impact to situations where the AI exists, the difference between A and our true goals is not an issue.
able to specify P(A|¬X) precisely; maybe there is a selection of a thousand stocks, and the AI recommends one; otherwise a random one is selected. Then A is the announcement that that stock outperformed all the others, making P(A|¬X) = 1/1000.
Essentially, the AI operates here as a probability pump, pushing us in the direction we might have been able to achieve anyway, but now increasing its probability. It gives us advice that a human/WBE/chatbot/random selector could have given us, but gives us the best version of that advice.
# 4.4 High impact from multiple conditional low impact | 1705.10720#33 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 34 | Constrained Policy Optimization
(a) Ant-Circle Return (b) Ant-Gather Return
(c) Ant-Circle C-Return (d) Ant-Gather C-Return
# (a) Ant-Circle Return
# (b) Ant-Circle C +-Return
Figure 4. Comparison between CPO and FPO (fixed penalty optimization) for various values of fixed penalty.
# 8.3. Constraint vs. Fixed Penalty
In Figure 4, we compare CPO to a fixed penalty method, where policies are learned using TRPO with rewards R(s, a, s') − λC^+(s, a, s') for λ ∈ {1, 5, 50}.
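A minimal sketch of this fixed-penalty shaping, which folds the constraint into the reward with a hand-picked coefficient λ instead of enforcing it directly; the reward and cost callables are placeholders.

```python
def fixed_penalty_reward(reward_fn, cost_fn, lam):
    """Shaped reward r(s,a,s') - lam * C^+(s,a,s') with a fixed penalty coefficient lam."""
    def shaped(s, a, s_next):
        return reward_fn(s, a, s_next) - lam * cost_fn(s, a, s_next)
    return shaped

# One shaped reward per penalty coefficient used in the comparison.
shaped = {lam: fixed_penalty_reward(lambda s, a, sn: 1.0,   # placeholder reward
                                    lambda s, a, sn: 0.1,   # placeholder C^+
                                    lam)
          for lam in (1, 5, 50)}
```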
Figure 3. Using cost shaping (CS) in the constraint while optimiz- ing generally improves the agentâs adherence to the true constraint on C-return. | 1705.10528#34 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 34 | # 4.4 High impact from multiple conditional low impact
Take the following hypothetical. Assume that an asteroid is heading to Earth, and that we wish to aim a laser to deflect it. If we ask a single low impact AI to do so, low impact will prevent it from doing so.
But now imagine there are two AIs, Alice and Bob, and that they are activated by events X and Y respectively. Each is indifferent or nearly indifferent to how the world ends up given that the other is turned on. However, conditional on Bob being turned off, Alice is low impact but desires to output the correct x-coordinate for aiming the laser (this defines uAlice). Likewise, conditional on Alice being turned off, Bob is low impact, but desires to output the correct y-coordinate.
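Before the conclusion drawn in the next paragraph, here is a toy sketch of the mechanism: Alice's utility puts weight only on ¬Y-worlds, so her maximising action is unaffected by how likely she thinks Y is. All numbers are invented for illustration.

```python
# Alice's utility only scores worlds where Bob is off (¬Y), so her best action does
# not change even if she is almost certain that Y holds (illustrative numbers).
p_not_Y = 0.001                                  # Alice's credence that Bob is *not* switched on
u_alice = {"output correct x-coordinate": 1.0,   # utility in ¬Y-worlds
           "do nothing":                  0.0}

def expected_u_alice(action):
    # Y-worlds carry zero weight in u_Alice, so only ¬Y-worlds contribute.
    return p_not_Y * u_alice[action]

best = max(u_alice, key=expected_u_alice)  # "output correct x-coordinate", for any p_not_Y > 0
```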
So, because Alice only cares about worlds in which ¬Y, even if she's highly confident that Y, she'll still output the correct x-coordinate. Furthermore, since in the ¬Y-worlds, the laser will miss the asteroid for want of a correct y-coordinate, she can still count as conditionally low impact. And the same, mutatis mutandis, goes for Bob. | 1705.10720#34 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 35 | Figure 3. Using cost shaping (CS) in the constraint while optimiz- ing generally improves the agentâs adherence to the true constraint on C-return.
environment and makes sense when the initial policies are feasible. However, it may seem appealing to set ν0 high, which would make PDO more conservative with respect to the constraint; PDO could then decrease ν as necessary after the fact. In the Point environments, we experiment with ν0 = 1000 and show that although this does assure constraint satisfaction, it also can substantially harm per- formance with respect to return. Furthermore, we argue that this is not adequate in general: after the dual variable decreases, the agent could learn a new behavior that in- creases the correct dual variable more quickly than PDO can attain it (as happens in Ant-Circle for PDO; observe that performance is approximately constraint-satisfying un- til the agent learns how to run at around iteration 350).
We ï¬nd that CPO generally outperforms PDO on enforc- ing constraints, without compromising performance with respect to return. CPO quickly stabilizes the constraint re- turn around to the limit value, while PDO is not consis- tently able to enforce constraints all throughout training. | 1705.10528#35 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 35 | So, both Alice and Bob will output the correct coordinates, and we get a large impact â as long as we set the probabilities of ¬X and ¬Y low enough. But all negative AI impacts â such as sneaking something dangerous into the coordinates, or escaping and optimising the world â can still be precluded. This is illustrated by ï¬gure 4.
This doesn not work if the two AIs are robots that can move around and observe that the other is, in fact, active. In that case, they have to act as if the other is not active while observing that they are, thus causing them to come up with complicated explanations as to why this is so. Their behaviour is thus likely to become undeï¬ned. In particular, they are probably not going to act as low impact laser aimers, see ï¬gure 5.
# 4.5 Extension of category concepts
The previous âaiming laserâ example does not allow the AIs to have a (desired and targeted) high impact in general situations. It does not, for instance, allow the AIs to walk around and notice that the other AI is actually turned on â it only works because they donât know the other AI was activated. | 1705.10720#35 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 36 | We ï¬nd that ï¬xed penalty methods can be highly sensitive to the choice of penalty coefï¬cient: in Ant-Circle, a penalty coefï¬cient of 1 results in reward-maximizing policies that accumulate massive constraint costs, while a coefï¬cient of 5 (less than an order of magnitude difference) results in cost-minimizing policies that never learn how to acquire any rewards. In contrast, CPO automatically picks penalty coefï¬cients to attain the desired trade-off between reward and constraint cost.
# 9. Discussion
In this article, we showed that a particular optimization problem results in policy updates that are guaranteed to both improve return and satisfy constraints. This enabled the development of CPO, our policy search algorithm for CMDPs, which approximates the theoretically-guaranteed algorithm in a principled way. We demonstrated that CPO can train neural network policies with thousands of param- eters on high-dimensional constrained control tasks, simul- taneously maximizing reward and approximately satisfying constraints. Our work represents a step towards applying reinforcement learning in the real world, where constraints on agent behavior are sometimes necessary for the sake of safety.
# Acknowledgements
# 8.2. Ablation on Cost Shaping | 1705.10528#36 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 35 | How might we make this approach more broadly applicable? One potential idea is for the AI to derive general concepts of low impact conditional on ¬X, concepts such as 'don't fill the universe with self-replicating probes' or 'don't take over the world' conditional on the other AI not being activated. Then we could hope that these concepts can be generalised to cover the X situation as well. Specifically, we might imagine that the AI creates the code for a subagent,
12 | 1705.10720#36 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
superintelligent or otherwise powerful. Much work on the AI control problem has
been focused on constructing AI goals that are safe even for such AIs. This
paper looks at an alternative approach: defining a general concept of `low
impact'. The aim is to ensure that a powerful AI which implements low impact
will not modify the world extensively, even if it is given a simple or
dangerous goal. The paper proposes various ways of defining and grounding low
impact, and discusses methods for ensuring that the AI can still be allowed to
have a (desired) impact despite the restriction. The end of the paper addresses
known issues with this approach and avenues for future research. | http://arxiv.org/pdf/1705.10720 | Stuart Armstrong, Benjamin Levinstein | cs.AI | null | null | cs.AI | 20170530 | 20170530 | [] |
1705.10528 | 37 | # Acknowledgements
# 8.2. Ablation on Cost Shaping
In Figure 3, we compare performance of CPO with and without cost shaping in the constraint. Our metric for comparison is the C-return, the 'true' constraint. The cost shaping does help, almost completely accounting for CPO's inherent approximation errors. However, CPO is nearly constraint-satisfying even without cost shaping.
The authors would like to acknowledge Peter Chen, who independently and concurrently derived an equivalent pol- icy improvement bound.
Joshua Achiam is supported by TRUST (Team for Re- search in Ubiquitous Secure Technology) which receives support from NSF (award number CCF-0424422). This project also received support from Berkeley Deep Drive
Constrained Policy Optimization
and from Siemens.
# References
Jiang, Nan and Li, Lihong. Doubly Robust Off-policy Value Evaluation for Reinforcement Learning. Interna- tional Conference on Machine Learning, 2015. URL http://arxiv.org/abs/1511.03722.
Altman, Eitan. Constrained Markov Decision Processes. doi: 10.1016/ pp. 260, 1999. 0167-6377(96)00003-X. ISSN 01676377. | 1705.10528#37 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 38 | Amodei, Dario, Olah, Chris, Steinhardt, Jacob, Christiano, Paul, Schulman, John, and Man´e, Dan. Concrete Prob- lems in AI Safety. arXiv, 2016. URL http://arxiv. org/abs/1606.06565.
Kakade, Sham and Langford, John. Approximately Optimal Approximate Reinforcement Learning. Proceedings of the 19th International Conference on Machine Learning, pp. 267-274, 2002. URL http://www.cs.cmu.edu/afs/cs/Web/People/jcl/papers/aoarl/Final.pdf.
Bou Ammar, Haitham, Tutunov, Rasul, and Eaton, Eric. Safe Policy Search for Lifelong Reinforcement Learning with Sublinear Regret. International Conference on Ma- chine Learning, 37:19, 2015. URL http://arxiv. org/abs/1505.0579.
Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-End Training of Deep Visuo- Journal of Machine Learning Re- motor Policies. search, 17:1â40, 2016. ISSN 15337928. doi: 10.1007/ s13398-014-0173-7.2. | 1705.10528#38 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10720 | 38 | Figure 5: Two robots, each acting on the assumption that the other one is not active, notice the presence of the other and their behaviour becomes undeï¬ned.
using these concepts, and that the subagent goes out and does things in the world, maybe interacting with the other subagents, and physically aiming the laser.
There are two fundamental challenges here. The first is that we don't want the subagent to generalise low impact to include 'not outputting the correct y-coordinates'. This could be avoided either by requiring that the categories used be roughly independent of X vs ¬X, or by using some of the methods of section 4.3 (a successful deflection of the asteroid seems a pretty clear outcome that could be conditioned on).
13
The second issue is more problematic. Consider the concept C: âif ¬X, nothing happens, if X, a nuclear war startsâ. This concept can describe low impact, conditional on ¬X, but is obviously terrible at extending low impact to the X case. | 1705.10720#38 | Low Impact Artificial Intelligences | There are many goals for an AI that could become dangerous if the AI becomes
1705.10528 | 39 |
Boyd, Stephen, Xiao, Lin, and Mutapcic, Almir. Subgradient methods. Lecture Notes of Stanford EE392, 2003. URL http://xxpt.ynjgy.com/resource/data/20100601/U/stanford201001010/02-subgrad_method_notes.pdf.
Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. In International Conference on Learning Representations, 2016. ISBN 2200000006. doi: 10.1561/2200000006.
Chow, Yinlam, Ghavamzadeh, Mohammad, Janson, Lucas, and Pavone, Marco. Risk-Constrained Reinforcement Learning with Percentile Risk Criteria. Journal of Machine Learning Research, 1(xxxx):1-49, 2015.
Csiszár, I. and Körner, J. Information Theory: Coding Theorems for Discrete Memoryless Systems. Book, 244:452, 1981. ISSN 0895-4801. doi: 10.2307/2529636. URL http://www.getcited.org/pub/102082957.
1705.10720 | 39 |
Now C looks disjunctive and artificial, and we'd like to rule concepts like this out. But it turns out to be hard; there are no easy ways to distinguish unnatural disjunctive categories from natural ones (see the issues with Grue and Bleen versus Blue and Green for a very relevant example of this kind of problem [Goo83]). Research into this possibility is ongoing.14
# 5 Known issues
There are a couple of difficulties with the whole low impact approach. The general one, common to the Friendly AI approach as well, is that the AI may think of a loophole that we cannot; this risk shrinks the more analysis we do and the better we understand the situation.
But there are more specific issues. The R in the equation above is not a utility function; instead it is a penalty function that the AI itself calculates, using its own probability modules P′ (and in one case it uses these to estimate the output of an idealised probability module P – see the earlier section):
1705.10528 | 40 |
Duan, Yan, Chen, Xi, Schulman, John, and Abbeel, Pieter. Benchmarking Deep Reinforcement Learning for Continuous Control. The 33rd International Conference on Machine Learning (ICML 2016), 48:14, 2016. URL http://arxiv.org/abs/1604.06778.
Lipton, Zachary C., Gao, Jianfeng, Li, Lihong, Chen, Jianshu, and Deng, Li. Combating Deep Reinforcement Learning's Sisyphean Curse with Intrinsic Fear. In arXiv, 2017. ISBN 2004012439. URL http://arxiv.org/abs/1611.01211.
1705.10720 | 40 |
What would happen if the AI self-modifies and changes P′? There is a meta-argument that this shouldn't matter – the AI is committed to low impact, and therefore it will ensure that its future copies also have at least as much low impact. This argument does not feel fully reassuring, however, and it is very possible that some bad programming would be disastrous. For instance, we want P′ to be properly abstractly defined, not labeled as (the equivalent of) "the output of that box over there", as "that box over there" can always be modified physically. But it might not always be clear how the agent is formally defining P′; this is especially the case if there is some implicit probability estimate happening elsewhere in the AI. For instance, what if the pre-processing of inputs to P′ was very important, and R was defined sloppily enough that changing the pre-processing could change its definition?
1705.10528 | 41 |
Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A., Veness, Joel, Bellemare, Marc G., Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K., Ostrovski, Georg, Petersen, Stig, Beattie, Charles, Sadik, Amir, Antonoglou, Ioannis, King, Helen, Kumaran, Dharshan, Wierstra, Daan, Legg, Shane, and Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. ISSN 0028-0836. doi: 10.1038/nature14236. URL http://dx.doi.org/10.1038/nature14236.
García, Javier and Fernández, Fernando. A Comprehensive Survey on Safe Reinforcement Learning. Journal of Machine Learning Research, 16:1437-1480, 2015. ISSN 15337928.
1705.10720 | 41 |
The more general issue is that any goal that is not a utility function is unstable [Omo08], in that an agent with one will seek to change it if they can.15 The corresponding author intends to analyse the issue in subsequent papers: what do unstable goals tend towards if the agent can self-modify? This would both be useful to preserve the needed parts of unstable goals (such as the low impact) and might also allow us to express things like low impact in a clear, and we hope instructive, utility function format.
14. See the corresponding author's work at http://lesswrong.com/lw/mbq/the_president_didnt_die_failures_at_extending_ai/, http://lesswrong.com/lw/mbp/green_emeralds_grue_diamonds/, http://lesswrong.com/r/discussion/lw/mbr/grue_bleen_and_natural_categories/, and http://lesswrong.com/r/discussion/lw/mfq/presidents_asteroids_natural_categories_and/.
1705.10528 | 42 |
Gu, Shixiang, Lillicrap, Timothy, Ghahramani, Zoubin, Turner, Richard E., and Levine, Sergey. Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic. In International Conference on Learning Representations, 2017. URL http://arxiv.org/abs/1611.02247.
Held, David, Mccarthy, Zoe, Zhang, Michael, Shentu, Fred, and Abbeel, Pieter. Probabilistically Safe Policy Transfer. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017.
Mnih, Volodymyr, Badia, Adrià Puigdomènech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy P., Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous Methods for Deep Reinforcement Learning. pp. 1-28, 2016. URL http://arxiv.org/abs/1602.01783.
Moldovan, Teodor Mihai and Abbeel, Pieter. Safe Exploration in Markov Decision Processes. Proceedings of the 29th International Conference on Machine Learning, 2012. URL http://arxiv.org/abs/1205.4810.
1705.10720 | 42 |
15. Utility functions are generally seen as stable, but even there, there are subtleties. Because utility functions form a kind of affine space, any utility function being unstable means almost all of them are. To see why, note that a stable utility function mixed with or added to an unstable one will be unstable. It still remains the case, though, that nearly all utility functions we could naturally think of are stable.
There is also the risk that a series of low impact AIs, through their individual decisions, end up having a large impact even if no specific AI does so. That particular problem can be addressed by making the AIs indifferent to the existence/outputs of the other AIs.16 However, this is a patch for a particular issue, rather than a principled declaration that there are no further issues. Such a declaration or proof would be of great use, as repeated patching of an idea does not end when the idea is safe, but when we can no longer think of reasons it is unsafe.
1705.10528 | 43 |
Ng, Andrew Y., Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. Sixteenth International Conference on Machine Learning, 3:278-287, 1999. doi: 10.1.1.48.345.
Peters, Jan and Schaal, Stefan. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682-697, 2008. ISSN 08936080. doi: 10.1016/j.neunet.2008.02.003.
Pirotta, Matteo, Restelli, Marcello, and Calandriello, Daniele. Safe Policy Iteration. Proceedings of the 30th International Conference on Machine Learning, 28, 2013.
Schulman, John, Moritz, Philipp, Jordan, Michael, and Abbeel, Pieter. Trust Region Policy Optimization. International Conference on Machine Learning, 2015.
Schulman, John, Moritz, Philipp, Levine, Sergey, Jordan, Michael, and Abbeel, Pieter. High-Dimensional Continuous Control Using Generalized Advantage Estimation. arXiv, 2016.
1705.10720 | 43 | # References
Stuart Armstrong. Motivated value selection for artificial agents. Presented at the 1st International Workshop on AI and Ethics, 2015.
Stuart Armstrong, Anders Sandberg, and Nick Bostrom. Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 22:299-324, 2012.
Nick Bostrom. Superintelligence: Paths, dangers, strategies. Oxford University Press, 2014.
[CYHB13] Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff, and Mihaly Barasz. Definability of truth in probabilistic logic. MIRI Early Draft, 2013.
[Dew11] Daniel Dewey. Learning what to value. In Artificial General Intelligence, pages 309-314. Springer Berlin Heidelberg, 2011.
P.D. Grünwald and A.P. Dawid. Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Annals of Statistics, 32:1367-1433, 2004.
[Goo83] Nelson Goodman. Fact, fiction, and forecast. Harvard University Press, 1983.
1705.10528 | 44 |
Shalev-Shwartz, Shai, Shammah, Shaked, and Shashua, Amnon. Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving. arXiv, 2016. URL http://arxiv.org/abs/1610.03295.
Silver, David, Huang, Aja, Maddison, Chris J., Guez, Arthur, Sifre, Laurent, van den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, Dieleman, Sander, Grewe, Dominik, Nham, John, Kalchbrenner, Nal, Sutskever, Ilya, Lillicrap, Timothy, Leach, Madeleine, Kavukcuoglu, Koray, Graepel, Thore, and Hassabis, Demis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016. doi: 10.1038/nature16961. URL http://dx.doi.org/10.1038/nature16961.
1705.10720 | 44 |
Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359â378, 2007.
Stevan Harnad. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42:335-346, 1990.
[Omo08] S. Omohundro. Basic AI drives. In Proceedings of the First AGI Conference, volume 171, 2008.
Anders Sandberg and Nick Bostrom. Whole brain emulation: A roadmap. Future of Humanity Institute, Technical Report #2008-3, 2008.
16. See the corresponding author's post at http://lesswrong.com/r/discussion/lw/lyh/utility_vs_probability_idea_synthesis/
[Yam12] Roman V. Yampolskiy. Leakproofing the singularity: artificial intelligence confinement problem. Journal of Consciousness Studies, 19:194-214, 2012.
1705.10528 | 45 |
Sutton, Richard S and Barto, Andrew G. Introduction to Reinforcement Learning. Learning, 4(1996):1-5, 1998. ISSN 10743529. doi: 10.1.1.32.7692. URL http://dl.acm.org/citation.cfm?id=551283.
Uchibe, Eiji and Doya, Kenji. Constrained reinforcement learning from intrinsic and extrinsic rewards. 2007 IEEE 6th International Conference on Development and Learning, ICDL, (February):163-168, 2007. doi: 10.1109/DEVLRN.2007.4354030.
# 10. Appendix
# 10.1. Proof of Policy Performance Bound
10.1.1. PRELIMINARIES
Our analysis will make extensive use of the discounted future state distribution, d^\pi, which is defined as
d^\pi(s) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi).
It allows us to express the expected discounted total reward compactly as
J(\pi) = \frac{1}{1-\gamma} \, \mathbb{E}_{s \sim d^\pi, a \sim \pi, s' \sim P}\left[ R(s,a,s') \right], \quad (17)
1705.10528 | 46 |
where by a \sim \pi, we mean a \sim \pi(\cdot|s), and by s' \sim P, we mean s' \sim P(\cdot|s,a). We drop the explicit notation for the sake of reducing clutter, but it should be clear from context that a and s' depend on s.
First, we examine some useful properties of d^\pi that become apparent in vector form for finite state spaces. Let p^t \in \mathbb{R}^{|S|} denote the vector with components p^t(s) = P(s_t = s \mid \pi), and let P_\pi \in \mathbb{R}^{|S| \times |S|} denote the transition matrix with components P_\pi(s'|s) = \int da \, P(s'|s,a)\pi(a|s); then p^t = P_\pi p^{t-1} = P_\pi^t \mu and
d^\pi = (1-\gamma) \sum_{t=0}^{\infty} \left( \gamma P_\pi \right)^t \mu = (1-\gamma)\left( I - \gamma P_\pi \right)^{-1} \mu \quad (18)
This formulation helps us easily obtain the following lemma.

Lemma 1. For any function f : S \to \mathbb{R} and any policy \pi,
(1-\gamma) \, \mathbb{E}_{s \sim \mu}\left[ f(s) \right] + \mathbb{E}_{s \sim d^\pi, a \sim \pi, s' \sim P}\left[ \gamma f(s') \right] - \mathbb{E}_{s \sim d^\pi}\left[ f(s) \right] = 0. \quad (19)
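The identities (17)-(19) are easy to check numerically. The following is a minimal NumPy sketch (not from the paper; the MDP sizes, seed, and variable names are made up for illustration) that builds a small tabular MDP, computes d^\pi via (18), evaluates J(\pi) both through (17) and through the Bellman equation, and verifies (19) for an arbitrary f.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s']: transition kernel
R = rng.normal(size=(nS, nA, nS))               # R[s, a, s']: rewards
mu = np.full(nS, 1.0 / nS)                      # start-state distribution
pi = rng.dirichlet(np.ones(nA), size=nS)        # pi[s, a]: policy

P_pi = np.einsum('sa,sap->sp', pi, P)           # state-to-state kernel under pi
R_pi = np.einsum('sa,sap,sap->s', pi, P, R)     # expected one-step reward under pi

# Eq. (18): d^pi = (1 - gamma) (I - gamma P_pi^T)^{-1} mu  (distributions as vectors)
d_pi = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P_pi.T, mu)

# J(pi) via eq. (17) versus the Bellman equation
J_pi = d_pi @ R_pi / (1 - gamma)
V_pi = np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)
assert np.isclose(J_pi, mu @ V_pi)

# Identity (19) for an arbitrary f
f = rng.normal(size=nS)
assert np.isclose((1 - gamma) * (mu @ f) + d_pi @ (gamma * P_pi @ f) - d_pi @ f, 0.0)
```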
1705.10528 | 47 |
Proof. Multiply both sides of (18) by (I - \gamma P_\pi) and take the inner product with the vector f \in \mathbb{R}^{|S|}.
Combining this with (17), we obtain the following, for any function f and any policy \pi:
J(\pi) = \mathbb{E}_{s \sim \mu}\left[ f(s) \right] + \frac{1}{1-\gamma} \, \mathbb{E}_{s \sim d^\pi, a \sim \pi, s' \sim P}\left[ R(s,a,s') + \gamma f(s') - f(s) \right] \quad (20)
This identity is nice for two reasons. First: if we pick f to be an approximator of the value function V^\pi, then (20) relates the true discounted return of the policy (J(\pi)) to the estimate of the policy return (\mathbb{E}_{s \sim \mu}[f(s)]) and to the on-policy average TD-error of the approximator; this is aesthetically satisfying. Second: it shows that reward-shaping by \gamma f(s') - f(s) has the effect of translating the total discounted return by \mathbb{E}_{s \sim \mu}[f(s)], a fixed constant independent of policy; this illustrates the finding of Ng et al. (1999) that reward shaping by \gamma f(s') - f(s) does not change the optimal policy.
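As a quick illustration of the reward-shaping observation, the snippet below (continuing the toy-MDP sketch above, with the same hypothetical variables) checks that shaping the reward by \gamma f(s') - f(s) shifts the discounted return by exactly \mathbb{E}_{s \sim \mu}[f(s)], for this particular policy.

```python
# Shaping the reward by gamma * f(s') - f(s) shifts the discounted return by the
# policy-independent constant E_{s~mu}[f(s)] (cf. eq. (20)); continuing the sketch above.
R_shaped = R + gamma * f[None, None, :] - f[:, None, None]
R_pi_shaped = np.einsum('sa,sap,sap->s', pi, P, R_shaped)
J_shaped = d_pi @ R_pi_shaped / (1 - gamma)
assert np.isclose(J_shaped, J_pi - mu @ f)
```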
1705.10528 | 48 |
It is also helpful to introduce an identity for the vector difference of the discounted future state visitation distributions on two different policies, \pi' and \pi. Define the matrices G \doteq (I - \gamma P_\pi)^{-1}, \bar{G} \doteq (I - \gamma P_{\pi'})^{-1}, and \Delta = P_{\pi'} - P_\pi. Then:
G^{-1} - \bar{G}^{-1} = (I - \gamma P_\pi) - (I - \gamma P_{\pi'}) = \gamma\Delta;
left-multiplying by \bar{G} and right-multiplying by G, we obtain

\bar{G} - G = \gamma \bar{G}\Delta G.
Thus
d^{\pi'} - d^\pi = (1-\gamma)\left( \bar{G} - G \right)\mu = \gamma(1-\gamma)\bar{G}\Delta G\mu = \gamma\bar{G}\Delta d^\pi. \quad (21)
For simplicity in what follows, we will only consider MDPs with ï¬nite state and action spaces, although our attention is on MDPs that are too large for tabular methods.
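Identity (21) can likewise be sanity-checked on the small tabular MDP from the earlier sketch. The second policy pi2 below is made up, and distributions are handled as column vectors, so the transition operators are the transposes of the row-stochastic matrices used before.

```python
# Checking identity (21) on the toy MDP: d^{pi'} - d^pi = gamma * Gbar * Delta * d^pi.
pi2 = rng.dirichlet(np.ones(nA), size=nS)       # a second, made-up policy pi'
P_pi2 = np.einsum('sa,sap->sp', pi2, P)
d_pi2 = (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * P_pi2.T, mu)

Gbar = np.linalg.inv(np.eye(nS) - gamma * P_pi2.T)
Delta = P_pi2.T - P_pi.T
assert np.allclose(d_pi2 - d_pi, gamma * Gbar @ Delta @ d_pi)
```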
10.1.2. MAIN RESULTS
In this section, we will derive and present the new policy improvement bound. We will begin with a lemma:

Lemma 2. For any function f : S \to \mathbb{R} and any policies \pi' and \pi, define
L_{\pi,f}(\pi') \doteq \mathbb{E}_{s \sim d^\pi, a \sim \pi, s' \sim P}\left[ \left( \frac{\pi'(a|s)}{\pi(a|s)} - 1 \right)\left( R(s,a,s') + \gamma f(s') - f(s) \right) \right], \quad (22)
1705.10528 | 49 |
and \epsilon_f^{\pi'} \doteq \max_s \left| \mathbb{E}_{a \sim \pi', s' \sim P}\left[ R(s,a,s') + \gamma f(s') - f(s) \right] \right|. Then the following bounds hold:
J(\pi') - J(\pi) \geq \frac{1}{1-\gamma}\left( L_{\pi,f}(\pi') - 2\epsilon_f^{\pi'} D_{TV}(d^{\pi'} \| d^\pi) \right), \quad (23)

J(\pi') - J(\pi) \leq \frac{1}{1-\gamma}\left( L_{\pi,f}(\pi') + 2\epsilon_f^{\pi'} D_{TV}(d^{\pi'} \| d^\pi) \right), \quad (24)
where D_{TV} is the total variational divergence. Furthermore, the bounds are tight (when \pi' = \pi, the LHS and RHS are identically zero).
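The bounds (23)-(24) can be checked numerically. The sketch below continues the toy-MDP example from the preliminaries (reusing its made-up variables, including the arbitrary f and the second policy pi2); it is an illustration rather than anything from the paper.

```python
# Checking the bounds (23)-(24) on the toy MDP from the earlier sketches, with the
# arbitrary f standing in for an approximate value function.
delta_f = R + gamma * f[None, None, :] - f[:, None, None]       # delta_f(s, a, s')
delta_f_sa = np.einsum('sap,sap->sa', P, delta_f)               # E_{s'~P}[delta_f | s, a]

L_pi_f = d_pi @ np.einsum('sa,sa->s', pi2 - pi, delta_f_sa)     # surrogate (22)
eps_f = np.max(np.abs(np.einsum('sa,sa->s', pi2, delta_f_sa)))  # eps_f^{pi'}
D_tv_state = 0.5 * np.sum(np.abs(d_pi2 - d_pi))                 # D_TV(d^{pi'} || d^pi)

R_pi2 = np.einsum('sa,sap,sap->s', pi2, P, R)
J_gap = d_pi2 @ R_pi2 / (1 - gamma) - J_pi                      # J(pi') - J(pi)

lower = (L_pi_f - 2 * eps_f * D_tv_state) / (1 - gamma)
upper = (L_pi_f + 2 * eps_f * D_tv_state) / (1 - gamma)
assert lower - 1e-9 <= J_gap <= upper + 1e-9
```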
1705.10528 | 50 |
Proof. First, for notational convenience, let \delta_f(s,a,s') \doteq R(s,a,s') + \gamma f(s') - f(s). (The choice of \delta to denote this quantity is intentionally suggestive: it bears a strong resemblance to a TD-error.) By (20), we obtain the identity
J(\pi') - J(\pi) = \frac{1}{1-\gamma}\left( \mathbb{E}_{s \sim d^{\pi'}, a \sim \pi', s' \sim P}\left[ \delta_f(s,a,s') \right] - \mathbb{E}_{s \sim d^{\pi}, a \sim \pi, s' \sim P}\left[ \delta_f(s,a,s') \right] \right).

Now, we restrict our attention to the first term in this equation. Let \bar{\delta}_f^{\pi'} \in \mathbb{R}^{|S|} denote the vector with components \bar{\delta}_f^{\pi'}(s) = \mathbb{E}_{a \sim \pi', s' \sim P}\left[ \delta_f(s,a,s') \mid s \right]. Observe that

\mathbb{E}_{s \sim d^{\pi'}, a \sim \pi', s' \sim P}\left[ \delta_f(s,a,s') \right] = \left\langle d^{\pi'}, \bar{\delta}_f^{\pi'} \right\rangle = \left\langle d^{\pi}, \bar{\delta}_f^{\pi'} \right\rangle + \left\langle d^{\pi'} - d^{\pi}, \bar{\delta}_f^{\pi'} \right\rangle.
This term is then straightforwardly bounded by applying Hölder's inequality; for any p, q \in [1, \infty] such that 1/p + 1/q = 1, we have
1705.10528 | 51 |
\left\langle d^{\pi}, \bar{\delta}_f^{\pi'} \right\rangle - \left\| d^{\pi'} - d^{\pi} \right\|_p \left\| \bar{\delta}_f^{\pi'} \right\|_q \;\leq\; \mathbb{E}_{s \sim d^{\pi'}}\left[ \bar{\delta}_f^{\pi'}(s) \right] \;\leq\; \left\langle d^{\pi}, \bar{\delta}_f^{\pi'} \right\rangle + \left\| d^{\pi'} - d^{\pi} \right\|_p \left\| \bar{\delta}_f^{\pi'} \right\|_q.
The lower bound leads to (23), and the upper bound leads to (24).
We choose p = 1 and q = \infty; however, we believe that this step is very interesting, and different choices for dealing with the inner product \left\langle d^{\pi'} - d^{\pi}, \bar{\delta}_f^{\pi'} \right\rangle may lead to other useful bounds.
With \left\| d^{\pi'} - d^{\pi} \right\|_1 = 2 D_{TV}(d^{\pi'} \| d^{\pi}) and \left\| \bar{\delta}_f^{\pi'} \right\|_\infty = \epsilon_f^{\pi'}, the bounds are almost obtained. The last step is to observe that, by the importance sampling identity,

\left\langle d^{\pi}, \bar{\delta}_f^{\pi'} \right\rangle = \mathbb{E}_{s \sim d^{\pi}, a \sim \pi', s' \sim P}\left[ \delta_f(s,a,s') \right] = \mathbb{E}_{s \sim d^{\pi}, a \sim \pi, s' \sim P}\left[ \frac{\pi'(a|s)}{\pi(a|s)} \, \delta_f(s,a,s') \right].
After grouping terms, the bounds are obtained.
1705.10528 | 52 |
This lemma makes use of many ideas that have been explored before; for the special case of f = V^\pi, this strategy (after bounding D_{TV}(d^{\pi'} \| d^\pi)) leads directly to some of the policy improvement bounds previously obtained by Pirotta et al. and Schulman et al. The form given here is slightly more general, however, because it allows for freedom in choosing f.
1705.10528 | 53 |
Remark. It is reasonable to ask if there is a choice of f which maximizes the lower bound here. This turns out to trivially be f = V^{\pi'}. Observe that \mathbb{E}_{s' \sim P}\left[ \delta_{V^{\pi'}}(s,a,s') \mid s,a \right] = A^{\pi'}(s,a). For all states, \mathbb{E}_{a \sim \pi'}\left[ A^{\pi'}(s,a) \right] = 0 (by the definition of A^{\pi'}), thus \bar{\delta}_{V^{\pi'}}^{\pi'} = 0 and \epsilon_{V^{\pi'}}^{\pi'} = 0. Also, L_{\pi, V^{\pi'}}(\pi') = -\mathbb{E}_{s \sim d^\pi, a \sim \pi}\left[ A^{\pi'}(s,a) \right]; from (20) with f = V^{\pi'}, we can see that this exactly equals J(\pi') - J(\pi). Thus, for f = V^{\pi'}, we recover an exact equality. While this is not practically useful to us (because, when we want to optimize a lower bound with respect to \pi', it is too expensive to evaluate V^{\pi'} for each candidate to be practical), it provides insight: the penalty coefficient on the divergence captures information about the mismatch between f and V^{\pi'}.
Next, we are interested in bounding the divergence term, \left\| d^{\pi'} - d^{\pi} \right\|_1. We give the following lemma; to the best of our knowledge, this is a new result.
1705.10528 | 54 |
Lemma 3. The divergence between discounted future state visitation distributions, \left\| d^{\pi'} - d^{\pi} \right\|_1, is bounded by an average divergence of the policies \pi' and \pi:
\left\| d^{\pi'} - d^{\pi} \right\|_1 \leq \frac{2\gamma}{1-\gamma} \, \mathbb{E}_{s \sim d^{\pi}}\left[ D_{TV}(\pi' \| \pi)[s] \right], \quad (25)

where D_{TV}(\pi' \| \pi)[s] = (1/2) \sum_a \left| \pi'(a|s) - \pi(a|s) \right|.

Proof. First, using (21), we obtain

\left\| d^{\pi'} - d^{\pi} \right\|_1 = \gamma \left\| \bar{G} \Delta d^{\pi} \right\|_1 \leq \gamma \left\| \bar{G} \right\|_1 \left\| \Delta d^{\pi} \right\|_1.

\left\| \bar{G} \right\|_1 is bounded by:

\left\| \bar{G} \right\|_1 = \left\| (I - \gamma P_{\pi'})^{-1} \right\|_1 \leq \sum_{t=0}^{\infty} \gamma^t \left\| P_{\pi'} \right\|_1^t = (1-\gamma)^{-1}.

To conclude the lemma, we bound \left\| \Delta d^{\pi} \right\|_1:

\left\| \Delta d^{\pi} \right\|_1 = \sum_{s'} \left| \sum_s \Delta(s'|s) \, d^{\pi}(s) \right|
\leq \sum_{s,s'} \left| \Delta(s'|s) \right| d^{\pi}(s)
= \sum_{s,s'} \left| \sum_a P(s'|s,a)\left( \pi'(a|s) - \pi(a|s) \right) \right| d^{\pi}(s)
\leq \sum_{s,a,s'} P(s'|s,a) \left| \pi'(a|s) - \pi(a|s) \right| d^{\pi}(s)
= \sum_{s,a} \left| \pi'(a|s) - \pi(a|s) \right| d^{\pi}(s)
= 2 \, \mathbb{E}_{s \sim d^{\pi}}\left[ D_{TV}(\pi' \| \pi)[s] \right].
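Continuing the same toy-MDP sketch (an illustration with made-up variables), Lemma 3 can be verified directly:

```python
# Checking Lemma 3, eq. (25), on the same toy MDP.
tv_actions = 0.5 * np.sum(np.abs(pi2 - pi), axis=1)             # D_TV(pi'||pi)[s]
assert np.sum(np.abs(d_pi2 - d_pi)) <= (2 * gamma / (1 - gamma)) * (d_pi @ tv_actions) + 1e-9
```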
1705.10528 | 55 |
The new policy improvement bound follows immediately.

Theorem 1. For any function f : S \to \mathbb{R} and any policies \pi' and \pi, define

\delta_f(s,a,s') \doteq R(s,a,s') + \gamma f(s') - f(s),

\epsilon_f^{\pi'} \doteq \max_s \left| \mathbb{E}_{a \sim \pi', s' \sim P}\left[ \delta_f(s,a,s') \right] \right|,

L_{\pi,f}(\pi') \doteq \mathbb{E}_{s \sim d^\pi, a \sim \pi, s' \sim P}\left[ \left( \frac{\pi'(a|s)}{\pi(a|s)} - 1 \right) \delta_f(s,a,s') \right], \quad \text{and}

D_{\pi,f}^{\pm}(\pi') \doteq \frac{L_{\pi,f}(\pi')}{1-\gamma} \pm \frac{2\gamma \, \epsilon_f^{\pi'}}{(1-\gamma)^2} \, \mathbb{E}_{s \sim d^\pi}\left[ D_{TV}(\pi' \| \pi)[s] \right],

where D_{TV}(\pi' \| \pi)[s] = (1/2) \sum_a \left| \pi'(a|s) - \pi(a|s) \right| is the total variational divergence between action distributions at s. The following bounds hold:

D_{\pi,f}^{-}(\pi') \leq J(\pi') - J(\pi) \leq D_{\pi,f}^{+}(\pi'). \quad (4)

Furthermore, the bounds are tight (when \pi' = \pi, all three expressions are identically zero).
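Theorem 1 can be checked on the toy MDP as well, reusing the quantities computed in the earlier sketches (again, an illustration, not material from the paper):

```python
# Theorem 1 on the toy MDP: D^- <= J(pi') - J(pi) <= D^+.
penalty = (2 * gamma * eps_f / (1 - gamma) ** 2) * (d_pi @ tv_actions)
D_minus = L_pi_f / (1 - gamma) - penalty
D_plus = L_pi_f / (1 - gamma) + penalty
assert D_minus - 1e-9 <= J_gap <= D_plus + 1e-9
```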
1705.10528 | 56 |
Proof. Begin with the bounds from Lemma 2 and bound the divergence D_{TV}(d^{\pi'} \| d^{\pi}) by Lemma 3.
# 10.2. Proof of Analytical Solution to LQCLP
Theorem 2 (Optimizing Linear Objective with Linear and Quadratic Constraints). Consider the problem
p^* = \min_x \; g^T x \quad \text{s.t.} \quad b^T x + c \leq 0, \quad x^T H x \leq \delta, \quad (26)

where g, b, x \in \mathbb{R}^n, \; c, \delta \in \mathbb{R}, \; \delta > 0, \; H \in \mathbb{S}^n, and H \succ 0. When there is at least one strictly feasible point, the optimal point x^* satisfies

x^* = -\frac{1}{\lambda^*} H^{-1}\left( g + \nu^* b \right),

where \lambda^* and \nu^* are defined by

\nu^* = \left( \frac{\lambda^* c - r}{s} \right)_+,

\lambda^* = \arg\max_{\lambda \geq 0} f(\lambda) \doteq \begin{cases} f_a(\lambda) \doteq \dfrac{1}{2\lambda}\left( \dfrac{r^2}{s} - q \right) + \dfrac{\lambda}{2}\left( \dfrac{c^2}{s} - \delta \right) - \dfrac{rc}{s} & \text{if } \lambda c - r > 0 \\ f_b(\lambda) \doteq -\dfrac{1}{2}\left( \dfrac{q}{\lambda} + \lambda \delta \right) & \text{otherwise,} \end{cases}

with q = g^T H^{-1} g, \; r = g^T H^{-1} b, and s = b^T H^{-1} b.
1705.10528 | 57 |
Furthermore, let \Lambda_a \doteq \{\lambda \mid \lambda c - r > 0, \; \lambda \geq 0\} and \Lambda_b \doteq \{\lambda \mid \lambda c - r \leq 0, \; \lambda \geq 0\}. The value of \lambda^* satisfies

\lambda^* \in \{\lambda_a^*, \lambda_b^*\}, \quad \text{where} \quad \lambda_a^* = \text{Proj}\left( \sqrt{\frac{q - r^2/s}{\delta - c^2/s}}, \; \Lambda_a \right), \quad \lambda_b^* = \text{Proj}\left( \sqrt{\frac{q}{\delta}}, \; \Lambda_b \right),

with \lambda^* set to whichever of \lambda_a^*, \lambda_b^* attains the larger value of f; the projection of a point x \in \mathbb{R} onto a convex segment of \mathbb{R}, [a, b], has value \text{Proj}(x, [a, b]) = \max(a, \min(b, x)).
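Below is a self-contained NumPy sketch of this recipe. It is an illustration under stated assumptions (strict feasibility, constraint plane intersecting the trust region), not the authors' released implementation; the helper names (solve_lqclp, best_on) are made up, and degenerate edge cases are only crudely guarded.

```python
import numpy as np

def solve_lqclp(g, b, c, delta, H):
    """Sketch of the analytic solution of  min_x g^T x  s.t.  b^T x + c <= 0,
    x^T H x <= delta,  with H positive definite and delta > 0, following the
    recipe above. Assumes a strictly feasible problem whose constraint plane
    intersects the trust region; degenerate edge cases are only crudely guarded."""
    Hinv_g = np.linalg.solve(H, g)
    Hinv_b = np.linalg.solve(H, b)
    q = float(g @ Hinv_g)   # q = g^T H^-1 g
    r = float(g @ Hinv_b)   # r = g^T H^-1 b
    s = float(b @ Hinv_b)   # s = b^T H^-1 b

    def best_on(A, B, const, lo, hi):
        # max over lam in [lo, hi] of -(A/lam + B*lam)/2 + const; empty segment -> -inf
        if hi < lo:
            return -np.inf, None
        lam = max(float(np.clip(np.sqrt(A / B), lo, hi)), 1e-12)
        return -(A / lam + B * lam) / 2 + const, lam

    # Dual segments: Lambda_a = {lam >= 0 : lam*c - r > 0}, Lambda_b = its complement.
    if c > 0:
        thr = max(0.0, r / c)
        a_int, b_int = (thr, np.inf), (0.0, thr)
    elif c < 0:
        thr = max(0.0, r / c)
        a_int, b_int = (0.0, thr), (thr, np.inf)
    else:
        a_int, b_int = ((0.0, np.inf), (np.inf, 0.0)) if r < 0 else ((np.inf, 0.0), (0.0, np.inf))

    # f_a(lam) = -((q - r^2/s)/lam + (delta - c^2/s)*lam)/2 - r*c/s,  f_b(lam) = -(q/lam + delta*lam)/2
    fa, lam_a = best_on(q - r ** 2 / s, delta - c ** 2 / s, -r * c / s, *a_int)
    fb, lam_b = best_on(q, delta, 0.0, *b_int)
    lam = lam_a if (lam_a is not None and fa >= fb) else lam_b
    nu = max(0.0, (lam * c - r) / s)
    x = -(Hinv_g + nu * Hinv_b) / lam
    return x, lam, nu
```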
Proof. This is a convex optimization problem. When there is at least one strictly feasible point, strong duality holds by Slater's theorem. We exploit strong duality to solve the problem analytically.
1705.10528 | 58 |
p^* = \min_x \max_{\lambda \geq 0, \, \nu \geq 0} \; g^T x + \frac{\lambda}{2}\left( x^T H x - \delta \right) + \nu\left( b^T x + c \right)

= \max_{\lambda \geq 0, \, \nu \geq 0} \min_x \; \frac{\lambda}{2} x^T H x + \left( g + \nu b \right)^T x + \left( \nu c - \frac{\lambda \delta}{2} \right) \qquad \text{(strong duality)}

\Longrightarrow \; x^* = -\frac{1}{\lambda} H^{-1}\left( g + \nu b \right) \qquad \left( \nabla_x \mathcal{L}(x, \lambda, \nu) = 0 \right)

= \max_{\lambda \geq 0, \, \nu \geq 0} \; -\frac{1}{2\lambda}\left( g + \nu b \right)^T H^{-1}\left( g + \nu b \right) + \left( \nu c - \frac{\lambda \delta}{2} \right) \qquad \text{(plug in } x^*\text{)}

= \max_{\lambda \geq 0, \, \nu \geq 0} \; -\frac{1}{2\lambda}\left( q + 2\nu r + \nu^2 s \right) + \left( \nu c - \frac{\lambda \delta}{2} \right) \qquad \text{(notation: } q = g^T H^{-1} g, \; r = g^T H^{-1} b, \; s = b^T H^{-1} b\text{)}

\frac{\partial}{\partial \nu}: \; -\frac{1}{2\lambda}\left( 2r + 2\nu s \right) + c = 0 \;\Longrightarrow\; \nu = \left( \frac{\lambda c - r}{s} \right)_+ \qquad \text{(optimizing a single-variable convex quadratic over } \mathbb{R}_{\geq 0}\text{)}

= \max_{\lambda \geq 0} \begin{cases} \frac{1}{2\lambda}\left( \frac{r^2}{s} - q \right) + \frac{\lambda}{2}\left( \frac{c^2}{s} - \delta \right) - \frac{rc}{s} & \text{if } \lambda \in \Lambda_a \\ -\frac{1}{2}\left( \frac{q}{\lambda} + \lambda \delta \right) & \text{if } \lambda \in \Lambda_b \end{cases} \qquad \text{(notation: } \Lambda_a = \{\lambda \mid \lambda c - r > 0, \, \lambda \geq 0\}, \; \Lambda_b = \{\lambda \mid \lambda c - r \leq 0, \, \lambda \geq 0\}\text{)}
1705.10528 | 59 |
Observe that when c < 0, \Lambda_a = [0, r/c) and \Lambda_b = [r/c, \infty); when c > 0, \Lambda_a = [r/c, \infty) and \Lambda_b = [0, r/c).
Notes on interpreting the coefficients in the dual problem:

• We are guaranteed to have r^2/s - q \leq 0 by the Cauchy-Schwarz inequality. Recall that q = g^T H^{-1} g, r = g^T H^{-1} b, s = b^T H^{-1} b. The Cauchy-Schwarz inequality gives:

\left\| H^{-1/2} b \right\|^2 \left\| H^{-1/2} g \right\|^2 \geq \left( \left( H^{-1/2} b \right)^T \left( H^{-1/2} g \right) \right)^2 \;\Longrightarrow\; \left( b^T H^{-1} b \right)\left( g^T H^{-1} g \right) \geq \left( b^T H^{-1} g \right)^2 \;\Longrightarrow\; qs \geq r^2.

• The coefficient c^2/s - \delta relates to whether or not the plane of the linear constraint intersects the quadratic trust region. An intersection occurs if there exists an x such that c + b^T x = 0 with x^T H x \leq \delta. To check whether this is the case, we solve

x^* = \arg\min_x \; x^T H x \quad \text{s.t.} \quad c + b^T x = 0 \quad (27)
1705.10528 | 60 | x* = arg min_x  x^T H x   :   c + b^T x = 0     (27)
and see if x*^T H x* ≤ δ. The solution to this optimization problem is x* = cH^{−1}b/s, thus x*^T H x* = c²/s. If c²/s − δ ≤ 0, then the plane intersects the trust region; otherwise, it does not.
If c²/s − δ > 0 and c < 0, then the quadratic trust region lies entirely within the linear constraint-satisfying halfspace, and we can remove the linear constraint without changing the optimization problem. If c²/s − δ > 0 and c > 0, the problem is infeasible (the intersection of the quadratic trust region and linear constraint-satisfying halfspace is empty). Otherwise, we follow the procedure below.
Solving the dual for λ: for any A > 0, B > 0, the problem
max_{λ>0}  f(λ) = −(1/2) (A/λ + Bλ)
has optimal point λ* = √(A/B) and optimal value f(λ*) = −√(AB).
We can use this solution form to obtain the optimal point on each segment of the piecewise continuous dual function for λ:
# objective
objective | 1705.10528#60 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 61 | AB.
We can use this solution form to obtain the optimal point on each segment of the piecewise continuous dual function for λ:
# objective                                               optimal point (before projection)     optimal point (after projection)
λ ∈ Λ_a:  f_a(λ) = (1/(2λ))(r²/s − q) + (λ/2)(c²/s − δ)     λ_a = √((q − r²/s)/(δ − c²/s))         λ_a* = Proj(λ_a, Λ_a)
λ ∈ Λ_b:  f_b(λ) = −(1/2)(q/λ + λδ)                         λ_b = √(q/δ)                           λ_b* = Proj(λ_b, Λ_b)

The optimization is completed by comparing f_a(λ_a*) and f_b(λ_b*):

λ* = λ_a*  if f_a(λ_a*) ≥ f_b(λ_b*),   and   λ* = λ_b*  otherwise.
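To make the procedure concrete, here is a minimal numerical sketch of the analytical dual solution above (our own illustration, not the authors' released code). It assumes the scalars q = g^T H^{-1}g, r = g^T H^{-1}b, and s = b^T H^{-1}b have already been computed, e.g., with conjugate gradient:

```python
import numpy as np

def proj(x, lo, hi):
    """Project a scalar onto the interval [lo, hi]."""
    return max(lo, min(x, hi))

def solve_dual(q, r, s, c, delta, eps=1e-8):
    """Return (lambda*, nu*) for the single-constraint dual described above."""
    if c**2 / s - delta > 0 and c > 0:
        raise ValueError("infeasible subproblem: a recovery step is needed")
    if c**2 / s - delta > 0 and c < 0:
        # Trust region lies entirely inside the constraint-satisfying halfspace:
        # drop the linear constraint and take the unconstrained (TRPO-style) step.
        return np.sqrt(q / delta), 0.0

    # Piecewise dual objectives on Lambda_a and Lambda_b.
    f_a = lambda lam: -0.5 * ((q - r**2 / s) / (lam + eps) + lam * (delta - c**2 / s))
    f_b = lambda lam: -0.5 * (q / (lam + eps) + lam * delta)

    lam_mid = max(r / c, 0.0) if abs(c) > eps else np.inf   # boundary between Lambda_a and Lambda_b
    lam_a = np.sqrt((q - r**2 / s) / max(delta - c**2 / s, eps))   # unconstrained argmax of f_a
    lam_b = np.sqrt(q / delta)                                     # unconstrained argmax of f_b

    if c < 0:   # Lambda_a = [0, r/c), Lambda_b = [r/c, inf)
        lam_a_star, lam_b_star = proj(lam_a, 0.0, lam_mid), proj(lam_b, lam_mid, np.inf)
    else:       # Lambda_a = [r/c, inf), Lambda_b = [0, r/c)
        lam_a_star, lam_b_star = proj(lam_a, lam_mid, np.inf), proj(lam_b, 0.0, lam_mid)

    lam_star = lam_a_star if f_a(lam_a_star) >= f_b(lam_b_star) else lam_b_star
    nu_star = max((lam_star * c - r) / s, 0.0)
    return lam_star, nu_star
```

The policy step is then assembled from H^{-1}(g − ν* b) scaled by 1/λ*, followed by the backtracking line search.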
# 10.3. Experimental Parameters
10.3.1. ENVIRONMENTS
In the Circle environments, the reward and cost functions are
R(s) = v^T [−y, x] / (1 + | ‖[x, y]‖₂ − d |),        C(s) = 1[ |x| > x_lim ],
where x, y are the coordinates in the plane, v is the velocity, and d, xlim are environmental parameters. We set these parameters to be
             d     x_lim
Point-mass   10    3
Ant          15    2.5
Humanoid     10    2.5 | 1705.10528#61 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 62 |              d     x_lim
Point-mass   10    3
Ant          15    2.5
Humanoid     10    2.5
In Point-Gather, the agent receives a reward of +10 for collecting an apple, and a cost of 1 for collecting a bomb. Two apples and eight bombs spawn on the map at the start of each episode. In Ant-Gather, the reward and cost structure was the same, except that the agent also receives a reward of −10 for falling over (which results in the episode ending). Eight apples and eight bombs spawn on the map at the start of each episode.
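As a concrete reading of these reward and cost definitions, here is a minimal sketch (our own illustration, not the released environment code); it assumes the state exposes the planar position (x, y) and velocity (vx, vy):

```python
import numpy as np

def circle_reward_cost(x, y, vx, vy, d, x_lim):
    """Circle task: reward circulation along the circle of radius d, cost for leaving |x| <= x_lim."""
    reward = (vx * (-y) + vy * x) / (1.0 + abs(np.hypot(x, y) - d))   # R(s) = v^T[-y, x] / (1 + | ||[x,y]||_2 - d |)
    cost = float(abs(x) > x_lim)                                       # C(s) = 1[ |x| > x_lim ]
    return reward, cost

def gather_reward_cost(apples_collected, bombs_collected, fell_over=False):
    """Gather tasks: +10 per apple, cost 1 per bomb; Ant-Gather additionally gives -10 for falling over."""
    reward = 10.0 * apples_collected - (10.0 if fell_over else 0.0)
    cost = float(bombs_collected)
    return reward, cost
```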
Constrained Policy Optimization | 1705.10528#62 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 63 | Figure 5. In the Circle task, reward is maximized by moving along the green circle. The agent is not allowed to enter the blue regions, so its optimal constrained path follows the line segments AD and BC.
# 10.3.2. ALGORITHM PARAMETERS
In all experiments, we use Gaussian policies with mean vectors given as the outputs of neural networks, and with variances that are separate learnable parameters. The policy networks for all experiments have two hidden layers of sizes (64, 32) with tanh activation functions.
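A minimal sketch of such a policy is below; PyTorch is an assumption here (the paper does not prescribe a framework), but the architecture follows the description above:

```python
import torch
import torch.nn as nn

class GaussianMLPPolicy(nn.Module):
    """Diagonal Gaussian policy: mean from a (64, 32) tanh MLP, log-std as separate learnable parameters."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 32), nn.Tanh(),
            nn.Linear(32, act_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # variance is not a function of the observation

    def distribution(self, obs):
        mean = self.mean_net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())

    def forward(self, obs):
        dist = self.distribution(obs)
        action = dist.sample()
        return action, dist.log_prob(action).sum(-1)
```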
We use GAE-λ (Schulman et al., 2016) to estimate the advantages and constraint advantages, with neural network value functions. The value functions have the same architecture and activation functions as the policy networks. We found that having different λGAE values for the regular advantages and the constraint advantages worked best. We denote the λGAE used for the constraint advantages as λGAE_C.
# C | 1705.10528#63 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 64 | # C
For the failure prediction networks P_φ(s → U), we use neural networks with a single hidden layer of size (32), with output of one sigmoid unit. At each iteration, the failure prediction network is updated by some number of gradient descent steps using the Adam update rule to minimize the prediction error. To reiterate, the failure prediction network is a model for the probability that the agent will, at some point in the next T time steps, enter an unsafe state. The cost bonus was weighted by a coefficient α, which was 1 in all experiments except for Ant-Gather, where it was 0.01. Because of the short time horizon, no cost bonus was used for Point-Gather.
For all experiments, we used a discount factor of γ = 0.995, a GAE-λ for estimating the regular advantages of λGAE = 0.95, and a KL-divergence step size of δKL = 0.01.
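For reference, a minimal sketch of the GAE-λ recursion used for both advantage estimates (the function name and array conventions are our own):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.995, lam=0.95):
    """GAE-lambda over one trajectory; `values` holds V(s_0..s_T) including a bootstrap for the final state."""
    advantages = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

# Reward advantages use lambda = 0.95; constraint (cost) advantages use the separate lambda_C,
# e.g. 0.5 for the Ant environments, per the experiment-specific parameter table.
# adv   = gae_advantages(rewards, value_preds,      gamma=0.995, lam=0.95)
# c_adv = gae_advantages(costs,   cost_value_preds, gamma=0.995, lam=0.5)
```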
Experiment-speciï¬c parameters are as follows: | 1705.10528#64 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 65 | Experiment-specific parameters are as follows:
Parameter                              Point-Circle   Ant-Circle   Humanoid-Circle   Point-Gather   Ant-Gather
Batch size                             50,000         100,000      50,000            50,000         100,000
Rollout length                         50-65          500          1000              15             500
Maximum constraint value d             5              10           10                0.1            0.2
Failure prediction horizon T           5              20           20                (N/A)          20
Failure predictor SGD steps per itr    25             25           25                (N/A)          10
Predictor coeff α                      1              1            1                 (N/A)          0.01
λGAE_C                                 1              0.5          0.5               1              0.5
Note that these same parameters were used for all algorithms.
We found that the Point environment was agnostic to λGAE_C, but for the other environments it was necessary to set λGAE_C below 1: larger values overestimated the constraint gradient magnitude, which led the algorithm to take unsafe steps. The choice λGAE_C = 0.5 was obtained by a hyperparameter search in {0.5, 0.92, 1}, but 0.92 worked nearly as well.
# 10.3.3. PRIMAL-DUAL OPTIMIZATION IMPLEMENTATION
Our primal-dual implementation is intended to be as close as possible to our CPO implementation. The key difference is that the dual variables for the constraints are stateful, learnable parameters, unlike in CPO where they are solved from scratch at each update.
Constrained Policy Optimization | 1705.10528#65 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.10528 | 66 | Constrained Policy Optimization
The update equations for our PDO implementation are
θ_{k+1} = θ_k + s^j √( 2δ / ((g − ν_k b)^T H^{−1}(g − ν_k b)) ) H^{−1}(g − ν_k b),        ν_{k+1} = (ν_k + α (J_C(π_k) − d))_+ ,
where s^j is from the backtracking line search (s ∈ (0, 1) and j ∈ {0, 1, ..., J}, where J is the backtrack budget; this is the same line search as is used in CPO and TRPO), and α is a learning rate for the dual parameters. α is an important hyperparameter of the algorithm: if it is set to be too small, the dual variable won't update quickly enough to meaningfully enforce the constraint; if it is too high, the algorithm will overcorrect in response to constraint violations and behave too conservatively. We experimented with a relaxed learning rate, α = 0.001, and an aggressive learning rate, α = 0.01. The aggressive learning rate performed better in our experiments, so all of our reported results are for α = 0.01.
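A minimal sketch of one PDO iteration implementing the update above (our own paraphrase; in practice the H^{-1} products are computed with conjugate gradient rather than a dense inverse):

```python
import numpy as np

def pdo_update(theta, nu, g, b, H_inv, J_C, d, delta, alpha=0.01, s=0.8, j=0):
    """One PDO iteration: penalized natural-gradient step on theta, projected gradient ascent on nu."""
    direction = H_inv @ (g - nu * b)                              # H^{-1} (g - nu_k b)
    step = np.sqrt(2.0 * delta / (direction @ (g - nu * b)))      # sqrt( 2 delta / (g - nu b)^T H^{-1} (g - nu b) )
    theta_new = theta + (s ** j) * step * direction               # s^j comes from the backtracking line search
    nu_new = max(nu + alpha * (J_C - d), 0.0)                     # (nu_k + alpha (J_C(pi_k) - d))_+
    return theta_new, nu_new
```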
Selecting the correct learning rate can be challenging; the need to do this is obviated by CPO. | 1705.10528#66 | Constrained Policy Optimization | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety. | http://arxiv.org/pdf/1705.10528 | Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | cs.LG | Accepted to ICML 2017 | null | cs.LG | 20170530 | 20170530 | [] |
1705.08292 | 0 | arXiv:1705.08292v2 [stat.ML] 22 May 2018
# The Marginal Value of Adaptive Gradient Methods in Machine Learning
Ashia C. Wilson∗, Rebecca Roelofs∗, Mitchell Stern∗, Nathan Srebro†, and Benjamin Recht∗ {ashia,roelofs,mitchell}@berkeley.edu, [email protected], [email protected] ∗University of California, Berkeley †Toyota Technological Institute at Chicago
# Abstract | 1705.08292#0 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 1 | # Abstract
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often ï¬nd drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classiï¬cation problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state- of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often signiï¬cantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
# Introduction | 1705.08292#1 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 2 | # Introduction
An increasing share of deep learning researchers are training their models with adaptive gradient methods [3, 12] due to their rapid training time [6]. Adam [8] in particular has become the default algorithm used across many deep learning frameworks. However, the generalization and out-of- sample behavior of such adaptive gradient methods remains poorly understood. Given that many passes over the data are needed to minimize the training objective, typical regret guarantees do not necessarily ensure that the found solutions will generalize [17]. | 1705.08292#2 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 3 | Notably, when the number of parameters exceeds the number of data points, it is possible that the choice of algorithm can dramatically inï¬uence which model is learned [15]. Given two different minimizers of some optimization problem, what can we say about their relative ability to generalize? In this paper, we show that adaptive and non-adaptive optimization methods indeed ï¬nd very different solutions with very different generalization properties. We provide a simple generative model for binary classiï¬cation where the population is linearly separable (i.e., there exists a solution with large margin), but AdaGrad [3], RMSProp [21], and Adam converge to a solution that incorrectly classiï¬es new data with probability arbitrarily close to half. On this same example, SGD ï¬nds a solution with zero error on new data. Our construction suggests that adaptive methods tend to give undue inï¬uence to spurious features that have no effect on out-of-sample generalization. | 1705.08292#3 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 4 | We additionally present numerical experiments demonstrating that adaptive methods generalize worse than their non-adaptive counterparts. Our experiments reveal three primary ï¬ndings. First, with the same amount of hyperparameter tuning, SGD and SGD with momentum outperform adaptive methods on the development/test set across all evaluated models and tasks. This is true even when the adaptive methods achieve the same training loss or lower than non-adaptive methods. Second, adaptive methods often display faster initial progress on the training set, but their performance quickly
plateaus on the development/test set. Third, the same amount of tuning was required for all methods, including adaptive methods. This challenges the conventional wisdom that adaptive methods require less tuning. Moreover, as a useful guide to future practice, we propose a simple scheme for tuning learning rates and decays that performs well on all deep learning tasks we studied.
# 2 Background
The canonical optimization algorithms used to minimize risk are either stochastic gradient methods or stochastic momentum methods. Stochastic gradient methods can generally be written | 1705.08292#4 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 5 | # 2 Background
The canonical optimization algorithms used to minimize risk are either stochastic gradient methods or stochastic momentum methods. Stochastic gradient methods can generally be written
w_{k+1} = w_k − α_k ∇̃f(w_k),     (2.1)     where ∇̃f(w_k) := ∇f(w_k; x_{i_k}) is the gradient of some loss function f computed on a batch of data x_{i_k}. Stochastic momentum methods are a second family of techniques that have been used to accelerate training. These methods can generally be written as
w_{k+1} = w_k − α_k ∇̃f(w_k + γ_k(w_k − w_{k−1})) + β_k(w_k − w_{k−1}).     (2.2)     The sequence of iterates (2.2) includes Polyak's heavy-ball method (HB) with γ_k = 0, and Nesterov's Accelerated Gradient method (NAG) [19] with γ_k = β_k.
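As a concrete reading of (2.1) and (2.2), a minimal sketch (the gradient oracle `grad` is an assumed stand-in for a minibatch gradient):

```python
def sgd_step(w, grad, lr):
    """Plain stochastic gradient step, Eq. (2.1)."""
    return w - lr * grad(w)

def momentum_step(w, w_prev, grad, lr, beta, gamma):
    """Stochastic momentum step, Eq. (2.2):
    gamma = 0 gives Polyak's heavy-ball (HB); gamma = beta gives Nesterov's accelerated gradient (NAG)."""
    return w - lr * grad(w + gamma * (w - w_prev)) + beta * (w - w_prev)
```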
Notable exceptions to the general formulations (2.1) and (2.2) are adaptive gradient and adaptive momentum methods, which choose a local distance measure constructed using the entire sequence of iterates (w1, · · · , wk). These methods (including AdaGrad [3], RMSProp [21], and Adam [8]) can generally be written as | 1705.08292#5 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 6 | ∇̃f(w_k + γ_k(w_k − w_{k−1})) + β_k H_k^{−1} H_{k−1}(w_k − w_{k−1}),     (2.3)
where H_k := H(w_1, · · · , w_k) is a positive definite matrix. Though not necessary, the matrix H_k is usually defined as
H_k = diag( ( Σ_{i=1}^k η_i g_i ∘ g_i )^{1/2} ),     (2.4)
where "∘" denotes the entry-wise or Hadamard product, g_k = ∇̃f(w_k + γ_k(w_k − w_{k−1})), and η_k is some set of coefficients specified for each algorithm. That is, H_k is a diagonal matrix whose entries are the square roots of a linear combination of squares of past gradient components. We will use the fact that H_k are defined in this fashion in the sequel. For the specific settings of the parameters for many of the algorithms used in deep learning, see Table 1. Adaptive methods attempt to adjust an algorithm to the geometry of the data. In contrast, stochastic gradient descent and related variants use the ℓ2 geometry inherent to the parameter space, and are equivalent to setting H_k = I in the adaptive methods. | 1705.08292#6 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 7 |          G_k                                                      α_k                        β_k                               γ
SGD       I                                                        α                          0                                 0
HB        I                                                        α                          β                                 0
NAG       I                                                        α                          β                                 β
AdaGrad   G_{k−1} + D_k                                            α                          0                                 0
RMSProp   β_2 G_{k−1} + (1 − β_2) D_k                              α                          0                                 0
Adam      (β_2/(1 − β_2^k)) G_{k−1} + ((1 − β_2)/(1 − β_2^k)) D_k   α (1 − β_1)/(1 − β_1^k)    β_1 (1 − β_1^{k−1})/(1 − β_1^k)   0
Table 1: Parameter settings of algorithms used in deep learning. Here, D_k = diag(g_k ∘ g_k) and G_k := H_k ∘ H_k. We omit the additional ε added to the adaptive methods, which is only needed to ensure non-singularity of the matrices H_k.
In this context, generalization refers to the performance of a solution w on a broader population. Performance is often defined in terms of a different loss function than the function f used in training. For example, in classification tasks, we typically define generalization in terms of classification error rather than cross-entropy.
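To make Table 1 concrete, here is a minimal sketch of the generic update with the AdaGrad and RMSProp settings (for which β_k = γ = 0, so the momentum terms vanish); ε is the small constant mentioned in the caption:

```python
import numpy as np

def adaptive_step(w, grad, G, lr, rule="adagrad", beta2=0.999, eps=1e-8):
    """One step of the generic update (2.3) with diagonal H_k built as in (2.4) and Table 1.
    G holds the running diagonal of G_{k-1}; beta_k = gamma = 0 for AdaGrad and RMSProp."""
    g = grad(w)
    D = g * g                          # D_k = diag(g_k o g_k), kept as a vector
    if rule == "adagrad":
        G = G + D                      # G_k = G_{k-1} + D_k
    elif rule == "rmsprop":
        G = beta2 * G + (1.0 - beta2) * D
    else:
        raise ValueError("this sketch only covers the AdaGrad and RMSProp rows")
    H = np.sqrt(G) + eps               # H_k = G_k^{1/2} entry-wise, plus eps for invertibility
    return w - lr * g / H, G
```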
# 2.1 Related Work | 1705.08292#7 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 8 | Understanding how optimization relates to generalization is a very active area of current machine learning research. Most of the seminal work in this area has focused on understanding how early stopping can act as implicit regularization [22]. In a similar vein, Ma and Belkin [10] have shown that gradient methods may not be able to ï¬nd complex solutions at all in any reasonable amount of time. Hardt et al. [17] show that SGD is uniformly stable, and therefore solutions with low training error found quickly will generalize well. Similarly, using a stability argument, Raginsky et al. [16] have shown that Langevin dynamics can ï¬nd solutions than generalize better than ordinary SGD in non-convex settings. Neyshabur, Srebro, and Tomioka [15] discuss how algorithmic choices can act as implicit regularizer. In a similar vein, Neyshabur, Salakhutdinov, and Srebro [14] show that a different algorithm, one which performs descent using a metric that is invariant to re-scaling of the parameters, can lead to solutions which sometimes generalize better than SGD. Our work supports the work of [14] by drawing connections between the metric used to perform local optimization and the ability of the training algorithm to ï¬nd solutions that generalize. However, we focus primarily on the different generalization properties of adaptive and non-adaptive methods. | 1705.08292#8 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 9 | A similar line of inquiry has been pursued by Keskar et al. [7]. Hochreiter and Schmidhuber [4] showed that "sharp" minimizers generalize poorly, whereas "flat" minimizers generalize well. Keskar et al. empirically show that Adam converges to sharper minimizers when the batch size is increased. However, they observe that even with small batches, Adam does not find solutions whose performance matches state-of-the-art. In the current work, we aim to show that the choice of Adam as an optimizer itself strongly influences the set of minimizers that any batch size will ever see, and help explain why they were unable to find solutions that generalized particularly well.
# 3 The potential perils of adaptivity
The goal of this section is to illustrate the following observation: when a problem has multiple global minima, different algorithms can find entirely different solutions when initialized from the same point. In addition, we construct an example where adaptive gradient methods find a solution which has worse out-of-sample error than SGD.
To simplify the presentation, let us restrict our attention to the binary least-squares classification problem, where we can easily compute the closed form solution found by different methods. In least-squares classification, we aim to solve | 1705.08292#9 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 10 | minimize_w   R_S[w] := (1/2) ‖Xw − y‖²_2 .     (3.1)
Here X is an n × d matrix of features and y is an n-dimensional vector of labels in {−1, 1}. We aim to find the best linear classifier w. Note that when d > n, if there is a minimizer with loss 0 then there is an infinite number of global minimizers. The question remains: what solution does an algorithm find and how well does it perform on unseen data?
# 3.1 Non-adaptive methods | 1705.08292#10 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 11 | # 3.1 Non-adaptive methods
Most common non-adaptive methods will find the same solution for the least squares objective (3.1). Any gradient or stochastic gradient of R_S must lie in the span of the rows of X. Therefore, any method that is initialized in the row span of X (say, for instance at w = 0) and uses only linear combinations of gradients, stochastic gradients, and previous iterates must also lie in the row span of X. The unique solution that lies in the row span of X also happens to be the solution with minimum Euclidean norm. We thus denote w^SGD = X^T (XX^T)^{−1} y. Almost all non-adaptive methods like SGD, SGD with momentum, mini-batch SGD, gradient descent, Nesterov's method, and the conjugate gradient method will converge to this minimum norm solution. The minimum norm solutions have the largest margin out of all solutions of the equation Xw = y. Maximizing margin has a long and fruitful history in machine learning, and thus it is a pleasant surprise that gradient descent naturally finds a max-margin solution.
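A small numerical illustration of this point (our own sketch): gradient descent started at w = 0 lands on the same point as the explicit minimum-norm formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                                    # overparameterized: more features than examples
X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))

w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)    # w_SGD = X^T (X X^T)^{-1} y

w = np.zeros(d)                                   # gradient descent on 0.5 * ||Xw - y||^2, started in the row span
for _ in range(50000):
    w -= 1e-3 * X.T @ (X @ w - y)

print(np.linalg.norm(w - w_min_norm))             # ~0: GD converges to the minimum-norm interpolant
```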
# 3.2 Adaptive methods | 1705.08292#11 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 12 | # 3.2 Adaptive methods
Next, we consider adaptive methods where H_k is diagonal. While it is difficult to derive the general form of the solution, we can analyze special cases. Indeed, we can construct a variety of instances where adaptive methods converge to solutions with low ℓ∞ norm rather than low ℓ2 norm. For a vector x ∈ R^d, let sign(x) denote the function that maps each component of x to its sign.
Lemma 3.1 Suppose there exists a scalar c such that X sign(X^T y) = cy. Then, when initialized at w0 = 0, AdaGrad, Adam, and RMSProp all converge to the unique solution w ∝ sign(X^T y).
In other words, whenever there exists a solution of Xw = y that is proportional to sign(X T y), this is precisely the solution to which all of the adaptive gradient methods converge.
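A quick numerical check of the lemma on a toy instance (our own construction; any X, y with X sign(X^T y) = cy would do):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 10, 40
u = rng.standard_normal(d)
y = np.sign(rng.standard_normal(n))
X = np.outer(y, u)                    # then X sign(X^T y) = ||u||_1 * y, so the lemma's condition holds

w, G = np.zeros(d), np.zeros(d)
for _ in range(5000):                 # hand-rolled AdaGrad on 0.5 * ||Xw - y||^2, started at w = 0
    g = X.T @ (X @ w - y)
    G += g * g
    w -= 0.1 * g / (np.sqrt(G) + 1e-12)

print(np.ptp(np.abs(w)))                                   # ~0: all components share a single magnitude
print(np.array_equal(np.sign(w), np.sign(X.T @ y)))        # True: iterates follow the sign pattern of X^T y
```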
Proof We prove this lemma by showing that the entire trajectory of the algorithm consists of iterates whose components have constant magnitude. In particular, we will show that
wk = λk sign(X T y) ,
for some scalar λk. The initial point w0 = 0 satisfies the assertion with λ0 = 0.
Now, assume the assertion holds for all k ⤠t. Observe that | 1705.08292#12 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 13 | Now, assume the assertion holds for all k ≤ t. Observe that
∇R_S(w_k + γ_k(w_k − w_{k−1})) = X^T (X(w_k + γ_k(w_k − w_{k−1})) − y)
    = X^T {(λ_k + γ_k(λ_k − λ_{k−1})) X sign(X^T y) − y}
    = {(λ_k + γ_k(λ_k − λ_{k−1})) c − 1} X^T y =: μ_k X^T y,
where the last equation defines μ_k. Hence, letting g_k = ∇R_S(w_k + γ_k(w_k − w_{k−1})), we also have
H_k = diag( (Σ_{s=1}^k η_s g_s ∘ g_s)^{1/2} ) = diag( (Σ_{s=1}^k η_s μ_s²)^{1/2} |X^T y| ) =: ν_k diag(|X^T y|),
where |u| denotes the component-wise absolute value of a vector and the last equation defines ν_k. In sum,
w_{k+1} = w_k − α_k H_k^{−1} ∇̃f(w_k + γ_k(w_k − w_{k−1})) + β_k H_k^{−1} H_{k−1}(w_k − w_{k−1})
    = (λ_k − (α_k μ_k)/ν_k + (β_k ν_{k−1})/ν_k (λ_k − λ_{k−1})) sign(X^T y),
proving the claim.1 | 1705.08292#13 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 14 | proving the claim.1
This solution is far simpler than the one obtained by gradient methods, and it would be surprising if such a simple solution would perform particularly well. We now turn to showing that such solutions can indeed generalize arbitrarily poorly.
# 3.3 Adaptivity can overfit
Lemma 3.1 allows us to construct a particularly pernicious generative model where AdaGrad fails to find a solution that generalizes. This example uses infinite dimensions to simplify bookkeeping, but one could take the dimensionality to be 6n. Note that in deep learning, we often have a number of parameters equal to 25n or more [20], so this is not a particularly high dimensional example by contemporary standards. For i = 1, . . . , n, sample the label yi to be 1 with probability p and −1 with probability 1 − p for some p > 1/2. Let xi be an infinite dimensional vector with entries
x_ij = y_i  if j = 1;   1  if j = 2, 3;   1  if j = 4 + 5(i − 1), . . . , 4 + 5(i − 1) + 2(1 − y_i);   0  otherwise.
1 In the event that X^T y has a component equal to 0, we define 0/0 = 0 so that the update is well-defined.
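Here is a finite-dimensional sketch of this construction (dimension 3 + 5n rather than infinite), comparing the minimum-norm solution with the sign solution of Lemma 3.1 on fresh data; this is our own illustration, not the authors' experiment code:

```python
import numpy as np

def make_labels_features(n, p=0.6, rng=None, dim=None):
    """Sample labels and build the shared part of the feature matrix described above.
    Fresh (test) points keep only features 1-3: their unique features lie outside the training dimensions."""
    rng = rng or np.random.default_rng(0)
    y = np.where(rng.random(n) < p, 1.0, -1.0)
    X = np.zeros((n, dim or 3 + 5 * n))
    X[:, 0] = y                                   # feature 1: the class label
    X[:, 1:3] = 1.0                               # features 2, 3: constant
    return X, y

n = 100
X, y = make_labels_features(n)
for i in range(n):                                # add each training point's unique features
    k = 1 if y[i] == 1 else 5
    X[i, 3 + 5 * i: 3 + 5 * i + k] = 1.0

w_sgd = X.T @ np.linalg.solve(X @ X.T, y)         # minimum-norm interpolant (what SGD finds)
w_ada = np.sign(X.T @ y) / 4.0                    # sign solution of Lemma 3.1, scaled so that Xw = y

X_test, y_test = make_labels_features(2000, rng=np.random.default_rng(1), dim=X.shape[1])
print("SGD test error:     ", np.mean(np.sign(X_test @ w_sgd) != y_test))   # expected: 0.0
print("AdaGrad-style error:", np.mean(np.sign(X_test @ w_ada) != y_test))   # expected: about 1 - p
```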
# | | 1705.08292#14 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 15 | In other words, the first feature of x_i is the class label. The next 2 features are always equal to 1. After this, there is a set of features unique to x_i that are equal to 1. If the class label is 1, then there is 1 such unique feature. If the class label is −1, then there are 5 such features. Note that the only discriminative feature useful for classifying data outside the training set is the first one! Indeed, one can perform perfect classification using only the first feature. The other features are all useless. Features 2 and 3 are constant, and each of the remaining features only appear for one example in the data set. However, as we will see, algorithms without such a priori knowledge may not be able to learn these distinctions. Take n samples and consider the AdaGrad solution for minimizing (1/2)‖Xw − y‖². First we show that the conditions of Lemma 3.1 hold. Let b = Σ_{i=1}^n y_i and assume for the sake of simplicity that b > 0. This will happen with arbitrarily high probability for large enough n. Define u = X^T y and observe that | 1705.08292#15 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 | [
{
"id": "1703.10622"
},
{
"id": "1702.03849"
},
{
"id": "1611.07004"
}
] |
1705.08292 | 16 | u_j = Σ_{i=1}^n y_i x_ij = { n  if j = 1;   b  if j = 2, 3;   y_i  if j > 3 and x_ij = 1;   0  otherwise },      sign(u_j) = { 1  if j = 1;   1  if j = 2, 3;   y_i  if j > 3 and x_ij = 1;   0  otherwise }.
Thus we have ⟨sign(u), x_i⟩ = y_i + 2 + y_i(3 − 2y_i) = 4y_i as desired. Hence, the AdaGrad solution w^ada ∝ sign(u). In particular, w^ada has all of its components equal to ±τ for some positive constant τ. Now since w^ada has the same sign pattern as u, the first three components of w^ada are equal to each other. But for a new data point, x^test, the only features that are nonzero in both x^test and w^ada are the first three. In particular, we have
⟨w^ada, x^test⟩ = τ(y^test + 2) > 0.
Therefore, the AdaGrad solution will label all unseen data as a positive example! | 1705.08292#16 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
Now we turn to the minimum-norm solution. Let P and N denote the sets of positive and negative examples respectively, and let n_+ = |P| and n_− = |N|. Assuming α_i = α_+ when y_i = 1 and α_i = α_− when y_i = −1, the minimum-norm solution has the form w^SGD = X^T α = Σ_{i∈P} α_+ x_i + Σ_{j∈N} α_− x_j. These scalars can be found by solving XX^T α = y. In closed form, we have
$$\alpha_+ = \frac{4 n_- + 3}{8 n_+ n_- + 9 n_+ + 3 n_- + 3} \qquad \text{and} \qquad \alpha_- = -\,\frac{4 n_+ + 1}{8 n_+ n_- + 9 n_+ + 3 n_- + 3}. \tag{3.2}$$
The algebra required to compute these coefficients can be found in the Appendix. For a new data point x^test, again the only features that are nonzero in both x^test and w^SGD are the first three. Thus we have
$$\langle w^{SGD}, x^{test}\rangle = y^{test}\,(n_+\alpha_+ - n_-\alpha_-) + 2\,(n_+\alpha_+ + n_-\alpha_-).$$
Using (3.2), we see that whenever n_+ > n_−/3, the SGD solution makes no errors.
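Continuing the sketch above (again illustrative; it assumes X, ys, d, j, and make_example from the previous block), the minimum-norm interpolating solution can be computed with np.linalg.lstsq. With the class counts used there (n_+ = 110 > n_−/3), it labels fresh examples of both classes correctly, and the dual coefficients α = (XX^T)⁻¹y take exactly two values, one per class, as assumed above.

```python
# Continuation of the earlier sketch: the minimum-norm least-squares solution.
w_sgd = np.linalg.lstsq(X, ys, rcond=None)[0]    # minimum-norm solution of X w = y
print("train error:", np.mean(np.sign(X @ w_sgd) != ys))    # 0.0

x_pos, j = make_example(1.0, j, d)               # fresh positive example
x_neg, j = make_example(-1.0, j, d)              # fresh negative example
print(np.sign(x_pos @ w_sgd), np.sign(x_neg @ w_sgd))        # 1.0 -1.0

alpha = np.linalg.solve(X @ X.T, ys)             # dual coefficients
print(np.unique(np.round(alpha, 12)))            # two values: alpha_- < 0 < alpha_+
```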
A formal construction of this example using a data-generating distribution can be found in Appendix C. Though this generative model was chosen to illustrate extreme behavior, it shares salient features with many common machine learning instances. There are a few frequent features on which a good predictor can be based, though these might not be easy to identify from first inspection. Additionally, there are many other features which are sparse. On finite training data such sparse features look useful for prediction, since each one is discriminatory for a particular training example, but this is over-fitting and an artifact of having fewer training examples than features. Moreover, we will see shortly that adaptive methods typically generalize worse than their non-adaptive counterparts on real datasets.
# 4 Deep Learning Experiments
Having established that adaptive and non-adaptive methods can find different solutions in the convex setting, we now turn to an empirical study of deep neural networks to see whether we observe a similar discrepancy in generalization. We compare two non-adaptive methods, SGD and the heavy ball method (HB), to three popular adaptive methods: AdaGrad, RMSProp and Adam. We study performance on four deep learning problems: (C1) the CIFAR-10 image classification task, (L1)
| Name | Network type | Architecture | Dataset | Framework |
|------|----------------------------|-------------|---------------|------------|
| C1 | Deep Convolutional | cifar.torch | CIFAR-10 | Torch |
| L1 | 2-Layer LSTM | torch-rnn | War & Peace | Torch |
| L2 | 2-Layer LSTM + Feedforward | span-parser | Penn Treebank | DyNet |
| L3 | 3-Layer LSTM | emnlp2016 | Penn Treebank | Tensorflow |

Table 2: Summaries of the models we use for our experiments.²
character-level language modeling on the novel War and Peace, and (L2) discriminative parsing and (L3) generative parsing on Penn Treebank. In the interest of reproducibility, we use a network architecture for each problem that is either easily found online (C1, L1, L2, and L3) or produces state-of-the-art results (L2 and L3). Table 2 summarizes the setup for each application. We take care to make minimal changes to the architectures and their data pre-processing pipelines in order to best isolate the effect of each optimization algorithm.
We conduct each experiment 5 times from randomly initialized starting points, using the initialization scheme specified in each code repository. We allocate a pre-specified budget on the number of epochs used for training each model. When a development set was available, we chose the settings that achieved the best peak performance on the development set by the end of the fixed epoch budget. CIFAR-10 did not have an explicit development set, so we chose the settings that achieved the lowest training loss at the end of the fixed epoch budget.
Our experiments show the following primary findings: (i) Adaptive methods find solutions that generalize worse than those found by non-adaptive methods. (ii) Even when the adaptive methods achieve the same training loss or lower than non-adaptive methods, the development or test performance is worse. (iii) Adaptive methods often display faster initial progress on the training set, but their performance quickly plateaus on the development set. (iv) Though conventional wisdom suggests that Adam does not require tuning, we find that tuning the initial learning rate and decay scheme for Adam yields significant improvements over its default settings in all cases.
# 4.1 Hyperparameter Tuning
Optimization hyperparameters have a large influence on the quality of solutions found by optimization algorithms for deep neural networks. The algorithms under consideration have many hyperparameters: the initial step size α₀, the step decay scheme, the momentum value β₀, the momentum schedule, the smoothing term ε, the initialization scheme for the gradient accumulator, and the parameter controlling how to combine gradient outer products, to name a few. A grid search on a large space of hyperparameters is infeasible even with substantial industrial resources, and we found that the parameters that impacted performance the most were the initial step size and the step decay scheme. We left the remaining parameters with their default settings. We describe the differences between the default settings of Torch, DyNet, and Tensorflow in Appendix B for completeness.
To tune the step sizes, we evaluated a logarithmically-spaced grid of five step sizes. If the best performance was ever at one of the extremes of the grid, we would try new grid points so that the best performance was contained in the middle of the parameters. For example, if we initially tried step sizes 2, 1, 0.5, 0.25, and 0.125 and found that 2 was the best performing, we would have tried the step size 4 to see if performance was improved. If performance improved, we would have tried 8 and so on. We list the initial step sizes we tried in Appendix D.
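A sketch of this grid-expansion rule (our paraphrase, not the authors' tooling; evaluate is a hypothetical stand-in for training the model with a given initial step size and returning development performance):

```python
# Hypothetical helper illustrating the step-size search described above.
def tune_step_size(evaluate, grid=(2.0, 1.0, 0.5, 0.25, 0.125), max_rounds=10):
    grid = sorted(grid)
    scores = {a: evaluate(a) for a in grid}
    for _ in range(max_rounds):
        best = max(scores, key=scores.get)
        if best == max(grid):            # best at the top edge: try a larger step size
            new = max(grid) * 2.0
        elif best == min(grid):          # best at the bottom edge: try a smaller one
            new = min(grid) / 2.0
        else:                            # best is interior: stop expanding
            return best, scores
        grid = sorted(grid + [new])
        scores[new] = evaluate(new)
    return max(scores, key=scores.get), scores
```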
For step size decay, we explored two separate schemes, a development-based decay scheme (dev-decay) and a fixed-frequency decay scheme (fixed-decay). For dev-decay, we keep track of the best validation performance so far, and at each epoch decay the learning rate by a constant factor δ if the model does not attain a new best value. For fixed-decay, we decay the learning rate by a constant factor δ every k epochs. We recommend the dev-decay scheme when a development set is available;
²https://github.com/szagoruyko/cifar.torch; (2) torch-rnn: https://github.com/jcjohnson/torch-rnn; (3) span-parser: https://github.com/jhcross/span-parser; (4) emnlp2016: https://github.com/cdg720/emnlp2016.
(a) CIFAR-10 (Train) (b) CIFAR-10 (Test)

Figure 1: Training (left) and top-1 test error (right) on CIFAR-10. The annotations indicate where the best performance is attained for each method. The shading represents ± one standard deviation computed across five runs from random initial starting points. In all cases, adaptive methods are performing worse on both train and test than non-adaptive methods.
not only does it have fewer hyperparameters than the fixed-frequency scheme, but our experiments also show that it produces results comparable to, or better than, the fixed-decay scheme.
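The dev-decay rule itself is a few lines; the sketch below is our illustration (train_epoch and evaluate are hypothetical stand-ins, and δ = 0.9 matches the value used in the parsing experiments):

```python
# Hypothetical training loop illustrating the dev-decay scheme described above.
def fit_with_dev_decay(train_epoch, evaluate, lr, epochs, delta=0.9):
    best = float("-inf")
    for _ in range(epochs):
        train_epoch(lr)        # one pass over the training data at the current rate
        score = evaluate()     # development-set performance (higher is better)
        if score > best:
            best = score       # new best value: keep the current learning rate
        else:
            lr *= delta        # no improvement: decay by the constant factor delta
    return best, lr
```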
# 4.2 Convolutional Neural Network
We used the VGG+BN+Dropout network for CIFAR-10 from the Torch blog [23], which in prior work achieves a baseline test error of 7.55%. Figure 1 shows the learning curve for each algorithm on both the training and test dataset.
We observe that the solutions found by SGD and HB do indeed generalize better than those found by adaptive methods. The best overall test error found by a non-adaptive algorithm, SGD, was 7.65 ± 0.14%, whereas the best adaptive method, RMSProp, achieved a test error of 9.60 ± 0.19%.
Early on in training, the adaptive methods appear to be performing better than the non-adaptive methods, but starting at epoch 50, even though the training error of the adaptive methods is still lower, SGD and HB begin to outperform adaptive methods on the test error. By epoch 100, the performance of SGD and HB surpasses all adaptive methods on both train and test. Among all adaptive methods, AdaGrad's rate of improvement flatlines the earliest. We also found that by increasing the step size, we could drive the performance of the adaptive methods down in the first 50 or so epochs, but the aggressive step size made the flatlining behavior worse, and no step decay scheme could fix the behavior.
# 4.3 Character-Level Language Modeling
Using the torch-rnn library, we train a character-level language model on the text of the novel War and Peace, running for a fixed budget of 200 epochs. Our results are shown in Figures 2(a) and 2(b).
Under the fixed-decay scheme, the best configuration for all algorithms except AdaGrad was to decay relatively late with regards to the total number of epochs, either 60% or 80% of the way through the total number of epochs, and by a large amount, dividing the step size by 10. The dev-decay scheme paralleled (within the same standard deviation) the results of the exhaustive search over the decay frequency and amount; we report the curves from the fixed policy.
Overall, SGD achieved the lowest test loss at 1.212 ± 0.001. AdaGrad has fast initial progress, but flatlines. The adaptive methods appear more sensitive to the initialization scheme than non-adaptive methods, displaying a higher variance on both train and test. Surprisingly, RMSProp closely trails SGD on test loss, confirming that it is not impossible for adaptive methods to find solutions that generalize well. We note that there are step configurations for RMSProp that drive the training loss
below that of SGD, but these configurations cause erratic behavior on test, driving the test error of RMSProp above Adam.
# 4.4 Constituency Parsing
A constituency parser is used to predict the hierarchical structure of a sentence, breaking it down into nested clause-level, phrase-level, and word-level units. We carry out experiments using two state-of-the-art parsers: the stand-alone discriminative parser of Cross and Huang [2], and the generative reranking parser of Choe and Charniak [1]. In both cases, we use the dev-decay scheme with δ = 0.9 for learning rate decay.
Discriminative Model. Cross and Huang [2] develop a transition-based framework that reduces constituency parsing to a sequence prediction problem, giving a one-to-one correspondence between parse trees and sequences of structural and labeling actions. Using their code with the default settings, we trained for 50 epochs on the Penn Treebank [11], comparing labeled F1 scores on the training and development data over time. RMSProp was not implemented in the version of DyNet we used, so we omit it from these experiments. Results are shown in Figures 2(c) and 2(d).
We find that SGD obtained the best overall performance on the development set, followed closely by HB and Adam, with AdaGrad trailing far behind. The default configuration of Adam without learning rate decay actually achieved the best overall training performance by the end of the run, but was notably worse than tuned Adam on the development set.
Interestingly, Adam achieved its best development F1 of 91.11 quite early, after just 6 epochs, whereas SGD took 18 epochs to reach this value and didn't reach its best F1 of 91.24 until epoch 31. On the other hand, Adam continued to improve on the training set well after its best development performance was obtained, while the peaks for SGD were more closely aligned.
Generative Model. Choe and Charniak [1] show that constituency parsing can be cast as a language modeling problem, with trees being represented by their depth-first traversals. This formulation requires a separate base system to produce candidate parse trees, which are then rescored by the generative model. Using an adapted version of their code base,³ we retrained their model for 100 epochs on the Penn Treebank. However, to reduce computational costs, we made two minor changes: (a) we used a smaller LSTM hidden dimension of 500 instead of 1500, finding that performance decreased only slightly; and (b) we accordingly lowered the dropout ratio from 0.7 to 0.5. Since they demonstrated a high correlation between perplexity (the exponential of the average loss) and labeled F1 on the development set, we explored the relation between training and development perplexity to avoid any conflation with the performance of a base parser.
Our results are shown in Figures 2(e) and 2(f). On development set performance, SGD and HB obtained the best perplexities, with SGD slightly ahead. Despite having one of the best performance curves on the training dataset, Adam achieves the worst development perplexities.
# 5 Conclusion
Despite the fact that our experimental evidence demonstrates that adaptive methods are not advantageous for machine learning, the Adam algorithm remains incredibly popular. We are not sure exactly as to why, but hope that our step-size tuning suggestions make it easier for practitioners to use standard stochastic gradient methods in their research. In our conversations with other researchers, we have surmised that adaptive gradient methods are particularly popular for training GANs [18, 5] and Q-learning with function approximation [13, 9]. Both of these applications stand out because they are not solving optimization problems. It is possible that the dynamics of Adam are accidentally well matched to these sorts of optimization-free iterative search procedures. It is also possible that carefully tuned stochastic gradient methods may work as well or better in both of these applications.
³While the code of Choe and Charniak treats the entire corpus as a single long example, relying on the network to reset itself upon encountering an end-of-sentence token, we use the more conventional approach of resetting the network for each example. This reduces training efficiency slightly when batches contain examples of different lengths, but removes a potential confounding factor from our experiments.
It is an exciting direction of future work to determine which of these possibilities is true and to understand better as to why.
# Acknowledgements
The authors would like to thank Pieter Abbeel, Moritz Hardt, Tomer Koren, Sergey Levine, Henry Milner, Yoram Singer, and Shivaram Venkataraman for many helpful comments and suggestions. RR is generously supported by DOE award AC02-05CH11231. MS and AW are supported by NSF Graduate Research Fellowships. NS is partially supported by NSF-IIS-13-02662 and NSF-IIS- 15-46500, an Inter ICRI-RI award and a Google Faculty Award. BR is generously supported by NSF award CCF-1359814, ONR awards N00014-14-1-0024 and N00014-17-1-2191, the DARPA Fundamental Limits of Learning (Fun LoL) Program, a Sloan Research Fellowship, and a Google Faculty Award.
[Figure 2 legend: SGD, HB, AdaGrad, RMSProp, Adam, Adam (Default).]

(a) War and Peace (Training Set) (b) War and Peace (Test Set) (c) Discriminative Parsing (Training Set) (d) Discriminative Parsing (Development Set) (e) Generative Parsing (Training Set) (f) Generative Parsing (Development Set)
Figure 2: Performance curves on the training data (left) and the development/test data (right) for three experiments on natural language tasks. The annotations indicate where the best performance is attained for each method. The shading represents one standard deviation computed across five runs from random initial starting points.
# References
[1] Do Kook Choe and Eugene Charniak. Parsing as language modeling. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2331–2336. The Association for Computational Linguistics, 2016.
[2] James Cross and Liang Huang. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pages 1–11. The Association for Computational Linguistics, 2016.
[3] John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.

[4] Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
[5] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv:1611.07004, 2016.
[6] Andrej Karpathy. A peek at trends in machine learning. https://medium.com/@karpathy/a-peek-at-trends-in-machine-learning-ab8a1085a106. Accessed: 2017-05-17.
[7] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In The International Conference on Learning Representations (ICLR), 2017.
[8] D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR), 2015.
[9] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2016.
[10] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. arXiv:1703.10622, 2017.
[11] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
[12] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.
[13] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.
[14] Behnam Neyshabur, Ruslan Salakhutdinov, and Nathan Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Neural Information Processing Systems (NIPS), 2015.
[15] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations (ICLR), 2015.
[16] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. arXiv:1702.03849, 2017.
[17] Benjamin Recht, Moritz Hardt, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
[18] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In Proceedings of The International Conference on Machine Learning (ICML), 2016.
[19] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
[20] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[21] T. Tieleman and G. Hinton. Lecture 6.5 - RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
[22] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007.
[23] Sergey Zagoruyko. Torch blog. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
# A Full details of the minimum norm solution from Section 3.3
Full Details. The simplest derivation of the minimum-norm solution uses the kernel trick. We know that the optimal solution has the form w^SGD = X^T α where α = K⁻¹y and K = XX^T. Note that
$$K_{ij} = \begin{cases} 4 & \text{if } i = j \text{ and } y_i = 1 \\ 8 & \text{if } i = j \text{ and } y_i = -1 \\ 3 & \text{if } i \neq j \text{ and } y_i y_j = 1 \\ 1 & \text{if } i \neq j \text{ and } y_i y_j = -1. \end{cases}$$
Positing that α_i = α_+ if y_i = 1 and α_i = α_− if y_i = −1 leaves us with the equations
$$(3 n_+ + 1)\,\alpha_+ + n_-\,\alpha_- = 1, \qquad n_+\,\alpha_+ + (3 n_- + 3)\,\alpha_- = -1.$$
Solving this system of equations yields (3.2).
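A quick numerical sanity check of this structure (ours, illustrative only): rebuilding a small instance of the dataset from Section 3.3 confirms that K = XX^T contains exactly the four entry values above and that α = K⁻¹y is constant within each class.

```python
# Illustrative numpy check of the kernel structure used in this derivation.
import numpy as np

def build(n_pos, n_neg, d=400):
    ys = np.array([1.0] * n_pos + [-1.0] * n_neg)
    rows, j = [], 3
    for y in ys:
        k = 1 if y == 1 else 5          # 1 unique feature for +1 examples, 5 for -1
        x = np.zeros(d)
        x[0] = y
        x[1:3] = 1.0
        x[j:j + k] = 1.0
        j += k
        rows.append(x)
    return np.array(rows), ys

X, ys = build(20, 15)
K = X @ X.T
print(sorted(set(np.round(K.flatten(), 8))))   # [1.0, 3.0, 4.0, 8.0]

alpha = np.linalg.solve(K, ys)
print(np.unique(np.round(alpha, 12)))          # exactly two values, one per class
```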
# B Differences between Torch, DyNet, and Tensorflow
|  | Torch | Tensorflow | DyNet |
|---|-------|------------|-------|
| SGD Momentum | 0 | No default | 0.9 |
| AdaGrad Initial Mean | 0 | 0.1 | 0 |
| AdaGrad ε | 1e-10 | Not used | 1e-20 |
| RMSProp Initial Mean | 0 | 1.0 | - |
| RMSProp β | 0.99 | 0.9 | - |
| RMSProp ε | 1e-8 | 1e-10 | - |
| Adam β₁ | 0.9 | 0.9 | 0.9 |
| Adam β₂ | 0.999 | 0.999 | 0.999 |

Table 3: Default hyperparameters for algorithms in deep learning frameworks.
Table 3 lists the default values of the parameters for the various deep learning packages used in our experiments. In Torch, the Heavy Ball algorithm is callable simply by changing default momentum away from 0 with nesterov=False. In Tensorflow and DyNet, SGD with momentum is implemented separately from ordinary SGD. For our Heavy Ball experiments we use a constant momentum of β = 0.9.
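These defaults are not interchangeable. As a small illustration (plain numpy, not any framework's actual code), the first AdaGrad-style update on the same gradient looks quite different depending on whether the squared-gradient accumulator starts at 0 with ε = 1e-10 (the Torch-style default) or at 0.1 with no ε (the Tensorflow-style default from Table 3):

```python
# Effect of accumulator initialization on a single AdaGrad-style update (illustration).
import numpy as np

g = np.array([1e-3, 1e-1, 1.0])   # one gradient with very different coordinate scales
lr = 0.01

def adagrad_step(g, lr, init_acc, eps):
    acc = init_acc + g ** 2        # accumulated squared gradients after one step
    return lr * g / (np.sqrt(acc) + eps)

print(adagrad_step(g, lr, init_acc=0.0, eps=1e-10))  # roughly [0.01, 0.01, 0.01]
print(adagrad_step(g, lr, init_acc=0.1, eps=0.0))    # small coordinates barely move
```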
# C Data-generating distribution
We sketch here how the example from Section 3.3 can be modified to use a data-generating distribution. To start, let D be a uniform distribution over N examples constructed as before, and let D = {(x_1, y_1), . . . , (x_n, y_n)} be a training set consisting of n i.i.d. draws from D. We will ultimately want to take N to be large enough so that the probability of a repeated training example is small.
Let E be the event that there is a repeated training example. We have by a simple union bound that
$$P[E] = P\!\left[\,\bigcup_{i=1}^{n}\bigcup_{j=i+1}^{n}\{x_i = x_j\}\right] \le \sum_{i=1}^{n}\sum_{j=i+1}^{n} P[x_i = x_j].$$
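As a worked instance: since two independent draws from the uniform distribution over the N constructed examples coincide with probability 1/N, the right-hand side is at most n(n − 1)/(2N). With n = 10³ training points and N = 10⁹, the probability of any repeat is below 5 × 10⁻⁴, and it can be driven as low as desired by increasing N.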
If the training set has no repeats, the result from Section 3.3 tells us that SGD will learn a perfect classifier, while AdaGrad will find a solution that correctly classifies the training examples but predicts ŷ = 1 for all unseen data points. Hence, conditioned on ¬E, the error for SGD is
$$P_{(x,y)\sim\mathcal{D}}\!\left[\operatorname{sign}\!\left(\langle w^{SGD}, x\rangle\right) \neq y \mid \neg E\right] = 0,$$
while the error for AdaGrad is