doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.02163 | 8 | θ*_D (θG) = argmax_{θD} f (θG, θD) , (3)
where f is commonly chosen to be
f (θG, θD) = E_{x∼pdata} [log (D (x; θD))] + E_{z∼N (0,I)} [log (1 − D (G (z; θG) ; θD))] . (4) Here x ∈ X is the data variable, z ∈ Z is the latent variable, pdata is the data distribution, the discriminator D (·; θD) : X → [0, 1] outputs the estimated probability that a sample x comes from the data distribution, θD and θG are the discriminator and generator parameters, and the generator function G (·; θG) : Z → X transforms a sample in the latent space into a sample in the data space.
For the minimax loss in Eq. 4, the optimal discriminator D*(x) is a known smooth function of the generator probability pG (x) (Goodfellow et al., 2014),
D*(x) = pdata (x) / (pdata (x) + pG (x)) . (5) | 1611.02163#8 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
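The chunk from 1611.02163 above states the GAN objective: Eq. 3 defines the optimal discriminator parameters, Eq. 4 the value function f, and Eq. 5 the optimal discriminator as a density ratio. A minimal NumPy sketch of these quantities, assuming the caller supplies the discriminator D, the generator G, and sampled batches, is:

```python
import numpy as np

def gan_value(D, G, x_data, z_latent, eps=1e-8):
    """Eq. 4: f(theta_G, theta_D) = E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    with expectations approximated by sample means over the given batches."""
    real_term = np.mean(np.log(D(x_data) + eps))
    fake_term = np.mean(np.log(1.0 - D(G(z_latent)) + eps))
    return real_term + fake_term

def optimal_discriminator(p_data, p_g):
    """Eq. 5: D*(x) = p_data(x) / (p_data(x) + p_G(x)), evaluated pointwise."""
    return p_data / (p_data + p_g)
```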
1611.02205 | 8 | 2.3 OPENAI GYM
The OpenAI Gym (Brockman et al., 2016) is an open source platform whose purpose is to create an interface between RL environments and algorithms for evaluation and comparison purposes. OpenAI Gym is currently very popular due to the large number of environments it supports, for example ALE, Go, MountainCar and VizDoom (Zhu et al., 2016), an environment for learning the 3D first-person-shooter game "Doom". OpenAI Gym's recent appearance and wide usage indicate the growing interest and research done in the field of RL.
2.4 OPENAI UNIVERSE
Universe (Universe, 2016) is a platform within the OpenAI framework in which RL algorithms can train on over a thousand games. Universe includes very advanced games such as GTA V and Portal, as well as other tasks (e.g. browser tasks). Unlike RLE, Universe doesn't run the games locally and requires a VNC interface to a server that runs the games. This leads to a lower frame rate and thus longer training times.
2.5 MALMO | 1611.02205#8 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
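The OpenAI Gym interface mentioned in the chunk above exposes environments through a reset/step loop. A minimal usage sketch with a random policy is shown below; it assumes the classic `gym` package API (newer Gym/Gymnasium releases return extra values from `reset` and `step`):

```python
import gym

env = gym.make("CartPole-v1")        # any installed environment id works here
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()          # placeholder random policy
    obs, reward, done, info = env.step(action)  # classic 4-tuple step API
    episode_return += reward
env.close()
print("episode return:", episode_return)
```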
1611.02247 | 8 | advantage approximations. The method helps unify policy gradient and actor-critic methods: it can be seen as using the off-policy critic to reduce variance in policy gradient or using on-policy Monte Carlo returns to correct for bias in the critic gradient. We further provide theoretical analysis of the control variate, and derive two additional variants of Q-Prop. The method can be easily incorporated into any policy gradient algorithm. We show that Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE) (Schulman et al., 2015; 2016), and improved stability over deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) across a repertoire of continuous control tasks. | 1611.02247#8 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
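Q-Prop's key idea, per the chunk above, is to use the off-policy critic as a control variate that reduces the variance of the policy gradient without adding bias. The toy NumPy demo below illustrates the control-variate principle on a synthetic estimator (it is not the actual Q-Prop construction; the estimator and coefficient are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)

f = x**2 + x        # quantity whose mean E[f] = 1 we want to estimate
g = x               # control variate with known mean E[g] = 0

plain = f
controlled = f - 1.0 * (g - 0.0)   # optimal coefficient Cov(f, g) / Var(g) = 1 here

print("plain:      mean %.3f  var %.3f" % (plain.mean(), plain.var()))
print("controlled: mean %.3f  var %.3f" % (controlled.mean(), controlled.var()))
```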
1611.01989 | 9 | Ranking. While we focus on the search problem in this work, we briefly mention the ranking problem here. A popular choice for ranking is to choose the shortest program consistent with input-output examples (Gulwani, 2016). A more sophisticated approach is employed by FlashFill (Singh & Gulwani, 2015). It works in a manner similar to max-margin structured prediction, where known ground truth programs are given, and the learning task is to assign scores to programs such that the ground truth programs score higher than other programs that satisfy the input-output specification.
# 3 LEARNING INDUCTIVE PROGRAM SYNTHESIS (LIPS)
In this section we outline the general approach that we follow in this work, which we call Learning Inductive Program Synthesis (LIPS). The details of our instantiation of LIPS appear in Sect. 4. The components of LIPS are (1) a DSL specification, (2) a data-generation procedure, (3) a machine learning model that maps from input-output examples to program attributes, and (4) a search procedure that searches program space in an order guided by the model from (3). The framework is related to the formulation of Menon et al. (2013); the relationship and key differences are discussed in Sect. 6. | 1611.01989#9 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
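Component (4) of LIPS, described in the chunk above, searches over programs in an order guided by the predicted attribute distribution q(a | E). The sketch below shows one way such guided enumeration could look; `enumerate_programs`, `satisfies`, and `attribute` are hypothetical helpers, not DeepCoder's actual search implementation:

```python
def guided_search(q, examples, enumerate_programs, satisfies, attribute):
    """Try candidate programs in decreasing order of q(A(P) | E).

    q          : dict mapping (hashable) attribute values to predicted probabilities
    examples   : iterable of (inputs, output) pairs E
    attribute  : A(P), mapping a program to a hashable attribute value
    """
    candidates = sorted(enumerate_programs(),
                        key=lambda P: q.get(attribute(P), 0.0),
                        reverse=True)
    for P in candidates:
        if all(satisfies(P, inp, out) for inp, out in examples):
            return P
    return None   # no consistent program found in the enumerated space
```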
1611.02163 | 9 | When the generator loss in Eq. 2 is rewritten directly in terms of pG (x) and Eq. 5 rather than θG and θ*_D (θG), then it is similarly a smooth function of pG (x). These smoothness guarantees are typically lost when D (x; θD) and G (z; θG) are drawn from parametric families. They nonetheless suggest that the true generator objective in Eq. 2 will often be well behaved, and is a desirable target for direct optimization. Explicitly solving for the optimal discriminator parameters θ*_D (θG) for every update step of the generator G is computationally infeasible for discriminators based on neural networks. Therefore this minimax optimization problem is typically solved by alternating gradient descent on θG and ascent on θD. The optimal solution θ* = {θ*_G, θ*_D} is a fixed point of these iterative learning dynamics. Additionally, if f (θG, θD) is convex in θG and concave in θD, then alternating gradient descent (ascent) trust region updates are guaranteed to | 1611.02163#9 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
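The chunk above notes that the GAN minimax problem is typically solved by alternating gradient descent on θG and ascent on θD. A schematic sketch of that alternation, assuming the caller supplies gradient functions for f with respect to each parameter set, is:

```python
def alternating_gd(theta_G, theta_D, grad_f_wrt_G, grad_f_wrt_D,
                   lr=1e-3, steps=10_000):
    """Alternate one ascent step on theta_D with one descent step on theta_G."""
    for _ in range(steps):
        theta_D = theta_D + lr * grad_f_wrt_D(theta_G, theta_D)  # ascent on f
        theta_G = theta_G - lr * grad_f_wrt_G(theta_G, theta_D)  # descent on f
    return theta_G, theta_D
```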
1611.02205 | 9 | 2.5 MALMO
Malmo (Johnson et al., 2016) is an artificial intelligence experimentation platform of the famous game "Minecraft". Although Malmo consists of only a single game, it presents numerous challenges since the "Minecraft" game can be configured differently each time. The input to the RL algorithms includes specific features indicating the "state" of the game and the current reward.
2.6 DEEPMIND LAB
DeepMind Lab (?) is a first-person 3D platform environment which allows training RL algorithms on several different challenges: static/random map navigation, collecting fruit (a form of reward) and a laser-tag challenge where the objective is to tag the opponents controlled by the in-game AI. In LAB the agent observations are the game screen (with an additional depth channel) and the velocity of the character. LAB supports four games (one game - four different modes).
2.7 DEEP Q-LEARNING | 1611.02205#9 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.01989 | 10 | (1) DSL and Attributes. The choice of DSL is important in LIPS, just as it is in any program synthesis system. It should be expressive enough to capture the problems that we wish to solve, but restricted as much as possible to limit the difficulty of the search. In LIPS we additionally specify an attribute function A that maps programs P of the DSL to finite attribute vectors a = A(P ). (Attribute vectors of different programs need not have equal length.) Attributes serve as the link between the machine learning and the search component of LIPS: the machine learning model predicts a distribution q(a | E), where E is the set of input-output examples, and the search procedure aims to search over programs P as ordered by q(A(P ) | E). Thus an attribute is useful if it is both predictable from input-output examples, and if conditioning on its value significantly reduces the effective size of the search space. | 1611.01989#10 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
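The attribute function A described in the chunk above maps a program to a finite attribute vector; in DeepCoder these are indicators for which DSL functions the program uses. A small sketch of such an encoding, over an illustrative (partial) function vocabulary, is:

```python
FUNCTIONS = ["HEAD", "LAST", "TAKE", "DROP", "SORT", "REVERSE", "SUM",
             "MAP", "FILTER", "COUNT", "ZIPWITH", "SCANL1"]

def attribute_vector(program_tokens):
    """A(P): binary vector marking which DSL functions appear in the program."""
    present = set(program_tokens)
    return [1 if f in present else 0 for f in FUNCTIONS]

print(attribute_vector(["SORT", "TAKE", "MAP", "SUM"]))
```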
1611.02163 | 10 | θD) is convex in θG and concave in θD, then alternating gradient descent (ascent) trust region updates are guaranteed to converge to the fixed point, under certain additional weak assumptions (Juditsky et al., 2011). However in practice f (θG, θD) is typically very far from convex in θG and concave in θD, and updates are not constrained in an appropriate way. As a result GAN training suffers from mode collapse, undamped oscillations, and other problems detailed in Section 1.1. In order to address these difficulties, we will introduce a surrogate objective function fK (θG, θD) for training the generator which more closely resembles the true generator objective f (θG, θ* | 1611.02163#10 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 10 | 2.7 DEEP Q-LEARNING
In our work, we used several variants of the Deep Q-Network algorithm (DQN) (Mnih et al., 2015), an RL algorithm whose goal is to find an optimal policy (i.e., given a current state, choose the action that maximizes the final score). The state of the game is simply the game screen, and the action is a combination of joystick buttons that the game responds to (i.e., moving, jumping). DQN learns through trial and error while trying to estimate the "Q-function", which predicts the cumulative discounted reward at the end of the episode given the current state and action while following a policy π. The Q-function is represented using a convolutional neural network that receives the screen as input and predicts the best possible action at its output. The Q-function weights θ are updated according to:
θ_{t+1}(s_t, a_t) = θ_t + α ( R_{t+1} + γ max_a Q_t(s_{t+1}, a; θ') − Q_t(s_t, a_t; θ_t) ) ∇_θ Q_t(s_t, a_t; θ_t), (1)
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
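Eq. 1 in the chunk above is the temporal-difference update used by DQN. The PyTorch sketch below performs one such gradient step against a frozen target network; the names, tensor shapes, and Huber loss are illustrative assumptions, not the RLE authors' reference code:

```python
import torch
import torch.nn.functional as F

def dqn_update(Q, Q_target, optimizer, s, a, r, s_next, done, gamma=0.99):
    """One gradient step on the TD error; Q(s) returns [batch, n_actions] values."""
    with torch.no_grad():
        # R_{t+1} + gamma * max_a Q(s_{t+1}, a; theta'), zeroed at episode ends
        target = r + gamma * (1.0 - done) * Q_target(s_next).max(dim=1).values
    q_sa = Q(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s_t, a_t; theta)
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```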
1611.02247 | 10 | Reinforcement learning (RL) aims to learn a policy for an agent such that it behaves optimally according to a reward function. At a time step t and state s_t, the agent chooses an action a_t according to its policy π(a_t|s_t), the state of the agent and the environment changes to a new state s_{t+1} according to dynamics p(s_{t+1}|s_t, a_t), the agent receives a reward r(s_t, a_t), and the process continues. Let R_t denote a γ-discounted cumulative return from t for an infinite horizon problem, i.e. R_t = Σ_{t'=t}^∞ γ^{t'−t} r(s_{t'}, a_{t'}). The goal of reinforcement learning is to maximize the expected return J(θ) = E_π[R_0] with respect to the policy parameters θ. In this section, we review several standard techniques for performing this optimization, and in the next section, we will discuss our proposed Q-Prop algorithm that combines the strengths of these approaches to achieve efficient, stable RL. Monte Carlo policy gradient refers to policy gradient methods that use full Monte Carlo returns, e.g. REINFORCE (Williams, 1992) and TRPO (Schulman et al., 2015), and policy | 1611.02247#10 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
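The return R_t in the chunk above is the γ-discounted sum of rewards from step t onward. A short NumPy sketch computing R_t at every step of a finite trajectory (a truncation of the infinite-horizon definition) is:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{t' >= t} gamma^(t'-t) * r_{t'} for a finite trajectory."""
    R = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        R[t] = running
    return R

print(discounted_returns([1.0, 0.0, 2.0], gamma=0.5))   # -> [1.5, 1.0, 2.0]
```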
1611.01989 | 11 | Possible attributes are the (perhaps position-dependent) presence or absence of high-level functions (e.g., does the program contain or end in a call to SORT). Other possible attributes include control flow templates (e.g., the number of loops and conditionals). In the extreme case, one may set A to the identity function, in which case the attribute is equivalent to the program; however, in our experiments we find that performance is improved by choosing a more abstract attribute function.
(2) Data Generation. Step 2 is to generate a dataset ((P(n), a(n), E(n)))_{n=1}^N of programs P(n) in the chosen DSL, their attributes a(n), and accompanying input-output examples E(n). Different approaches are possible, ranging from enumerating valid programs in the DSL and pruning, to training a more sophisticated generative model of programs in the DSL. The key in the LIPS formulation is to ensure that it is feasible to generate a large dataset (ideally millions of programs). | 1611.01989#11 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 11 | 2.2 UNROLLING GANS
A local optimum of the discriminator parameters θ*_D can be expressed as the fixed point of an iterative optimization procedure,
θ_D^0 = θD , (6)
θ_D^{k+1} = θ_D^k + η^k df (θG, θ_D^k) / dθ_D^k , (7)
θ*_D (θG) = lim_{k→∞} θ_D^k , (8)
where η^k is the learning rate schedule. For clarity, we have expressed Eq. 7 as a full batch steepest gradient ascent equation. More sophisticated optimizers can be similarly unrolled. In our experiments we unroll Adam (Kingma & Ba, 2014).
By unrolling for K steps, we create a surrogate objective for the update of the generator, fK (θG, θD) = f (θG, θ_D^K (θG, θD)) . (9) When K = 0 this objective corresponds exactly to the standard GAN objective, while as K → ∞ it corresponds to the true generator objective function f (θG, θ*_D (θG)). By adjusting the number of unrolling steps K, we are thus able to interpolate between standard GAN training dynamics with their associated pathologies, and more costly gradient descent on the true generator loss. | 1611.02163#11 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
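Eqs. 6–9 in the chunk above define the surrogate f_K by unrolling K discriminator updates from the current θD. The sketch below uses plain gradient ascent for the inner steps (the paper unrolls Adam) and assumes the caller provides f and its gradient with respect to θD:

```python
def surrogate_fK(f, grad_f_wrt_D, theta_G, theta_D, K, eta=1e-2):
    """f_K(theta_G, theta_D) = f(theta_G, theta_D^K), Eq. 9."""
    theta_D_k = theta_D                              # Eq. 6: theta_D^0 = theta_D
    for _ in range(K):                               # Eq. 7: K unrolled ascent steps
        theta_D_k = theta_D_k + eta * grad_f_wrt_D(theta_G, theta_D_k)
    return f(theta_G, theta_D_k)                     # K = 0 recovers the standard GAN loss
```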
1611.02205 | 11 | where s_t, s_{t+1} are the current and next states, a_t is the action chosen, α is the step size, γ is the discounting factor, R_{t+1} is the reward received by applying a_t at s_t. θ' represents the previous weights of the network that are updated periodically. Other than DQN, we examined two leading algorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al., 2015), a DQN based algorithm with a modified network update rule, and Dueling Double DQN (Wang et al., 2015), a modification of D-DQN's architecture in which the Q-function is modeled using a state (screen) dependent estimator and an action dependent estimator. | 1611.02205#11 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
3 THE RETRO LEARNING ENVIRONMENT
3.1 SUPER NINTENDO ENTERTAINMENT SYSTEM
The Super Nintendo Entertainment System (SNES) is a home video game console developed by Nintendo and released in 1990. A total of 783 games were released, among them, the iconic Super Mario World, Donkey Kong Country and The Legend of Zelda. Table (1) presents a comparison between Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES and Genesis games are far more complex.
3.2 IMPLEMENTATION | 1611.02205#11 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.01989 | 12 | (3) Machine Learning Model. The machine learning problem is to learn a distribution of attributes given input-output examples, q(a | E). There is freedom to explore a large space of models, so long as the input component can encode E, and the output is a proper distribution over attributes (e.g., if attributes are a fixed-size binary vector, then a neural network with independent sigmoid
outputs is appropriate; if attributes are variable size, then a recurrent neural network output could be used). Attributes are observed at training time, so training can use a maximum likelihood objective.
(4) Search. The search procedure uses the predicted q(a | E) to guide the search. We describe specific approaches in the next section.
# 4 DEEPCODER
Here we describe DeepCoder, our instantiation of LIPS including a choice of DSL, a data generation strategy, models for encoding input-output sets, and algorithms for searching over program space.
4.1 DOMAIN SPECIFIC LANGUAGE AND ATTRIBUTES | 1611.01989#12 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
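As the chunk above notes, fixed-size binary attributes can be predicted by a network with independent sigmoid outputs trained by maximum likelihood. A minimal PyTorch sketch of such a model is shown below; the layer sizes and attribute count are illustrative, not DeepCoder's published architecture:

```python
import torch
import torch.nn as nn

class AttributePredictor(nn.Module):
    """Maps an encoding of the I/O examples E to per-attribute probabilities q(a | E)."""
    def __init__(self, example_dim=256, hidden=256, n_attributes=34):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(example_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_attributes),
        )

    def forward(self, encoded_examples):
        # Independent sigmoids: one Bernoulli per attribute (e.g. function presence).
        return torch.sigmoid(self.net(encoded_examples))

model = AttributePredictor()
loss_fn = nn.BCELoss()   # maximum-likelihood training against observed binary attributes
```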
1611.02163 | 12 | 2.3 PARAMETER UPDATES
The generator and discriminator parameter updates using this surrogate loss are
θG ← θG − η dfK (θG, θD) / dθG , (10)
θD ← θD + η df (θG, θD) / dθD . (11)
For clarity we use full batch steepest gradient descent (ascent) with stepsize η above, while in experiments we instead use minibatch Adam for both updates. The gradient in Eq. 10 requires backpropagating through the optimization process in Eq. 7. A clear description of differentiation through
[Figure 1 graphic omitted: computation graph showing the forward pass, the unrolled SGD steps on θD, and the gradient paths used for the θG and θD updates.] | 1611.02163#12 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
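Eq. 10 in the chunk above requires backpropagating through the unrolled discriminator optimization. The toy PyTorch sketch below differentiates through a single inner ascent step by keeping it on the autograd graph (`create_graph=True`); the scalar parameters and quadratic f are stand-ins, not the paper's Adam unrolling:

```python
import torch

theta_G = torch.tensor(0.5, requires_grad=True)
theta_D = torch.tensor(-0.3, requires_grad=True)
eta = 0.1

def f(g, d):                 # stand-in for the GAN value function f(theta_G, theta_D)
    return g * d - 0.5 * d**2

# One unrolled ascent step on the discriminator, kept differentiable.
grad_D = torch.autograd.grad(f(theta_G, theta_D), theta_D, create_graph=True)[0]
theta_D_1 = theta_D + eta * grad_D

# Surrogate f_1 and its gradient w.r.t. theta_G, which includes the second term of Eq. 12.
f_1 = f(theta_G, theta_D_1)
grad_G = torch.autograd.grad(f_1, theta_G)[0]
print(grad_G)
```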
1611.02205 | 12 | 3.2 IMPLEMENTATION
To allow easier integration with current platforms and algorithms, we based our environment on the ALE, with the aim of maintaining as much of its interface as possible. While the ALE is highly coupled with the Atari emulator, Stella1, RLE takes a different approach and separates the learning environment from the emulator. This was achieved by incorporating an interface named LibRetro (libRetro site), that allows communication between front-end programs and game-console emulators. Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an estimated total of over 7,000 games that can potentially be supported using this interface. Examples of supported game consoles include Nintendo Entertainment System, Game Boy, N64, Sega Genesis,
# 1http://stella.sourceforge.net/
Saturn, Dreamcast and Sony PlayStation. We chose to focus on the SNES game console, implemented using the snes9x2, as its games present interesting, yet plausible to overcome, challenges. Additionally, we utilized the Genesis-Plus-GX3 emulator, which supports several Sega consoles: Genesis/Mega Drive, Master System, Game Gear and SG-1000.
3.3 SOURCE CODE | 1611.02205#12 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 12 | 2.1 MONTE CARLO POLICY GRADIENT METHODS
Monte Carlo policy gradient methods apply direct gradient-based optimization to the reinforcement learning objective. This involves directly differentiating the J(θ) objective with respect to the policy
parameters θ. The standard form, known as the REINFORCE algorithm (Williams, 1992), is shown below:
∇_θ J(θ) = E_π [ Σ_{t=0}^∞ γ^t ∇_θ log π_θ(a_t|s_t) R_t ] = E_π [ Σ_{t=0}^∞ γ^t ∇_θ log π_θ(a_t|s_t) (R_t − b(s_t)) ], (1)
where b(s_t) is known as the baseline. For convenience of later derivations, Eq. 1 can also be written as below, where ρ_π(s) = Σ_{t=0}^∞ γ^t p(s_t = s) is the unnormalized discounted state visitation frequency,
∇_θ J(θ) = E_{s_t∼ρ_π(·), a_t∼π(·|s_t)} [ ∇_θ log π_θ(a_t|s_t) (R_t − b(s_t)) ] . (2)
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
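Eqs. 1–2 in the chunk above give the REINFORCE estimator with a baseline b(s_t). A compact PyTorch sketch of the corresponding surrogate loss for one trajectory is shown below; `log_probs`, `returns`, and `baselines` are assumed to be precomputed tensors of equal length:

```python
import torch

def reinforce_loss(log_probs, returns, baselines):
    """Negative surrogate whose gradient matches Eq. 1 for the sampled trajectory.

    Only log pi_theta(a_t | s_t) is differentiated; the centred return
    (R_t - b(s_t)) is detached and treated as a constant weight.
    """
    weights = (returns - baselines).detach()
    return -(log_probs * weights).sum()
```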
1611.01989 | 13 | Ranking. While we focus on the search problem in this work, we briefly mention the ranking problem here. A popular choice for ranking is to choose the shortest program consistent with input-output examples (Gulwani, 2016). A more sophisticated approach is employed by FlashFill (Singh & Gulwani, 2015). It works in a manner similar to max-margin structured prediction, where known ground truth programs are given, and the learning task is to assign scores to programs such that the ground truth programs score higher than other programs that satisfy the input-output specification. | 1611.01989#13 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
We consider binary attributes indicating the presence or absence of high-level functions in the target program. To make this effective, the chosen DSL needs to contain constructs that are not so low- level that they all appear in the vast majority of programs, but at the same time should be common enough so that predicting their occurrence from input-output examples can be learned successfully.
Following this observation, our DSL is loosely inspired by query languages such as SQL or LINQ, where high-level functions are used in sequence to manipulate data. A program in our DSL is a sequence of function calls, where the result of each call initializes a fresh variable that is either a singleton integer or an integer array. Functions can be applied to any of the inputs or previously computed (intermediate) variables. The output of the program is the return value of the last function call, i.e., the last variable. See Fig. 1 for an example program of length T = 4 in our DSL. | 1611.01989#13 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 13 | Figure 1: An illustration of the computation graph for an unrolled GAN with 3 unrolling steps. The generator update in Equation 10 involves backpropagating the generator gradient (blue arrows) through the unrolled optimization. Each step k in the unrolled optimization uses the gradients of f_k with respect to θ_D^k, as described in Equation 7 and indicated by the green arrows. The discriminator update in Equation 11 does not depend on the unrolled optimization (red arrow).
gradient descent is given as Algorithm 2 in (Maclaurin et al., 2015), though in practice the use of an automatic differentiation package means this step does not need to be programmed explicitly. A pictorial representation of these updates is provided in Figure 1.
It is important to distinguish this from an approach suggested in (Goodfellow et al., 2014), that several update steps of the discriminator parameters should be run before each single update step for the generator. In that approach, the update steps for both models are still gradient descent (ascent) with respect to fixed values of the other model parameters, rather than the surrogate loss we describe in Eq. 9. Performing K steps of discriminator update between each single step of generator update corresponds to updating the generator parameters θG using only the first term in Eq. 12 below.
2.4 THE MISSING GRADIENT TERM | 1611.02163#13 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 13 | Eq. 2 is an unbiased gradient of the RL objective. However, in practice, most policy gradient methods effectively use undiscounted state visitation frequencies, i.e. γ = 1 in the equation for ρ_π, and are therefore biased; in fact, making them unbiased often hurts performance (Thomas, 2014). In this paper, we mainly discuss bias due to function approximation, off-policy learning, and value back-ups.
The gradient is estimated using Monte Carlo samples in practice and has very high variance. A proper choice of baseline is necessary to reduce the variance sufficiently such that learning becomes feasible. A common choice is to estimate the value function of the state V_π(s_t) to use as the baseline, which provides an estimate of the advantage function A_π(s_t, a_t), which is a centered action-value function Q_π(s_t, a_t), as defined below:
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.02163 | 14 | 2.4 THE MISSING GRADIENT TERM
To better understand the behavior of the surrogate loss fK (θG, θD), we examine its gradient with respect to the generator parameters θG,
dfK (θG, θD) / dθG = ∂f (θG, θ_D^K (θG, θD)) / ∂θG + [ ∂f (θG, θ_D^K (θG, θD)) / ∂θ_D^K (θG, θD) ] · [ dθ_D^K (θG, θD) / dθG ] (12)
Standard GAN training corresponds exactly to updating the generator parameters using only the first term in this gradient, with θ_D^K (θG, θD) being the parameters resulting from the discriminator update step. An optimal generator for any fixed discriminator is a delta function at the x to which the discriminator assigns highest data probability. Therefore, in standard GAN training, each generator update step is a partial collapse towards a delta function.
The second term captures how the discriminator would react to a change in the generator. It reduces the tendency of the generator to engage in mode collapse. For instance, the second term reflects that as the generator collapses towards a delta function, the discriminator reacts and assigns lower probability to that state, increasing the generator loss. It therefore discourages the generator from collapsing, and may improve stability.
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 14 | # 3.4 RLE INTERFACE
RLE provides a unified interface to all games in its supported consoles, acting as an RL-wrapper to the LibRetro interface. Initialization of the environment is done by providing a game (ROM file) and a gaming console (denoted by "core"). Upon initialization, the first state is the initial frame of the game, skipping all menu selection screens. The cores are provided with the RLE and installed together with the environment. Actions have a bit-wise representation where each controller button is represented by a one-hot vector. Therefore a combination of several buttons is possible using the bit-wise OR operator. The number of valid button combinations is larger than 700, therefore only the meaningful combinations are provided. The environment's observation is the game screen, provided as a 3D array of 32 bits per pixel with dimensions which vary depending on the game. The reward can be defined differently per game; usually we set it to be the score difference between two consecutive frames. By setting different configurations to the environment, it is possible to alter in-game properties such as difficulty (i.e. easy, medium, hard), its characters, levels, etc.
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
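The RLE interface described in the chunk above represents each controller button as a one-hot bit and composes multi-button actions with bitwise OR. The tiny sketch below mimics that encoding with an illustrative SNES-style button layout; the constants are made up and need not match RLE's actual values:

```python
# Illustrative button bits only; the real RLE constants may differ.
BUTTONS = {name: 1 << i for i, name in enumerate(
    ["B", "Y", "SELECT", "START", "UP", "DOWN", "LEFT", "RIGHT", "A", "X", "L", "R"])}

def combine(*names):
    """Compose a multi-button action with bitwise OR, as the RLE action space does."""
    action = 0
    for n in names:
        action |= BUTTONS[n]
    return action

jump_right = combine("RIGHT", "B")
print(bin(jump_right))
```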
1611.02247 | 14 | V_π(s_t) = E_π[R_t] = E_{π(a_t|s_t)}[Q_π(s_t, a_t)], Q_π(s_t, a_t) = r(s_t, a_t) + γ E_π[R_{t+1}] = r(s_t, a_t) + γ E_{p(s_{t+1}|s_t, a_t)}[V_π(s_{t+1})], A_π(s_t, a_t) = Q_π(s_t, a_t) − V_π(s_t). (3) Q_π(s_t, a_t) summarizes the performance of each action from a given state, assuming it follows π thereafter, and A_π(s_t, a_t) provides a measure of how each action compares to the average performance at the state s_t, which is given by V_π(s_t). Using A_π(s_t, a_t) centers the learning signal and reduces variance significantly.
Besides high variance, another problem with the policy gradient is that it requires on-policy samples. This makes policy gradient optimization very sample intensive. To achieve similar sample efficiency as off-policy methods, we can attempt to include off-policy data. Prior attempts use importance sampling to include off-policy trajectories; however, these are known to be difficult to scale to high-dimensional action spaces because of rapidly degenerating importance weights (Precup, 2000). | 1611.02247#14 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
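Eq. 3 in the chunk above defines V_π, the one-step backup for Q_π, and the advantage A_π = Q_π − V_π. A small sketch of the corresponding single-sample advantage estimate, given some user-supplied value-function approximator `V`, is:

```python
def one_step_advantage(V, s, a, r, s_next, gamma=0.99, done=False):
    """A_hat(s, a) = r(s, a) + gamma * V(s') - V(s), a one-sample estimate of Eq. 3.

    The chosen action a enters only through the sampled reward r and next state s_next.
    """
    bootstrap = 0.0 if done else gamma * V(s_next)
    return r + bootstrap - V(s)
```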
1611.01989 | 15 | Figure 1: An example program in our DSL that takes a single integer array as its input.
Overall, our DSL contains the first-order functions HEAD, LAST, TAKE, DROP, ACCESS, MINIMUM, MAXIMUM, REVERSE, SORT, SUM, and the higher-order functions MAP, FILTER, COUNT, ZIPWITH, SCANL1. Higher-order functions require suitable lambda functions for their behavior to be fully specified: for MAP our DSL provides lambdas (+1), (-1), (*2), (/2), (*(-1)), (**2), (*3), (/3), (*4), (/4); for FILTER and COUNT there are predicates (>0), (<0), (%2==0), (%2==1) and for ZIPWITH and SCANL1 the DSL provides lambdas (+), (-), (*), MIN, MAX. A description of the semantics of all functions is provided in Appendix F.
Note that while the language only allows linear control flow, many of its functions do perform branching and looping internally (e.g., SORT, COUNT, ...). Examples of more sophisticated programs expressible in our DSL, which were inspired by the simplest problems appearing on programming competition websites, are shown in Appendix A.
4.2 DATA GENERATION | 1611.01989#15 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 15 | As K → ∞, θ_D^K approaches a local optimum of f, where ∂f/∂θ_D^K = 0, and therefore the second term in Eq. 12 goes to 0 (Danskin, 1967). The gradient of the unrolled surrogate loss fK (θG, θD) with respect to θG is thus identical to the gradient of the standard GAN loss f (θG, θD) both when K = 0 and when K → ∞, where we take K → ∞ to imply that in the standard GAN the discriminator is also fully optimized between each generator update. Between these two extremes, fK (θG, θD) captures additional information about the response of the discriminator to changes in the generator.
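As an illustration of what unrolling the discriminator looks like in code, the following is a minimal sketch assuming PyTorch, a toy one-layer linear generator and discriminator, and made-up hyperparameters (none of this reproduces the paper's architecture). The K inner discriminator steps are kept differentiable via create_graph=True, so the generator gradient includes the "discriminator reaction" term discussed above.

```python
# Sketch of unrolled-GAN updates with tiny linear models (assumed setup, not the paper's).
import torch
import torch.nn.functional as F

def f_loss(d_w, x_real, x_fake):
    # negative of the minimax objective f(theta_G, theta_D); D minimizes this, G maximizes it
    return -(F.logsigmoid(x_real @ d_w).mean() + F.logsigmoid(-(x_fake @ d_w)).mean())

def unroll_discriminator(d_w, x_real, x_fake, k=5, eta=1e-2):
    # K differentiable gradient steps on the discriminator; create_graph keeps the
    # dependence of the unrolled parameters on the generator's samples.
    for _ in range(k):
        g, = torch.autograd.grad(f_loss(d_w, x_real, x_fake), d_w, create_graph=True)
        d_w = d_w - eta * g
    return d_w

torch.manual_seed(0)
g_w = torch.randn(2, 2, requires_grad=True)   # generator: x = z @ g_w
d_w = torch.randn(2, 1, requires_grad=True)   # discriminator logit: x @ d_w
g_opt, d_opt = torch.optim.Adam([g_w], lr=1e-3), torch.optim.Adam([d_w], lr=1e-3)

for step in range(200):
    x_real = torch.randn(64, 2) + torch.tensor([2.0, 0.0])
    x_fake = torch.randn(64, 2) @ g_w
    # Generator update against the K-step unrolled discriminator (the surrogate loss f_K).
    d_unrolled = unroll_discriminator(d_w.clone(), x_real, x_fake)
    g_loss = -f_loss(d_unrolled, x_real, x_fake)   # maximize D's loss, i.e. minimize f
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    # Discriminator update with the ordinary (not unrolled) objective.
    d_loss = f_loss(d_w, x_real, (torch.randn(64, 2) @ g_w).detach())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```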
4
Published as a conference paper at ICLR 2017
2.5 CONSEQUENCES OF THE SURROGATE LOSS
GANs can be thought of as a game between the discriminator (D) and the generator (G). The agents take turns taking actions and updating their parameters until a Nash equilibrium is reached. The optimal action for D is to evaluate the probability ratio pdata(x) / (pG(x) + pdata(x)) for the generator's move x (Eq. 5). The optimal generator action is to move its mass to maximize this ratio. | 1611.02163#15 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 15 | 2.2 POLICY GRADIENT WITH FUNCTION APPROXIMATION
Policy gradient methods with function approximation (Sutton et al., 1999), or actor-critic methods, include a policy evaluation step, which often uses temporal difference (TD) learning to fit a critic Q_w for the current policy π(θ), and a policy improvement step which greedily optimizes the policy π against the critic estimate Q_w. Significant gains in sample efficiency may be achievable using off-policy TD learning for the critic, as in Q-learning and deterministic policy gradient (Sutton, 1990; Silver et al., 2014), typically by means of experience replay for training deep Q networks (Mnih et al., 2015; Lillicrap et al., 2016; Gu et al., 2016b).
One particularly relevant example of such a method is the deep deterministic policy gradient (DDPG) (Silver et al., 2014; Lillicrap et al., 2016). The updates for this method are given below, where π_θ(a_t|s_t) = δ(a_t = μ_θ(s_t)) is a deterministic policy, β is an arbitrary exploration distribution, and ρ_β corresponds to sampling from a replay buffer. Q(·,·) is the target network that slowly tracks Q_w (Lillicrap et al., 2016). | 1611.02247#15 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 16 | 4.2 DATA GENERATION
To generate a dataset, we enumerate programs in the DSL, heuristically pruning away those with easily detectable issues such as a redundant variable whose value does not affect the program output, or, more generally, existence of a shorter equivalent program (equivalence can be overapproximated by identical behavior on randomly or carefully chosen inputs). To generate valid inputs for a program, we enforce a constraint on the output value bounding integers to some predetermined range, and then propagate these constraints backward through the program to obtain a range of valid values for each input. If one of these ranges is empty, we discard the program. Otherwise, input-output pairs can be generated by picking inputs from the pre-computed valid ranges and executing the program to obtain the output values. The binary attribute vectors are easily computed from the program source codes.
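A minimal sketch of the last step described above: sample inputs from an assumed valid range, execute a known program to obtain labeled input-output examples, and read off the binary attribute vector from the source. The program, DSL vocabulary, and bounds below are illustrative stand-ins, not the paper's implementation.

```python
# Sketch: generate input-output examples for a known program plus its attribute vector.
import random

DSL_FUNCTIONS = ["HEAD", "LAST", "TAKE", "DROP", "SORT", "REVERSE", "SUM", "MAP", "FILTER", "ZIPWITH"]

def program(xs):                                        # stands in for one enumerated DSL program
    return [v * 4 for v in sorted(xs, reverse=True)]    # roughly SORT, REVERSE, MAP (*4)

source_tokens = {"SORT", "REVERSE", "MAP"}              # functions appearing in the source

def make_examples(num_examples=5, max_len=8, bound=64):
    examples = []
    for _ in range(num_examples):
        # pick inputs from a pre-computed valid range so outputs stay inside [-256, 256]
        xs = [random.randint(-bound, bound) for _ in range(random.randint(1, max_len))]
        examples.append((xs, program(xs)))
    return examples

attributes = [1 if f in source_tokens else 0 for f in DSL_FUNCTIONS]   # binary attribute vector
print(make_examples(), attributes)
```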
4
Published as a conference paper at ICLR 2017
4.3 MACHINE LEARNING MODEL | 1611.01989#16 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 16 | The initial move for G will be to move as much mass as its parametric family and update step permits to the single point that maximizes the ratio of probability densities. The action D will then take is quite simple. It will track that point, and to the extent allowed by its own parametric family and update step assign low data probability to it, and uniform probability everywhere else. This cycle of G moving and D following will repeat forever or converge depending on the rate of change of the two agents. This is similar to the situation in simple matrix games like rock-paper-scissors and matching pennies, where alternating gradient descent (ascent) with a ï¬xed learning rate is known not to converge (Singh et al., 2000; Bowling & Veloso, 2002).
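The non-convergence of alternating gradient play on such matrix games is easy to reproduce. Below is a small plain-Python sketch for matching pennies with each player's mixed strategy parameterized by a single probability; the payoff function and step size are assumed for illustration, and the iterates cycle around the 0.5/0.5 equilibrium rather than settling on it.

```python
# Sketch: alternating gradient ascent/descent on matching pennies does not converge.
# Player 1 plays heads with prob p, player 2 with prob q; payoff to player 1 is
# V(p, q) = (2p - 1) * (2q - 1). Player 1 ascends in p, player 2 descends in q.
lr = 0.2
p, q = 0.7, 0.3
for step in range(10):
    dV_dp = 2 * (2 * q - 1)                   # gradient of V w.r.t. p
    p = min(1.0, max(0.0, p + lr * dV_dp))    # player 1: gradient ascent
    dV_dq = 2 * (2 * p - 1)                   # gradient of V w.r.t. q (at the updated p)
    q = min(1.0, max(0.0, q - lr * dV_dq))    # player 2: gradient descent
    print(step, round(p, 3), round(q, 3))     # the pair orbits instead of reaching (0.5, 0.5)
```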
In the unrolled case, however, this undesirable behavior no longer occurs. Now G's actions take into account how D will respond. In particular, G will try to make steps that D will have a hard time responding to. This extra information helps the generator spread its mass to make the next D step less effective instead of collapsing to a point. | 1611.02163#16 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 16 | 3.5 ENVIRONMENT CHALLENGES
Integrating SNES and Genesis with RLE presents new challenges to the field of RL where visual information in the form of an image is the only state available to the agent. Obviously, SNES games are significantly more complex and unpredictable than Atari games. For example in sports games, such as NBA, while the player (agent) controls a single player, all the other nine players' behavior is determined by pre-programmed agents, each exhibiting random behavior. In addition, many SNES games exhibit delayed rewards in the course of their play (i.e., reward for an action is given many time steps after it was performed). Similarly, in some of the SNES games, an agent can obtain a reward that is indirectly related to the imposed task. For example, in platform games, such as Super Mario, reward is received for collecting coins and defeating enemies, while the goal of the challenge is to reach the end of the level, which requires the player to keep moving to the right. Moreover, upon completing a level, a score bonus is given according to the time required for its completion. Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too much time. Analysis of such games is presented in section 4.2. Moreover, unlike Atari that consists of | 1611.02205#16 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 16 | w = argmin_w E_{s_t∼ρ_β(·), a_t∼β(·|s_t)}[(r(s_t,a_t) + γQ(s_{t+1}, μ_θ(s_{t+1})) − Q_w(s_t,a_t))²]   (4)   θ = argmax_θ E_{s_t∼ρ_β(·)}[Q_w(s_t, μ_θ(s_t))]
When the critic and policy are parametrized with neural networks, full optimization is expensive, and instead stochastic gradient optimization is used. The gradient in the policy improvement phase is given below, which is generally a biased gradient of J(θ).
∇_θ J(θ) ≈ E_{s_t∼ρ_β(·)}[∇_a Q_w(s_t,a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t)]   (5)
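For concreteness, here is a compact sketch of the two updates in Eqs. 4-5, assuming PyTorch, small MLPs, and a stand-in replay minibatch of (s, a, r, s') tensors; exploration noise and most engineering details are omitted, and a single slowly tracking copy stands in for the target network Q.

```python
# Sketch of the DDPG critic regression (Eq. 4) and deterministic policy gradient (Eq. 5).
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 8, 2, 0.99, 0.005
q = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))          # Q_w
mu = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())  # mu_theta
q_targ = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q_targ.load_state_dict(q.state_dict())
q_opt = torch.optim.Adam(q.parameters(), lr=1e-3)
mu_opt = torch.optim.Adam(mu.parameters(), lr=1e-4)

# fake replay minibatch standing in for (s_t, a_t, r_t, s_{t+1}) sampled from rho_beta
s, a = torch.randn(32, obs_dim), torch.rand(32, act_dim) * 2 - 1
r, s2 = torch.randn(32, 1), torch.randn(32, obs_dim)

# Critic step (Eq. 4): regress Q_w(s, a) toward r + gamma * Q_targ(s', mu(s'))
with torch.no_grad():
    y = r + gamma * q_targ(torch.cat([s2, mu(s2)], dim=-1))
q_loss = ((q(torch.cat([s, a], dim=-1)) - y) ** 2).mean()
q_opt.zero_grad(); q_loss.backward(); q_opt.step()

# Actor step (Eq. 5): ascend E[Q_w(s, mu_theta(s))]; only the policy parameters are updated here
mu_loss = -q(torch.cat([s, mu(s)], dim=-1)).mean()
mu_opt.zero_grad(); mu_loss.backward(); mu_opt.step()

# Slowly track the target critic
with torch.no_grad():
    for p_t, p in zip(q_targ.parameters(), q.parameters()):
        p_t.mul_(1 - tau).add_(tau * p)
```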
Published as a conference paper at ICLR 2017
The crucial benefits of DDPG are that it does not rely on high variance REINFORCE gradients and is trainable on off-policy data. These properties make DDPG and other analogous off-policy methods significantly more sample-efficient than policy gradient methods (Lillicrap et al., 2016; Gu et al., 2016b; Duan et al., 2016). However, the use of a biased policy gradient estimator makes analyzing its convergence and stability properties difficult.
# 3 Q-PROP | 1611.02247#16 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 17 | 4
Published as a conference paper at ICLR 2017
4.3 MACHINE LEARNING MODEL
Observe how the input-output data in Fig. 1 is informative of the functions appearing in the program: the values in the output are all negative, divisible by 4, they are sorted in decreasing order, and they happen to be multiples of numbers appearing in the input. Our aim is to learn to recognize such patterns in the input-output examples, and to leverage them to predict the presence or absence of individual functions. We employ neural networks to model and learn the mapping from input-output examples to attributes. We can think of these networks as consisting of two parts:
1. an encoder: a differentiable mapping from a set of M input-output examples generated by a single program to a latent real-valued vector, and
2. a decoder: a differentiable mapping from the latent vector representing a set of M input-output examples to predictions of the ground truth program's attributes. | 1611.01989#17 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
In principle, a surrogate loss function could be used for both D and G. In the case of 1-step unrolled optimization this is known to lead to convergence for games in which gradient descent (ascent) fails (Zhang & Lesser, 2010). However, the motivation for using the surrogate generator loss in Section 2.2, of unrolling the inner of two nested min and max functions, does not apply to using a surrogate discriminator loss. Additionally, it is more common for the discriminator to overpower the generator than vice-versa when training a GAN. Giving more information to G by allowing it to "see into the future" may thus help the two models be more balanced.
# 3 EXPERIMENTS | 1611.02163#17 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 17 | # 2http://www.snes9x.com/ 3https://github.com/ekeeke/Genesis-Plus-GX 4https://github.com/nadavbh12/Retro-Learning-Environment
4
eight directions and one action button, SNES has an eight-direction pad and six action buttons. Since combinations of buttons are allowed, and required at times, the actual action space may be larger than 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNES is very rich, filled with details which may move locally or across the screen, effectively acting as non-stationary noise since it provides little to no information regarding the state itself. Finally, we note that the SNES featured some of the first 3D games. In the game Wolfenstein, the player must navigate a maze from a first-person perspective, while dodging and attacking enemies. The SNES offers plenty of other 3D games such as flight and racing games which exhibit similar challenges. These games are much more realistic, thus inferring from SNES games to "real world" tasks, as in the case of self-driving cars, might be more beneficial. A visual comparison of two games, Atari and SNES, is presented in Figure (1).
| 1611.02205#17 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 17 | # 3 Q-PROP
In this section, we derive the Q-Prop estimator for policy gradient. The key idea from this estimator comes from observing Equations 2 and 5 and noting that the former provides an almost unbiased (see Section 2.1), but high variance gradient, while the latter provides a deterministic, but biased gradient. By using the deterministic biased estimator as a particular form of control variate (Ross, 2006; Paisley et al., 2012) for the Monte Carlo policy gradient estimator, we can effectively use both types of gradient information to construct a new estimator that in practice exhibits improved sample efficiency through the inclusion of off-policy samples while preserving the stability of on-policy Monte Carlo policy gradient.
3.1 Q-PROP ESTIMATOR | 1611.02247#17 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 18 | 2. a decoder: a differentiable mapping from the latent vector representing a set of M input- output examples to predictions of the ground truth programâs attributes.
For the encoder we use a simple feed-forward architecture. First, we represent the input and output types (singleton or array) by a one-hot-encoding, and we pad the inputs and outputs to a maximum length L with a special NULL value. Second, each integer in the inputs and in the output is mapped to a learned embedding vector of size E = 20. (The range of integers is restricted to a ï¬nite range and each embedding is parametrized individually.) Third, for each input-output example separately, we concatenate the embeddings of the input types, the inputs, the output type, and the output into a single (ï¬xed-length) vector, and pass this vector through H = 3 hidden layers containing K = 256 sigmoid units each. The third hidden layer thus provides an encoding of each individual input-output example. Finally, for input-output examples in a set generated from the same program, we pool these representations together by simple arithmetic averaging. See Appendix C for more details. | 1611.01989#18 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 18 | # 3 EXPERIMENTS
In this section we demonstrate improved mode coverage and stability by applying this technique to ï¬ve datasets of increasing complexity. Evaluation of generative models is a notoriously hard problem (Theis et al., 2016). As such the de facto standard in GAN literature has become sample quality as evaluated by a human and/or evaluated by a heuristic (Inception score for example, (Salimans et al., 2016)). While these evaluation metrics do a reasonable job capturing sample quality, they fail to capture sample diversity. In our ï¬rst 2 experiments diversity is easily evaluated via visual inspection. In our later experiments this is not the case, and we will use a variety of methods to quantify coverage of samples. Our measures are individually strongly suggestive of unrolling reducing mode-collapse and improving stability, but none of them alone are conclusive. We believe that taken together however, they provide extremely compelling evidence for the advantages of unrolling.
When doing stochastic optimization, we must choose which minibatches to use in the unrolling updates in Eq. 7. We experimented with both a ï¬xed minibatch and re-sampled minibatches for each unrolling step, and found it did not signiï¬cantly impact the result. We use ï¬xed minibatches for all experiments in this section. | 1611.02163#18 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 18 | pT q
Figure 1: Atari 2600 and SNES game screen comparison: Left: "Boxing" an Atari 2600 fighting game, Right: "Mortal Kombat" a SNES fighting game. Note the exceptional difference in the amount of details between the two games. Therefore, distinguishing a relevant signal from noise is much more difficult.
Table 2: Comparison between RLE and the latest RL environments
Characteristics: RLE; OpenAI Universe; Infinite Mario; ALE; Project Malmo; DeepMind Lab
Number of Games: 8 out of 7000+; 1000+; 1; 74; 1; 4
In-game adjustments1: Yes; No; No; No; Yes; Yes
Frame rate: 530fps2 (SNES); 60fps; 5675fps2; 120fps; <7000fps; <1000fps
Observation (Input): screen, RAM; Screen; hand-crafted features; screen, RAM; hand-crafted features; screen + depth and velocity
1 Allowing changes in the game configurations (e.g., changing difficulty, characters, etc.)
2 Measured on an i7-5930k CPU
4 EXPERIMENTS
4.1 EVALUATION METHODOLOGY | 1611.02205#18 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 18 | 3.1 Q-PROP ESTIMATOR
To derive the Q-Prop gradient estimator, we start by using the first-order Taylor expansion of an arbitrary function f(s_t,a_t), f̄(s_t,a_t) = f(s_t,ā_t) + ∇_a f(s_t,a)|_{a=ā_t}(a_t − ā_t), as the control variate for the policy gradient estimator. We use Q̂(s_t,a_t) = Σ_{t'=t}^∞ γ^{t'−t} r(s_{t'},a_{t'}) to denote the Monte Carlo return from state s_t and action a_t, i.e. E_π[Q̂(s_t,a_t)] = r(s_t,a_t) + γE_{ρ_π}[V_π(s_{t+1})], and μ_θ(s_t) = E_{π_θ(a_t|s_t)}[a_t] to denote the expected action of a stochastic policy π_θ. Full derivation is in Appendix A. | 1611.02247#18 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 19 | The advantage of this encoder lies in its simplicity, and we found it reasonably easy to train. A disadvantage is that it requires an upper bound L on the length of arrays appearing in the input and output. We confirmed that the chosen encoder architecture is sensible in that it performs empirically at least as well as an RNN encoder, a natural baseline, which may however be more difficult to train.
DeepCoder learns to predict presence or absence of individual functions of the DSL. We shall see this can already be exploited by various search techniques to large computational gains. We use a decoder that pre-multiplies the encoding of input-output examples by a learned C × K matrix, where C = 34 is the number of functions in our DSL (higher-order functions and lambdas are predicted independently), and treats the resulting C numbers as log-unnormalized probabilities (logits) of each function appearing in the source code. Fig. 2 shows the predictions a trained neural network made from 5 input-output examples for the program shown in Fig. 1.
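The following is a minimal sketch of the encoder/decoder shape described above, assuming PyTorch; it uses the stated E = 20, H = 3 hidden layers of K = 256 sigmoid units, and C = 34 attribute logits, but the padding scheme, integer vocabulary, and type one-hot handling are simplified stand-ins for the paper's exact preprocessing.

```python
# Sketch of the DeepCoder-style encoder (average-pooled over M examples) and attribute decoder.
import torch
import torch.nn as nn

E, K, C, L, VOCAB = 20, 256, 34, 20, 513   # embedding size, hidden units, attributes, max length, int range
embed = nn.Embedding(VOCAB, E)             # learned embedding per integer (including a NULL index)
encoder = nn.Sequential(
    nn.Linear(2 * L * E + 4, K), nn.Sigmoid(),   # concatenated input/output embeddings + type one-hots
    nn.Linear(K, K), nn.Sigmoid(),
    nn.Linear(K, K), nn.Sigmoid(),
)
decoder = nn.Linear(K, C)                  # C logits: presence of each DSL function

def predict_attributes(examples):
    # examples: list of (input_ids, output_ids, type_onehot) for one program, padded to length L
    feats = []
    for inp, out, types in examples:
        x = torch.cat([embed(inp).flatten(), embed(out).flatten(), types])
        feats.append(encoder(x))
    pooled = torch.stack(feats).mean(dim=0)        # average-pool over the M examples
    return torch.sigmoid(decoder(pooled))          # per-function probabilities

# one fake set of M = 5 padded examples
ex = [(torch.randint(0, VOCAB, (L,)), torch.randint(0, VOCAB, (L,)), torch.rand(4)) for _ in range(5)]
print(predict_attributes(ex))
```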
| 1611.01989#19 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 19 | We provide a reference implementation of this technique at github.com/poolio/unrolled gan.
3.1 MIXTURE OF GAUSSIANS DATASET
To illustrate the impact of discriminator unrolling, we train a simple GAN architecture on a 2D mixture of 8 Gaussians arranged in a circle. For a detailed list of architecture and hyperparameters see Appendix A. Figure 2 shows the dynamics of this model through time. Without unrolling the generator rotates around the valid modes of the data distribution but is never able to spread out mass. When adding in unrolling steps G quickly learns to spread probability mass and the system converges to the data distribution.
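A small NumPy sketch of the toy target distribution follows; the ring radius and the per-mode standard deviation are assumed values for illustration, not the settings listed in the paper's Appendix A.

```python
# Sketch of the toy target: a mixture of 8 Gaussians arranged on a circle.
import numpy as np

def sample_ring_of_gaussians(n, radius=2.0, std=0.02, modes=8, seed=0):
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * rng.integers(0, modes, size=n) / modes
    centers = np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((n, 2))

print(sample_ring_of_gaussians(5))
```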
In Appendix B we perform further experiments on this toy dataset. We explore how unrolling compares to historical averaging, and compares to using the unrolled discriminator to update the
5
Published as a conference paper at ICLR 2017
[Figure 2 panels: Step 0, Step 5k, Step 10k, Step 15k, Step 20k, Step 25k, Target] | 1611.02163#19 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 19 | 2 Measured on an i7-5930k CPU
4 EXPERIMENTS
4.1 EVALUATION METHODOLOGY
The evaluation methodology that we used for benchmarking the different algorithms is the popular method proposed by (Mnih et al., 2015). Each examined algorithm is trained until either it reached convergence or 100 epochs (each epoch corresponds to 50,000 actions), thereafter it is evaluated by performing 30 episodes of every game. Each episode ends either by reaching a terminal state or after 5 minutes. The results are averaged per game and compared to the average result of a human player. For each game the human player was given two hours for training, and his performances were evaluated over 20 episodes. As the various algorithms donât use the game audio in the learning process, the audio was muted for both the agent and the human. From both, humans and agents
5 | 1611.02205#19 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 19 | ∇_θ J(θ) = E_{ρ_π,π}[∇_θ log π_θ(a_t|s_t)(Q̂(s_t,a_t) − f̄(s_t,a_t))] + E_{ρ_π,π}[∇_θ log π_θ(a_t|s_t) f̄(s_t,a_t)] = E_{ρ_π,π}[∇_θ log π_θ(a_t|s_t)(Q̂(s_t,a_t) − f̄(s_t,a_t))] + E_{ρ_π}[∇_a f(s_t,a)|_{a=ā_t} ∇_θ μ_θ(s_t)]   (6)
Eq. 6 is general for an arbitrary function f(s_t,a_t) that is differentiable with respect to a_t at an arbitrary value of ā_t; however, a sensible choice is to use the critic Q_w for f and μ_θ(s_t) for ā_t to get,
∇_θ J(θ) = E_{ρ_π,π}[∇_θ log π_θ(a_t|s_t)(Q̂(s_t,a_t) − Q̄_w(s_t,a_t))] + E_{ρ_π}[∇_a Q_w(s_t,a)|_{a=μ_θ(s_t)} ∇_θ μ_θ(s_t)].   (7)
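As one way to picture Eq. 7 in code, here is a sketch of a surrogate loss whose policy gradient matches it, assuming PyTorch, a Gaussian policy with a state-dependent mean, and stand-in batch tensors; the critic is treated as already trained off-policy, and the conservative/aggressive variants and the advantage centering of Eq. 8 are omitted.

```python
# Sketch of the Q-Prop gradient of Eq. 7 expressed as a surrogate loss (illustrative only).
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2
mu_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))       # mu_theta(s)
log_std = torch.zeros(act_dim, requires_grad=True)
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))   # Q_w (off-policy)
pi_opt = torch.optim.Adam(list(mu_net.parameters()) + [log_std], lr=3e-4)

def q_w(s, a):
    return critic(torch.cat([s, a], dim=-1))

# stand-in on-policy batch: states, sampled actions, Monte Carlo returns Q_hat
s = torch.randn(64, obs_dim)
a = torch.randn(64, act_dim)
q_hat = torch.randn(64, 1)

# first-order Taylor expansion of Q_w around a_bar = mu_theta(s); detached, since it only scales log-probs
a_bar = mu_net(s).detach().requires_grad_(True)
grad_a = torch.autograd.grad(q_w(s, a_bar).sum(), a_bar)[0]
q_taylor = q_w(s, a_bar).detach() + ((a - a_bar.detach()) * grad_a).sum(-1, keepdim=True)

# surrogate whose gradient matches Eq. 7: REINFORCE residual term + analytic critic term
dist = torch.distributions.Normal(mu_net(s), log_std.exp())
logp = dist.log_prob(a).sum(-1, keepdim=True)
residual_term = (logp * (q_hat - q_taylor).detach()).mean()
analytic_term = q_w(s, mu_net(s)).mean()        # gradient flows through mu_theta(s); Q_w is not updated
loss = -(residual_term + analytic_term)
pi_opt.zero_grad(); loss.backward(); pi_opt.step()
```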
Finally, since in practice we estimate advantages Â(s_t,a_t), we write the Q-Prop estimator in terms of advantages to complete the basic derivation, | 1611.02247#19 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 20 | Figure 2: Neural network predicts the probability of each function appearing in the source code.
4.4 SEARCH
One of the central ideas of this work is to use a neural network to guide the search for a program consistent with a set of input-output examples instead of directly predicting the entire source code. This section briefly describes the search techniques and how they integrate the predicted attributes.
Depth-first search (DFS). We use an optimized version of DFS to search over programs with a given maximum length T (see Appendix D for details). When the search procedure extends a partial program by a new function, it has to try the functions in the DSL in some order. At this point DFS can opt to consider the functions as ordered by their predicted probabilities from the neural network.
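A toy sketch of this idea in plain Python follows; the DSL, partial-program representation, and consistency check are hypothetical stand-ins. The enumerator tries functions in order of predicted probability, and the wrapper around it implements the "Sort and add" scheme described next: it grows the active set whenever the restricted search fails.

```python
# Toy sketch of probability-guided DFS plus a "sort and add" wrapper (illustrative, not the paper's code).
def guided_dfs(partial, depth, active_fns, is_solution, extend):
    # `extend(partial, fn)` returns the new partial program; `is_solution` checks the I/O examples.
    if is_solution(partial):
        return partial
    if depth == 0:
        return None
    for fn in active_fns:                      # active_fns is already sorted by predicted probability
        found = guided_dfs(extend(partial, fn), depth - 1, active_fns, is_solution, extend)
        if found is not None:
            return found
    return None

def sort_and_add(probs, max_depth, is_solution, extend, start_size=3):
    ranked = sorted(probs, key=probs.get, reverse=True)    # DSL functions, most probable first
    size = start_size
    while size <= len(ranked):
        found = guided_dfs([], max_depth, ranked[:size], is_solution, extend)
        if found is not None:
            return found
        size += 1                              # search failed: add the next most probable function
    return None

# hypothetical usage: find a "program" whose source is exactly [SORT, MAP4]
probs = {"SORT": 0.9, "MAP4": 0.8, "REVERSE": 0.4, "SUM": 0.1, "HEAD": 0.05}
extend = lambda p, fn: p + [fn]
is_solution = lambda p: p == ["SORT", "MAP4"]
print(sort_and_add(probs, max_depth=2, is_solution=is_solution, extend=extend))
```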
"Sort and add" enumeration. A stronger way of utilizing the predicted probabilities of functions in an enumerative search procedure is to use a Sort and add scheme, which maintains a set of active functions and performs DFS with the active function set only. Whenever the search fails, the next
5
Published as a conference paper at ICLR 2017
most probable function (or several) are added to the active set and the search restarts with this larger active set. Note that this scheme has the deï¬ciency of potentially re-exploring some parts of the search space several times, which could be avoided by a more sophisticated search procedure. | 1611.01989#20 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 20 | [Figure 2 panels: Step 0, Step 5k, Step 10k, Step 15k, Step 20k, Step 25k, Target]
Figure 2: Unrolling the discriminator stabilizes GAN training on a toy 2D mixture of Gaussians dataset. Columns show a heatmap of the generator distribution after increasing numbers of training steps. The final column shows the data distribution. The top row shows training for a GAN with 10 unrolling steps. Its generator quickly spreads out and converges to the target distribution. The bottom row shows standard GAN training. The generator rotates through the modes of the data distribution. It never converges to a fixed distribution, and only ever assigns significant probability mass to a single data mode at once. | 1611.02163#20 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 20 | 5
score, a random agent score (an agent performing actions randomly) was subtracted to assure that learning indeed occurred. It is important to note that DQN's ε-greedy approach (select a random action with a small probability ε) is present during testing, thus assuring that the same sequence of actions isn't repeated. While the screen dimensions in SNES are larger than those of Atari, in our experiments we maintained the same pre-processing of DQN (i.e., downscaling the image to 84x84 pixels and converting to gray-scale). We argue that downscaling the image size doesn't affect a human's ability to play the game, and is therefore suitable for RL algorithms as well. To handle the large action space, we limited the algorithm's actions to the minimal button combinations which provide unique behavior. For example, on many games the R and L action buttons don't have any use, therefore their use and combinations were omitted.
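One common way to combine these ingredients into a single number is a human-normalized score with the random agent's score subtracted from both sides; the sketch below uses made-up numbers, and the exact normalization used for the reported tables may differ.

```python
# Sketch: human-normalized score with a random-agent baseline (illustrative numbers only).
def normalized_score(agent, human, random_agent):
    return 100.0 * (agent - random_agent) / (human - random_agent)

print(normalized_score(agent=1200.0, human=1500.0, random_agent=200.0))  # ~76.9% of human level
```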
4.1.1 RESULTS
A thorough comparison of the four different agents' performances on SNES games can be seen in Figure 2. The full results can be found in Table (3). Only in the game Mortal Kombat was a trained agent able to surpass an expert human player's performance, as opposed to Atari games, where the same algorithms have surpassed a human player on the vast majority of the games. | 1611.02205#20 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.01989 | 21 | Sketch. Sketch (Solar-Lezama, 2008) is a successful SMT-based program synthesis tool from the programming languages research community. While its main use case is to synthesize programs by filling in "holes" in incomplete source code so as to match specified requirements, it is flexible enough for our use case as well. The function in each step and its arguments can be treated as the "holes", and the requirement to be satisfied is consistency with the provided set of input-output examples. Sketch can utilize the neural network predictions in a Sort and add scheme as described above, as the possibilities for each function hole can be restricted to the current active set.
λ2. λ2 (Feser et al., 2015) is a program synthesis tool from the programming languages community that combines enumerative search with deduction to prune the search space. It is designed to infer small functional programs for data structure manipulation from input-output examples, by combining functions from a provided library. λ2 can be used in our framework using a Sort and add scheme as described above by choosing the library of functions according to the neural network predictions.
4.5 TRAINING LOSS FUNCTION | 1611.01989#21 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 21 | [Figure: generator samples after 20k, 50k, and 100k training steps] | 1611.02163#21 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 21 | One example is the Wolfenstein game, a 3D first-person shooter, which requires solving 3D vision tasks, navigating in a maze and detecting objects. As evident from Figure 2, all agents produce poor results, indicating a lack of the required properties. By using the ε-greedy approach the agents weren't able to explore enough states (or even other rooms in our case). The algorithm's final policy appeared as a random walk in a 3D space. Exploration based on visited states such as presented in Bellemare et al. might help addressing this issue. An interesting case is Gradius III, a side-scrolling, flight-shooter game. While the trained agent was able to master the technical aspects of the game, which include shooting incoming enemies and dodging their projectiles, its final score is still far from a human's. This is due to a hidden game mechanism in the form of "power-ups", which can be accumulated, and significantly increase the player's abilities. The more power-ups collected without being used, the larger their final impact will be. While this game mechanism is evident to a human, the agent acts myopically and uses the power-up straight away
4.2 REWARD SHAPING | 1611.02205#21 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 21 | Eq. 8 is composed of an analytic gradient through the critic as in Eq. 5 and a residual REINFORCE gradient in Eq. 2. From the above derivation, Q-Prop is simply a Monte Carlo policy gradient estimator with a special form of control variate. The important insight comes from the fact that Q_w can be trained using off-policy data as in Eq. 4. Under this setting, Q-Prop is no longer just a Monte Carlo policy gradient method, but more closely resembles an actor-critic method, where the critic can be updated off-policy but the actor is always updated on-policy with an additional REINFORCE correction term so that it remains a Monte Carlo policy gradient method regardless of the parametrization, training method, and performance of the critic. Therefore, Q-Prop can be directly combined with a number of prior techniques from both on-policy methods such as natural policy gradient (Kakade, 2001), trust-region policy optimization (TRPO) (Schulman et al., 2015) and generalized advantage estimation (GAE) (Schulman et al., 2016), and off-policy methods such as DDPG (Lillicrap et al., 2016) and Retrace(λ) (Munos et al., 2016). | 1611.02247#21 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 22 | 4.5 TRAINING LOSS FUNCTION
We use the negative cross entropy loss to train the neural network described in Sect. 4.3, so that its predictions about each function can be interpreted as marginal probabilities. The LIPS framework dictates learning q(a | E), the joint distribution of all attributes a given the input-output examples, and it is not clear a priori how much DeepCoder loses by ignoring correlations between functions. However, under the simplifying assumption that the runtime of searching for a program of length T with C functions made available to a search routine is proportional to C^T, the following result for Sort and add procedures shows that their runtime can be optimized using marginal probabilities. Lemma 1. For any fixed program length T, the expected total runtime of a Sort and add search scheme can be upper bounded by a quantity that is minimized by adding the functions in the order of decreasing true marginal probabilities. | 1611.01989#22 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
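The chunk above ties the training loss to how a Sort and add search consumes the predicted marginals. As a rough illustration (not the authors' code), the sketch below orders DSL functions by decreasing predicted marginal probability and reports the relative cost of each growing candidate set under the simplifying C^T runtime model; the function names and probabilities are invented.

```python
# Sketch: ordering DSL functions for a "Sort and add" style search by predicted
# marginal probability, under the simplifying cost model runtime ~ C**T.
# The probabilities and function names below are made up for illustration.

def sort_and_add_schedule(marginals, program_length):
    """Yield growing function sets, ordered by decreasing marginal probability,
    together with the relative enumeration cost C**T of searching each set."""
    ranked = sorted(marginals, key=marginals.get, reverse=True)
    active = []
    for fn in ranked:
        active.append(fn)
        yield list(active), len(active) ** program_length

predicted = {"MAP": 0.92, "FILTER": 0.81, "SORT": 0.35, "HEAD": 0.30, "ZIPWITH": 0.07}
for fns, cost in sort_and_add_schedule(predicted, program_length=3):
    print(f"search over {fns} -> relative enumeration cost {cost}")
```

Adding high-probability functions first keeps the cheap early rounds focused on the sets most likely to contain the true program, which is the intuition behind Lemma 1.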
1611.02205 | 22 | 4.2 REWARD SHAPING
As part of the environment and algorithm evaluation process, we investigated two case studies. The first is a game on which DQN had failed to achieve a better-than-random score, and the second is a game on which the training duration was significantly longer than that of other games.
In the first case study, we used the 2D back-view racing game F-Zero. In this game, one is required to complete four laps of the track while avoiding other race cars. The reward, as defined by the score of the game, is only received upon completing a lap. This is an extreme case of delayed reward. A lap may last as long as 30 seconds, which spans over 450 states (actions) before a reward is received. Since DQN's exploration is a simple e-greedy approach, it was not able to produce a useful strategy. We approached this issue using reward shaping, essentially a modification of the reward to be a function of the reward and the observation, rather than the reward alone. Here, we define the reward to be the sum of the score and the agent's speed (a metric displayed on the screen of the game). Indeed, when the reward was defined as such, the agents learned to finish the race in first place within a short training period. | 1611.02205#22 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
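The reward-shaping case study above replaces the sparse in-game score with the score plus an observation-derived bonus. The sketch below shows one way such a wrapper could look; the environment interface, the `info` dictionary, and the `speed` field are hypothetical stand-ins rather than RLE's actual API.

```python
# Sketch of reward shaping: augment the sparse in-game reward with a dense
# bonus computed from the observation. The environment interface below is a
# hypothetical stand-in, not the actual RLE API.

class ShapedRewardEnv:
    def __init__(self, env, bonus_fn, bonus_weight=0.01):
        self.env = env
        self.bonus_fn = bonus_fn          # e.g. reads the speed gauge or x-position
        self.bonus_weight = bonus_weight

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        shaped = reward + self.bonus_weight * self.bonus_fn(obs, info)
        return obs, shaped, done, info

def speed_bonus(obs, info):
    # Hypothetical: assume the emulator exposes the speed gauge via `info`.
    return info.get("speed", 0.0)

# Usage idea: the agent trains on ShapedRewardEnv(raw_env, speed_bonus)
# instead of raw_env, leaving the learning algorithm itself unchanged.
```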
1611.02247 | 22 | Intuitively, if the critic Q_w approximates Q_π well, it provides a reliable gradient, reduces the estimator variance, and improves the convergence rate. Interestingly, control variate analysis in the next section shows that this is not the only circumstance where Q-Prop helps reduce variance.
3.2 CONTROL VARIATE ANALYSIS AND ADAPTIVE Q-PROP
For Q-Prop to be applied reliably, it is crucial to analyze how the variance of the estimator changes before and after the application of the control variate. Following the prior work on control variates (Ross, 2006; Paisley et al., 2012), we first introduce η(s_t) into Eq. 8, a weighting variable that modulates the strength of the control variate. This additional variable η(s_t) does not introduce bias to the estimator.
\nabla_\theta J(\theta) = E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t) - \eta(s_t)\bar{A}_w(s_t,a_t))] + E_{\rho_\pi}[\eta(s_t)\,\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}\,\nabla_\theta \mu_\theta(s_t)] \quad (9)
The variance of this estimator is given below, where m = 1...M indexes the dimension of θ, | 1611.02247#22 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
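To make Eq. 9 concrete, here is a minimal NumPy sketch of the Q-Prop gradient estimate for a one-dimensional Gaussian policy with mean μ(s) = w·s. The critic, its action gradient, and the Monte Carlo advantage Â are invented stand-ins, and η is fixed to 1 for simplicity; only the structure of the estimator (a REINFORCE term on the residual Â − ηĀ plus the analytic term through the critic) follows the equation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5          # fixed policy standard deviation
w = 0.3              # policy mean parameter: mu(s) = w * s
eta = 1.0            # control-variate strength (Section 3.2 chooses this adaptively)

def critic_q(s, a):          # stand-in critic Q_w(s, a), not from the paper
    return -(a - 0.8 * s) ** 2

def critic_dqda(s, a):       # analytic dQ/da of the stand-in critic
    return -2.0 * (a - 0.8 * s)

s = rng.normal(size=256)                   # states, standing in for samples from rho_pi
mu = w * s
a = mu + sigma * rng.normal(size=256)      # on-policy actions
A_hat = critic_q(s, a) - critic_q(s, mu) + 0.1 * rng.normal(size=256)  # stand-in MC advantage
A_bar = critic_dqda(s, mu) * (a - mu)      # Taylor-expansion advantage used as control variate

dlogpi_dw = (a - mu) / sigma**2 * s        # score of the Gaussian mean parameter w
dmu_dw = s                                 # d mu(s) / d w

# Eq. 9: REINFORCE term on the residual advantage plus the analytic critic term.
grad = np.mean(dlogpi_dw * (A_hat - eta * A_bar) + eta * critic_dqda(s, mu) * dmu_dw)
print("Q-Prop gradient estimate for w:", grad)
```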
1611.01989 | 23 | Proof. Predicting source code functions from input-output examples can be seen as a multi-label classification problem, where each set of input-output examples is associated with a set of relevant labels (functions appearing in the ground truth source code). Dembczynski et al. (2010) showed that in multi-label classification under a so-called Rank loss, it is Bayes optimal to rank the labels according to their marginal probabilities. If the runtime of search with C functions is proportional to C^T, the total runtime of a Sort and add procedure can be monotonically transformed so that it is upper bounded by this Rank loss. See Appendix E for more details.
# 5 EXPERIMENTS
In this section we report results from two categories of experiments. Our main experiments (Sect. 5.1) show that the LIPS framework can lead to significant performance gains in solving IPS by demonstrating such gains with DeepCoder. In Sect. 5.2 we illustrate the robustness of the method by demonstrating a strong kind of generalization ability across programs of different lengths.
5.1 DEEPCODER COMPARED TO BASELINES | 1611.01989#23 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 23 | Figure 3: Unrolled GAN training increases stability for an RNN generator and convolutional discriminator trained on MNIST. The top row was run with 20 unrolling steps. The bottom row is a standard GAN, with 0 unrolling steps. Images are samples from the generator after the indicated number of training steps.
generator, but without backpropagating through the generator. In both cases we find that the unrolled objective performs better.
3.2 PATHOLOGICAL MODEL WITH MISMATCHED GENERATOR AND DISCRIMINATOR | 1611.02163#23 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
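The RNN-generator experiment above has the generator emit one 28-pixel image column per LSTM timestep, 28 timesteps per image. The PyTorch sketch below shows a generator with that interface; it is an illustrative stand-in, not the architecture from the paper's Appendix C, and the layer sizes and noise dimension are arbitrary.

```python
import torch
import torch.nn as nn

class ColumnLSTMGenerator(nn.Module):
    """Sketch of an RNN generator in the spirit of the experiment above: an
    LSTM emits one 28-pixel column per timestep, 28 timesteps per image.
    This is an illustrative stand-in, not the paper's exact architecture."""
    def __init__(self, noise_dim=64, hidden=256, rows=28, cols=28):
        super().__init__()
        self.rows, self.cols = rows, cols
        self.init_h = nn.Linear(noise_dim, hidden)
        self.init_c = nn.Linear(noise_dim, hidden)
        self.cell = nn.LSTMCell(rows, hidden)     # previous column feeds the next step
        self.to_column = nn.Linear(hidden, rows)

    def forward(self, z):
        h, c = torch.tanh(self.init_h(z)), torch.tanh(self.init_c(z))
        column = torch.zeros(z.size(0), self.rows, device=z.device)
        columns = []
        for _ in range(self.cols):
            h, c = self.cell(column, (h, c))
            column = torch.sigmoid(self.to_column(h))
            columns.append(column)
        return torch.stack(columns, dim=2)        # (batch, 28 rows, 28 columns)

samples = ColumnLSTMGenerator()(torch.randn(4, 64))
print(samples.shape)    # torch.Size([4, 28, 28])
```

A standard convolutional discriminator can then score these 28x28 outputs exactly as it would score image-based generator samples.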
1611.02205 | 23 | The second case study is the famous game of Super Mario. In this game the agent, Mario, is required to reach the right-hand side of the screen, while avoiding enemies and collecting coins. We found this case interesting as it involves several challenges at once: a dynamic background that can change drastically within a level, sparse and delayed rewards, and multiple tasks (such as avoiding enemies and pits, advancing rightwards and collecting coins). To our surprise, DQN was able to reach the end of the level without any reward shaping; this was possible since the agent receives rewards for events (collecting coins, stomping on enemies etc.) that tend to appear to the right of the player, causing the agent to prefer moving right. However, the training time required for convergence was significantly longer than for other games. We defined the reward as the sum of the in-game reward and a bonus granted according to the player's position, making moving right preferable. This reward
A video demonstration can be found at https://youtu.be/nUl9XLMveEU
[Figure residue: bar chart titled "RLE Benchmarks" comparing normalized scores of DQN, D-DQN and Duel-DDQN on F-Zero (speed bonus), Gradius 3, Mortal Kombat, Super Mario and Wolfenstein.] | 1611.02205#23 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 23 | The variance of this estimator is given below, where m = 1...M indexes the dimension of θ,
Var" = Ep, LVatan( Vo,, log 9 (a1|8r)(A(s;, a1) â 1 (81)A(S,a1))) | « (10)
If we choose η(s_t) such that Var* < Var, where Var = E_{ρ_π}[Σ_m Var_{a_t}(∇_{θ_m} log π_θ(a_t|s_t) Â(s_t,a_t))] is the original estimator variance measure, then we have managed to reduce the variance. Directly analyzing the above variance measure is nontrivial, for the same reason that computing the optimal baseline is difficult (Weaver & Tao, 2001). In addition, it is often impractical to get multiple action samples from the same state, which prohibits using naive Monte Carlo to estimate the expectations. Instead, we propose a surrogate variance measure, \widehat{\mathrm{Var}} = E_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t,a_t))]. A similar surrogate is also used by prior work on learning state-dependent baseline (Mnih & Gregor, 2014), and the benefit is that the measure becomes more tractable, | 1611.02247#23 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 24 | 5.1 DEEPCODER COMPARED TO BASELINES
We trained a neural network as described in Sect. 4.3 to predict used functions from input-output examples and constructed a test set of P = 500 programs, guaranteed to be semantically disjoint from all programs on which the neural network was trained (similarly to the equivalence check described in Sect. 4.2, we have ensured that all test programs behave differently from all programs used during training on at least one input). For each test program we generated M = 5 input-output examples involving integers of magnitudes up to 256, passed the examples to the trained neural network, and fed the obtained predictions to the search procedures from Sect. 4.4. We also considered an RNN-based decoder generating programs using beam search (see Sect. 5.3 for details).
To evaluate DeepCoder, we then recorded the time the search procedures needed to find a program consistent with the M input-output examples. As a baseline, we also ran all search procedures using a simple prior as function probabilities, computed from their global incidence in the program corpus.
Table 1: Search speedups on programs of length T = 3 due to using neural network predictions. | 1611.01989#24 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 24 | 3.2 PATHOLOGICAL MODEL WITH MISMATCHED GENERATOR AND DISCRIMINATOR
To evaluate the ability of this approach to improve trainability, we look to a traditionally challenging family of models to train: recurrent neural networks (RNNs). In this experiment we try to generate MNIST samples using an LSTM (Hochreiter & Schmidhuber, 1997). MNIST digits are 28x28 pixel images. At each timestep of the generator LSTM, it outputs one column of this image, so that after 28 timesteps it has output the entire sample. We use a convolutional neural network as the discriminator. See Appendix C for the full model and training details. Unlike in all previously successful GAN models, there is no symmetry between the generator and the discriminator in this task, resulting in a more complex power balance. Results can be seen in Figure 3. Once again, without unrolling the model quickly collapses, and rotates through a sequence of single modes. Instead of rotating spatially, it cycles through proto-digit like blobs. When running with unrolling steps the generator disperses and appears to cover the whole data distribution, as in the 2D example.
Published as a conference paper at ICLR 2017 | 1611.02163#24 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 24 | Figure 2: DQN, DDQN and Duel-DDQN performance. Results were normalized by subtracting a random agent's score and dividing by the human player score. Thus 100 represents a human player and zero a random agent.
proved useful, as training time required for convergence decreased significantly. The two games above can be seen in Figure (3).
Figure (4) illustrates the agent's average value function. Though both agents were able to complete the stage trained upon, the convergence rate with reward shaping is significantly quicker due to the agent's immediate realization that it should move rightwards.
Figure 3: Left: The game Super Mario with an added bonus for moving right, enabling the agent to master the game after less training time. Right: The game F-Zero. By granting a reward for speed the agent was able to master this game, as opposed to using solely the in-game reward.
[Figure residue: plot titled "Super Mario Reward Shaping Comparison"; y-axis: Averaged Action Value (Q), x-axis: Epoch; legend: Super Mario With Right Bonus, Super Mario Without Right Bonus.]
Figure 4: Averaged action-value (Q) for Super Mario trained with reward bonus for moving right (blue) and without (red).
4.3 MULTI-AGENT REINFORCEMENT LEARNING | 1611.02205#24 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
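The Figure 2 caption above describes score normalization so that a random agent maps to 0 and a human player to 100. One reading of that normalization, written so the human player lands exactly at 100, is the helper below; the example numbers are made up.

```python
def normalized_score(agent, random_agent, human):
    """Score scaled so that 0 corresponds to a random agent and 100 to a human
    player, following the normalization described in the Figure 2 caption."""
    return 100.0 * (agent - random_agent) / (human - random_agent)

print(normalized_score(agent=5200.0, random_agent=200.0, human=10200.0))  # 50.0
```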
1611.02247 | 24 | \widehat{\mathrm{Var}}^* = E_{\rho_\pi}[\mathrm{Var}_{a_t}(\hat{A}(s_t,a_t) - \eta(s_t)\bar{A}(s_t,a_t))] = \widehat{\mathrm{Var}} + E_{\rho_\pi}[-2\eta(s_t)\,\mathrm{Cov}_{a_t}(\hat{A}(s_t,a_t),\bar{A}(s_t,a_t)) + \eta(s_t)^2\,\mathrm{Var}_{a_t}(\bar{A}(s_t,a_t))] \quad (11)
Since E_\pi[\hat{A}(s_t,a_t)] = E_\pi[\bar{A}(s_t,a_t)] = 0, the terms can be simplified as below,
\mathrm{Cov}_{a_t}(\hat{A},\bar{A}) = E_\pi[\hat{A}(s_t,a_t)\,\bar{A}(s_t,a_t)] \quad (12)
\mathrm{Var}_{a_t}(\bar{A}) = E_\pi[\bar{A}(s_t,a_t)^2] = \nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}^{\top}\,\Sigma_\theta(s_t)\,\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)}
where Σ_θ(s_t) is the covariance matrix of the stochastic policy π_θ. The nice property of Eq. 11 is that Var_{a_t}(Ā) is analytical and Cov_{a_t}(Â,Ā) can be estimated with a single action sample. Using this estimate, we propose adaptive variants of Q-Prop that regulate the variance of the gradient estimate. | 1611.02247#24 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
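Eqs. 12-13 above make the two ingredients of the adaptive scheme computable: Cov_{a_t}(Â, Ā) from a single on-policy action sample and Var_{a_t}(Ā) in closed form from the policy covariance. The NumPy sketch below does this for a diagonal Gaussian policy; the critic gradient and the advantage value are made-up placeholders, not numbers from the paper.

```python
import numpy as np

# Sketch of Eqs. 12-13 for a diagonal-Gaussian policy: Cov(A_hat, A_bar) from a
# single on-policy action sample, Var(A_bar) in closed form via g^T Sigma g.
rng = np.random.default_rng(1)
sigma = np.array([0.3, 0.5])                    # per-dimension policy std (placeholder)
q_grad = np.array([1.2, -0.4])                  # grad_a Q_w(s, a) at a = mu_theta(s) (placeholder)

mu = np.zeros(2)
a = mu + sigma * rng.normal(size=2)             # one on-policy action sample
A_hat = 0.9                                     # stand-in Monte Carlo advantage estimate
A_bar = q_grad @ (a - mu)                       # Taylor-expansion advantage

cov_single_sample = A_hat * A_bar               # single-sample estimate of Cov (Eq. 12)
var_analytic = q_grad @ (sigma**2 * q_grad)     # g^T Sigma g for a diagonal Gaussian (Eq. 13)
eta_star = cov_single_sample / var_analytic     # fully adaptive weighting
print(cov_single_sample, var_analytic, eta_star)
```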
1611.02163 | 25 | 6
                     1/4 size of D compared to G                1/2 size of D compared to G
Unrolling steps      Modes generated     KL(model || data)      Modes generated     KL(model || data)
0                    30.6 ± 20.73        5.99 ± 0.42            628.0 ± 140.9       2.58 ± 0.751
1                    65.4 ± 34.75        5.911 ± 0.14           523.6 ± 55.768      2.44 ± 0.26
5                    236.4 ± 63.30       4.67 ± 0.43            732.0 ± 44.98       1.66 ± 0.090
Table 1: Unrolled GANs cover more discrete modes when modeling a dataset with 1,000 data modes, corresponding to all combinations of three MNIST digits (10^3 digit combinations). The number of modes covered is given for different numbers of unrolling steps, and for two different architectures. The reverse KL divergence between model and data is also given. Standard error is provided for both measures.
3.3 MODE AND MANIFOLD COLLAPSE USING AUGMENTED MNIST | 1611.02163#25 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 25 | In this section we describe our experiments with RLE's multi-agent capabilities. We consider the case where the number of agents is n = 2 and the goals of the agents are opposite, as in r1 = -r2. This scheme is known as fully competitive (Buşoniu et al., 2010). We used the simple single-agent RL approach (as described by Buşoniu et al. (2010), section 5.4.1), which is to apply the single-agent approach to the multi-agent case. This approach proved useful in Crites and Barto (1996) and Matarić (1997). More elaborate schemes are possible, such as the minimax-Q algorithm (Littman, 1994), (Littman, 2001); these may be explored in future work. We conducted three experiments on this setup: the first was to train two different agents against the in-game AI, as done in previous sections, and evaluate their performance by letting them compete against each other. Here, rather than achieving the highest score, the goal was to win a tournament consisting of 50 rounds, as is common in human-player competitions. The second experiment was to initially train two agents against the in-game AI, and resume the training while competing against each other. In this case, we evaluated the agent by playing again against the in-game AI, separately. Finally, in our last experiment we tried to boost the agent's capabilities by alternating its opponents, switching between the in-game AI and other trained agents. | 1611.02205#25 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
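The fully competitive setup above trains each agent with an ordinary single-agent algorithm while the opponent's reward is the negation of its own (r2 = -r1). A heavily simplified sketch of that loop is below; the environment and agent interfaces (`reset`, `step`, `act`, `observe`) are hypothetical, not RLE's API.

```python
# Sketch of the fully competitive two-agent setup: each agent is trained with a
# single-agent algorithm, treating the other agent as part of the environment.
# All interfaces below are hypothetical stand-ins.

def play_episode(env, agent_a, agent_b):
    obs_a, obs_b = env.reset()
    done, score_a = False, 0.0
    while not done:
        act_a = agent_a.act(obs_a)
        act_b = agent_b.act(obs_b)
        (obs_a, obs_b), r_a, done = env.step(act_a, act_b)
        agent_a.observe(obs_a, r_a)        # agent A learns from r1 = r_a
        agent_b.observe(obs_b, -r_a)       # agent B sees the zero-sum reward r2 = -r1
        score_a += r_a
    return score_a

def tournament(env, agent_a, agent_b, rounds=50):
    wins_a = sum(play_episode(env, agent_a, agent_b) > 0 for _ in range(rounds))
    return wins_a, rounds - wins_a
```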
1611.02247 | 25 | Adaptive Q-Prop. The optimal state-dependent factor η(s_t) can be computed per state, according to η*(s_t) = Cov_{a_t}(Â,Ā)/Var_{a_t}(Ā). This provides maximum reduction in variance according to Eq. 11. Substituting η*(s_t) into Eq. 11, we get \widehat{Var}* = E_{ρ_π}[(1 - ρ_corr(Â,Ā)^2) Var_{a_t}(Â)], where ρ_corr is the correlation coefficient, which achieves guaranteed variance reduction if at any state Â is correlated with Ā. We call this the fully adaptive Q-Prop method. An important conclusion from this analysis is that, in adaptive Q-Prop, the critic Q_w does not necessarily need to be approximating Q_π well to produce good results. Its Taylor expansion merely needs to be correlated with Â, positively or even negatively. This is in contrast with actor-critic methods, where performance is greatly dependent on the absolute accuracy of the critic's approximation. | 1611.02247#25 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 26 | In the first, smaller-scale experiment (program search space size ~ 2 × 10^6) we trained the neural network on programs of length T = 3, and the test programs were of the same length. Table 1 shows the per-task timeout required such that a solution could be found for given proportions of the test tasks (in time less than or equal to the timeout). For example, in a hypothetical test set with 4 tasks and runtimes of 3s, 2s, 1s, 4s, the timeout required to solve 50% of tasks would be 2s. More detailed experimental results are discussed in Appendix B. | 1611.01989#26 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
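The worked example above (runtimes 3s, 2s, 1s, 4s; 50% → 2s) corresponds to taking an order statistic of the per-task runtimes. A small helper that reproduces it:

```python
import math

def timeout_to_solve(runtimes_s, fraction):
    """Smallest per-task timeout such that at least `fraction` of the tasks are
    solved within it; reproduces the worked example in the chunk above."""
    ordered = sorted(runtimes_s)
    k = max(1, math.ceil(len(ordered) * fraction))
    return ordered[k - 1]

print(timeout_to_solve([3, 2, 1, 4], 0.5))   # -> 2, matching the example
```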
1611.02163 | 26 | 3.3 MODE AND MANIFOLD COLLAPSE USING AUGMENTED MNIST
GANs suffer from two different types of model collapse: collapse to a subset of data modes, and collapse to a sub-manifold within the data distribution. In these experiments we isolate both effects using artificially constructed datasets, and demonstrate that unrolling can largely rescue both types of collapse.
3.3.1 DISCRETE MODE COLLAPSE
To explore the degree to which GANs drop discrete modes in a dataset, we use a technique similar to one from (Che et al., 2016). We construct a dataset by stacking three randomly chosen MNIST digits, so as to construct an RGB image with a different MNIST digit in each color channel. This new dataset has 1,000 distinct modes, corresponding to each combination of the ten MNIST classes in the three channels. | 1611.02163#26 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
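The stacked-MNIST construction above is straightforward to reproduce. The sketch below builds such a dataset from any (images, labels) pair and encodes each image's mode as a three-digit class id; the random MNIST stand-in at the bottom is only there so the snippet runs on its own.

```python
import numpy as np

def stack_mnist(images, labels, n_samples, seed=0):
    """Build a stacked-MNIST dataset as described above: three randomly chosen
    digits become the R, G and B channels of one image, giving 10**3 = 1000
    discrete modes. `images` has shape (N, 28, 28); the mode id is a 3-digit number."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(images), size=(n_samples, 3))
    stacked = np.stack([images[idx[:, c]] for c in range(3)], axis=-1)   # (n, 28, 28, 3)
    modes = labels[idx[:, 0]] * 100 + labels[idx[:, 1]] * 10 + labels[idx[:, 2]]
    return stacked, modes

# Random stand-in for MNIST so the sketch runs end to end.
fake_images = np.random.rand(1000, 28, 28)
fake_labels = np.random.randint(0, 10, size=1000)
x, m = stack_mnist(fake_images, fake_labels, n_samples=5)
print(x.shape, m)
```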
1611.02205 | 26 | the goal was to win a tournament which consist of 50 rounds, as common in human-player competitions. The second experiment was to initially train two agents against the in-game AI, and resume the training while competing against each other. In this case, we evaluated the agent by playing again against the in-game AI, separately. Finally, in our last experiment we try to boost the agent capabilities by alternated itâs opponents, switching between the in-game AI and other trained agents. | 1611.02205#26 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 26 | Conservative and Aggressive Q-Prop. In practice, the single-sample estimate of Cov_{a_t}(Â,Ā) has high variance itself, and we propose the following two practical implementations of adaptive Q-Prop: (1) η(s_t) = 1 if Cov_{a_t}(Â,Ā) > 0 and η(s_t) = 0 otherwise, and (2) η(s_t) = sign(Cov_{a_t}(Â,Ā)). The first implementation, which we call conservative Q-Prop, can be thought of as a more conservative version of Q-Prop, which effectively disables the control variate for some samples of the states. This is sensible, as if Â and Ā are negatively correlated, it is likely that the critic is very poor. The second variant can correspondingly be termed aggressive Q-Prop, since it makes more liberal use of the control variate.
3.3. Q-PROP ALGORITHM
Pseudo-code for the adaptive Q-Prop algorithm is provided in Algorithm 1. It is a mixture of policy gradient and actor-critic. At each iteration, it first rolls out the stochastic policy to collect on-policy
Algorithm 1 Adaptive Q-Prop | 1611.02247#26 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
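The conservative and aggressive variants above reduce to a sign test on the single-sample covariance estimate. A minimal helper, with invented example values:

```python
def qprop_eta(cov_single_sample, mode="conservative"):
    """Control-variate strength eta(s_t) from the sign of the single-sample
    covariance estimate, for the two practical variants described above."""
    if mode == "conservative":
        return 1.0 if cov_single_sample > 0 else 0.0
    if mode == "aggressive":
        return 1.0 if cov_single_sample > 0 else (-1.0 if cov_single_sample < 0 else 0.0)
    raise ValueError(f"unknown mode: {mode}")

print(qprop_eta(0.42), qprop_eta(-0.13, mode="aggressive"))   # 1.0 -1.0
```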
1611.01989 | 27 | In the main experiment, we tackled a large-scale problem of searching for programs consistent with input-output examples generated from programs of length T = 5 (search space size on the order of 10^10), supported by a neural network trained with programs of shorter length T = 4. Here, we only consider P = 100 programs for reasons of computational efficiency, after having verified that this does not significantly affect the results in Table 1. The table in Fig. 3a shows significant speedups for DFS, Sort and add enumeration, and λ2 with Sort and add enumeration, the search techniques capable of solving the search problem in reasonable time frames. Note that Sort and add enumeration without the neural network (using prior probabilities of functions) exceeded the 10^4 second timeout in two cases, so the relative speedups shown are crude lower bounds.
Timeout needed to solve        20%       40%       60%
DFS          Baseline          163s      2887s     6832s
             DeepCoder         24s       514s      2654s
             Speedup           6.8×      5.6×      2.6×
Enumeration  Baseline          8181s     >10^4s    >10^4s
             DeepCoder         9s        264s      4640s
             Speedup           907×      >37×      >2×
λ2           Baseline          463s
             DeepCoder         48s
             Speedup           9.6×
(a) | 1611.01989#27 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 27 | We train a GAN on this dataset, and generate samples from the trained model (25,600 samples for all experiments). We then compute the predicted class label of each color channel using a pre-trained MNIST classifier. To evaluate performance, we use two metrics: the number of modes for which the generator produced at least one sample, and the KL divergence between the model and the expected data distribution. Within this discrete label space, a KL divergence can be estimated tractably between the generated samples and the data distribution over classes, where the data distribution is a uniform distribution over all 1,000 classes.
As presented in Table 1, as the number of unrolling steps is increased, both mode coverage and reverse KL divergence improve. Contrary to (Che et al., 2016), we found that reasonably sized models (such as the one used in Section 3.4) covered all 1,000 modes even without unrolling. As such we use smaller convolutional GAN models. Details on the models used are provided in Appendix E. | 1611.02163#27 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
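The evaluation described above counts covered modes among 25,600 samples and computes the reverse KL against a uniform distribution over the 1,000 classes. The sketch below performs both computations given per-sample mode ids; in the paper's setup those ids come from a pretrained MNIST classifier applied per colour channel, which is replaced here by random stand-in labels so the snippet runs.

```python
import numpy as np

def mode_stats(predicted_modes, n_modes=1000):
    """Count covered modes and compute KL(model || data), where the data
    distribution is uniform over all `n_modes` classes, as in the evaluation
    described above. `predicted_modes` holds one integer in [0, n_modes) per
    generated sample."""
    counts = np.bincount(predicted_modes, minlength=n_modes).astype(float)
    p_model = counts / counts.sum()
    covered = int((counts > 0).sum())
    p_data = 1.0 / n_modes
    nonzero = p_model > 0
    kl_model_data = float(np.sum(p_model[nonzero] * np.log(p_model[nonzero] / p_data)))
    return covered, kl_model_data

modes = np.random.randint(0, 1000, size=25600)   # stand-in for per-sample classifier output
print(mode_stats(modes))
```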
1611.02205 | 27 | 4.3.1 MULTI-AGENT REINFORCEMENT LEARNING RESULTS
We chose the game Mortal Kombat, a two-character, side-viewed fighting game (a screenshot of the game can be seen in Figure (1)), as a testbed for the above, as it exhibits favorable properties: both players share the same screen, and the agent's optimal policy is heavily dependent on the rival's behavior, unlike in racing games, for example. In order to evaluate two agents fairly, both were trained using the same characters, maintaining the identity of rival and agent. Furthermore, to remove the impact of the starting positions of both agents on their performances, the starting positions were initialized randomly.
In the ï¬rst experiment we evaluated all combinations of DQN against D-DQN and Dueling D-DQN. Each agent was trained against the in-game AI until convergence. Then 50 matches were performed between the two agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 against D-DQN. D-DQN lost 26 time to Dueling D-DQN. This win balance isnât far from the random case, since the algorithms converged into a policy in which movement towards the opponent is not
8 | 1611.02205#27 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 27 | 1: Initialize w for critic Q_w, θ for stochastic policy π_θ, and replay buffer R ← ∅.
2: repeat
3:   for e = 1, ..., E do                    ▷ Collect E episodes of on-policy experience using π_θ
4:     s_{0,e} ~ p(s_0)
5:     for t = 0, ..., T−1 do
6:       a_{t,e} ~ π_θ(·|s_{t,e}),  s_{t+1,e} ~ p(·|s_{t,e}, a_{t,e}),  r_{t,e} = r(s_{t,e}, a_{t,e})
7:   Add batch data B = {s_{0:T,1:E}, a_{0:T−1,1:E}, r_{0:T−1,1:E}} to replay buffer R
8:   Take E·T gradient steps on Q_w using R and π_θ
9:   Fit V_φ(s_t) using B
10:  Compute Â_{t,e} using GAE(λ) and Ā_{t,e} using Eq. 7
11:  Set η_{t,e} based on Section 3.2
12:  Compute and center the learning signals l_{t,e} = Â_{t,e} − η_{t,e} Ā_{t,e}
13:  Compute ∇_θ J(θ) ≈ (1/(ET)) Σ_e Σ_t [∇_θ log π_θ(a_{t,e}|s_{t,e}) l_{t,e} + η_{t,e} ∇_a Q_w(s_{t,e},a)|_{a=μ_θ(s_{t,e})} ∇_θ μ_θ(s_{t,e})]
14:  Take a gradient step on θ using | 1611.02247#27 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 28 | Figure 3: Search speedups on programs of length T = 5 and influence of the length of training programs.
We hypothesize that the substantially larger performance gains on Sort and add schemes as compared to gains on DFS can be explained by the fact that the choice of attribute function (predicting presence of functions anywhere in the program) and learning objective of the neural network are better matched to the Sort and add schemes. Indeed, a more appropriate attribute function for DFS would be one that is more informative of the functions appearing early in the program, since exploring an incorrect first function is costly with DFS. On the other hand, the discussion in Sect. 4.5 provides theoretical indication that ignoring the correlations between functions is not cataclysmic for Sort and add enumeration, since a Rank loss that upper bounds the Sort and add runtime can still be minimized.
In Appendix G we analyse the performance of the neural networks used in these experiments, by investigating which attributes (program instructions) tend to be difficult to distinguish from each other.
# 5.2 GENERALIZATION ACROSS PROGRAM LENGTHS
To investigate the encoder's generalization ability across programs of different lengths, we trained a network to predict used functions from input-output examples that were generated from programs
Published as a conference paper at ICLR 2017 | 1611.01989#28 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
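The "Sort and add" scheme discussed in the excerpt above can be illustrated with a toy enumerator: functions are sorted by the network's predicted probabilities, the search first enumerates programs over the most probable subset, and the active set grows whenever no program is found. The four-function DSL, the probability values, and the helper names below are invented for illustration and are far simpler than the paper's DSL.

```python
# Minimal sketch of prediction-guided "Sort and add" enumeration (illustrative only).
from itertools import product

DSL = {"reverse": lambda xs: xs[::-1],
       "sort":    lambda xs: sorted(xs),
       "head":    lambda xs: xs[:1],
       "double":  lambda xs: [2 * x for x in xs]}

def satisfies(prog, examples):
    def run(xs):
        for f in prog:
            xs = DSL[f](xs)
        return xs
    return all(run(inp) == out for inp, out in examples)

def sort_and_add(pred_probs, examples, max_len=2):
    """Enumerate programs over the k most probable functions, growing k on failure."""
    ranked = sorted(DSL, key=lambda f: -pred_probs.get(f, 0.0))
    for k in range(1, len(ranked) + 1):          # "add" one more function each round
        active = ranked[:k]
        for length in range(1, max_len + 1):
            for prog in product(active, repeat=length):
                if satisfies(prog, examples):
                    return list(prog)
    return None

# Example: the (hypothetical) network rates "sort" and "reverse" as most likely.
examples = [([3, 1, 2], [3, 2, 1])]
print(sort_and_add({"sort": 0.9, "reverse": 0.8, "head": 0.1, "double": 0.2}, examples))
# -> ['sort', 'reverse']
```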
1611.02163 | 28 | We observe an additional interesting effect in this experiment. The benefits of unrolling increase as the discriminator size is reduced. We believe unrolling effectively increases the capacity of the discriminator. The unrolled discriminator can better react to any specific way in which the generator is producing non-data-like samples. When the discriminator is weak, the positive impact of unrolling is thus larger.
# 3.3.2 MANIFOLD COLLAPSE | 1611.02163#28 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 28 | 8
required rather than generalize the game. Therefore, in many episodes, little interaction between the two agents occurs, leading to a semi-random outcome.
In our second experiment, we continued the training process of a the D-DQN network by letting it compete against the Dueling D-DQN network. We evaluated the re-trained network by playing 30 episodes against the in-game AI. After training, D-DQN was able to win 28 out of 30 games, yet when faced again against the in-game AI its performance deteriorated drastically (from an average of 17000 to an average of -22000). This demonstrated a form of catastrophic forgetting (Goodfellow et al., 2013) even though the agents played the same game. | 1611.02205#28 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.01989 | 29 | of length Ttrain ∈ {1,...,4}. We then used each of these networks to predict functions on 5 test sets containing input-output examples generated from programs of lengths Ttest ∈ {1,...,5}, respectively. The test programs of a given length T were semantically disjoint from all training programs of the same length T and also from all training and test programs of shorter lengths T′ < T.
For each of the combinations of Ttrain and Ttest, Sort and add enumerative search was run both with and without using the neural network's predictions (in the latter case using prior probabilities) until it solved 20% of the test set tasks. Fig. 3b shows the relative speedup of the solver having access to predictions from the trained neural networks. These results indicate that the neural networks are able to generalize beyond programs of the same length that they were trained on. This is partly due to the search procedure on top of their predictions, which has the opportunity to correct for the presence of functions that the neural network failed to predict. Note that a sequence-to-sequence model trained on programs of a fixed length could not be expected to exhibit this kind of generalization ability.
5.3 ALTERNATIVE MODELS | 1611.01989#29 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 29 | # 3.3.2 MANIFOLD COLLAPSE
In addition to discrete modes, we examine the effect of unrolling when modeling continuous manifolds. To get at this quantity, we constructed a dataset consisting of colored MNIST digits. Unlike in the previous experiment, a single MNIST digit was chosen, and then assigned a single monochromatic color. With a perfect generator, one should be able to recover the distribution of colors used to generate the digits. We use colored MNIST digits so that the generator also has to model the digits, which makes the task sufficiently complex that the generator is unable to perfectly solve it. The color of each digit is sampled from a 3D normal distribution. Details of this dataset are provided in Appendix F. We will examine the distribution of colors in the samples generated by the trained GAN. As will also be true in the CIFAR10 example in Section 3.4, the lack of diversity in generated colors is almost invisible using only visual inspection of the samples. Samples can be found in Appendix F.
Published as a conference paper at ICLR 2017 | 1611.02163#29 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
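The colored-MNIST construction described in the excerpt above can be sketched in a few lines: each grayscale digit is multiplied by a single RGB colour drawn from a 3D Gaussian. The mean, standard deviation, and clipping below are placeholder assumptions; Appendix F of the paper specifies the actual distribution.

```python
# Illustrative construction of coloured MNIST digits (parameters are assumptions).
import numpy as np

def colorize(digits, rng=None, mean=(0.5, 0.5, 0.5), std=0.25):
    """digits: (N, H, W) grayscale in [0, 1] -> (N, H, W, 3) coloured digits."""
    rng = rng or np.random.default_rng(0)
    # One colour per digit, sampled from a 3D normal and clipped to valid RGB range.
    colors = rng.normal(mean, std, size=(len(digits), 3)).clip(0.0, 1.0)
    return digits[..., None] * colors[:, None, None, :]

fake_digits = np.random.rand(16, 28, 28)        # stand-in for MNIST images
colored = colorize(fake_digits)
print(colored.shape)                            # (16, 28, 28, 3)
```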
1611.02205 | 29 | In our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in-game AI, a trained DQN agent and a trained Dueling-DQN agent, in an alternating manner, such that in each episode a different rival was playing as the opponent, with the intention of preventing the agent from learning a policy suitable for just one opponent. The new agent was able to achieve a score of 162,966 (compared to the "normal" dueling D-DQN which achieved 169,633). As a new and objective measure of generalization, we've configured the in-game AI difficulty to be "very hard" (as opposed to the default "medium" difficulty). In this metric the alternating version achieved 83,400, compared to -33,266 for the dueling D-DQN trained in the default setting, proving that the agent learned to generalize to policies which weren't observed during training.
4.4 FUTURE CHALLENGES | 1611.02205#29 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 29 | samples, adds the batch to a replay buffer, takes a few gradient steps on the critic, computes Â and Ā, and finally applies a gradient step on the policy π_θ. In our implementation, the critic Q_w is fitted with off-policy TD learning using the same techniques as in DDPG (Lillicrap et al., 2016):
w* = arg min_w E_{s_t∼ρ_β(·), a_t∼β(·|s_t)} [(r(s_t, a_t) + γ E_π[Q′(s_{t+1}, a_{t+1})] − Q_w(s_t, a_t))²].   (13)
V_φ is fitted with the same technique as in (Schulman et al., 2016). Generalized advantage estimation (GAE) (Schulman et al., 2016) is used to estimate Â. The policy update can be done by any method that utilizes the first-order gradient and possibly the on-policy batch data, which includes trust region policy optimization (TRPO) (Schulman et al., 2015). Importantly, this is just one possible implementation of Q-Prop, and in Appendix C we show a more general form that can interpolate between pure policy gradient and off-policy actor-critic.
3.4 LIMITATIONS | 1611.02247#29 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
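A minimal sketch of the off-policy critic regression in Eq. 13 of the excerpt above: Q_w is fit by TD learning against a target r + γ Q′(s′, a′), with a′ taken from the current policy (here its mean) and Q′ a slowly updated target network, as in DDPG. The network sizes, the Polyak rate τ, and the stand-in policy below are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of DDPG-style critic fitting for Q_w (illustrative names and sizes).
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Q_w(s, a) as a small MLP over the concatenated state-action vector."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def td_critic_step(q, q_target, policy_mean, batch, opt, gamma=0.99, tau=0.005):
    s, a, r, s_next = batch                        # sampled from the replay buffer
    with torch.no_grad():                          # regression target r + gamma * Q'(s', a')
        target = r + gamma * q_target(s_next, policy_mean(s_next))
    loss = ((q(s, a) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                          # Polyak-average the target network
        for p, p_t in zip(q.parameters(), q_target.parameters()):
            p_t.mul_(1 - tau).add_(tau * p)
    return loss.item()

# Toy usage:
obs_dim, act_dim, batch_size = 4, 2, 64
q, q_t = QNet(obs_dim, act_dim), QNet(obs_dim, act_dim)
q_t.load_state_dict(q.state_dict())
policy_mean = nn.Linear(obs_dim, act_dim)          # stands in for mu_theta(s)
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
batch = (torch.randn(batch_size, obs_dim), torch.randn(batch_size, act_dim),
         torch.randn(batch_size), torch.randn(batch_size, obs_dim))
print(td_critic_step(q, q_t, policy_mean, batch, opt))
```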
1611.01989 | 30 | 5.3 ALTERNATIVE MODELS
Encoder We evaluated replacing the feed-forward architecture encoder (Sect. 4.3) with an RNN, a natural baseline. Using a GRU-based RNN we were able to achieve results almost as good as using the feed-forward architecture, but found the RNN encoder more difficult to train.
Decoder We also considered a purely neural network-based approach, where an RNN decoder is trained to predict the entire program token-by-token. We combined this with our feed-forward encoder by initializing the RNN using the pooled final layer of the encoder. We found it substantially more difficult to train an RNN decoder as compared to the independent binary classifiers employed above. Beam search was used to explore likely programs predicted by the RNN, but it only led to a solution comparable with the other techniques when searching for programs of lengths T ≤ 2, where the search space size is very small (on the order of 10^3). Note that using an RNN for both the encoder and decoder corresponds to a standard sequence-to-sequence model. However, we do not rule out that a more sophisticated RNN decoder or training procedure could possibly be more successful.
# 6 RELATED WORK | 1611.01989#30 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 30 | Unrolling steps / JS divergence (1/4 layer size) / JS divergence (1/2 layer size) / JS divergence (1/1 layer size)
0: 0.073 ± 0.0058 / 0.095 ± 0.011 / 0.034 ± 0.0034
1: 0.142 ± 0.028 / 0.119 ± 0.010 / 0.050 ± 0.0026
5: 0.049 ± 0.0021 / 0.055 ± 0.0049 / 0.027 ± 0.0028
10: 0.075 ± 0.012 / 0.074 ± 0.016 / 0.025 ± 0.00076
Table 2: Unrolled GANs better model a continuous distribution. GANs are trained to model ran- domly colored MNIST digits, where the color is drawn from a Gaussian distribution. The JS diver- gence between the data and model distributions over digit colors is then reported, along with standard error in the JS divergence. More unrolling steps, and larger models, lead to better JS divergence. | 1611.02163#30 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 30 | 4.4 FUTURE CHALLENGES
As demonstrated, RLE presents numerous challenges that have yet to be answered. In addition to being able to learn all available games, the task of learning games in which the reward delay is extreme, such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games, such as Super Mario, feature several stages that differ in background and level structure. The task of generalizing platform games, as in learning on one stage and being tested on another, is another unexplored challenge. Likewise, surpassing human performance remains a challenge, since current state-of-the-art algorithms still struggle with many of the SNES games.
# 5 CONCLUSION | 1611.02205#30 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 30 | 3.4 LIMITATIONS
A limitation with Q-Prop is that if data collection is very fast, e.g. using fast simulators, the compute time per episode is bound by the critic training at each iteration, and similar to that of DDPG and usually much more than that of TRPO. However, in applications where data collection speed is the bottleneck, there is sufficient time between policy updates to fit Q_w well, which can be done asynchronously from the data collection, and the compute time of Q-Prop will be about the same as that of TRPO.
Another limitation is the robustness to bad critics. We empirically show that our conservative Q-Prop is more robust than standard Q-Prop and much more robust than pure off-policy actor-critic methods such as DDPG; however, estimating when an off-policy critic is reliable or not is still a fundamental problem that shall be further investigated. We can also alleviate this limitation by adopting more stable off-policy critic learning techniques such as Retrace(λ) (Munos et al., 2016).
# 4 RELATED WORK | 1611.02247#30 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 31 | Machine Learning for Inductive Program Synthesis. There is relatively little work on using machine learning for programming by example. The most closely related work is that of Menon et al. (2013), in which a hand-coded set of features of input-output examples are used as âclues.â When a clue appears in the input-output examples (e.g., the output is a permutation of the input), it reweights the probabilities of productions in a probabilistic context free grammar by a learned amount. This work shares the idea of learning to guide the search over program space conditional on input-output examples. One difference is in the domains. Menon et al. (2013) operate on short string manipulation programs, where it is arguably easier to hand-code features to recognize patterns in the input-output examples (e.g., if the outputs are always permutations or substrings of the input). Our work shows that there are strong cues in patterns in input-output examples in the domain of numbers and lists. However, the main difference is the scale. Menon et al. (2013) learns from a small (280 examples), manually-constructed dataset, which limits the capacity of the machine learning model that can be trained. Thus, | 1611.01989#31 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 31 | Figure 4: Visual perception of sample quality and diversity is very similar for models trained with different numbers of unrolling steps. Actual sample diversity is higher with more unrolling steps. Each pane shows samples generated after training a model on CIFAR10 with 0, 1, 5, and 10 steps of unrolling.
In order to recover the color the GAN assigned to the digit, we used k-means with 2 clusters to pick out the foreground color from the background. We then performed this transformation for both the training data and the generated images. Next we fit a Gaussian kernel density estimator to both distributions over digit colors. Finally, we computed the JS divergence between the model and data distributions over colors. Results can be found in Table 2 for several model sizes. Details of the models are provided in Appendix F.
In general, the best performing models are unrolled for 5-10 steps, and larger models perform better than smaller models. Counter-intuitively, taking 1 unrolling step seems to hurt this measure of diversity; we suspect that this is due to it introducing oscillatory dynamics into training. Taking more unrolling steps, however, leads to improved performance.
IMAGE MODELING OF CIFAR10 | 1611.02163#31 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
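A hedged sketch of the colour-diversity metric described in the excerpt above: the foreground colour of each digit is recovered with 2-cluster k-means, Gaussian kernel density estimates are fit to the real and generated colour distributions, and their JS divergence is estimated by Monte Carlo. The scikit-learn calls, bandwidth, and sample counts are our assumptions; the excerpt does not specify these details.

```python
# Illustrative colour extraction and JS-divergence estimate (assumed parameters).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

def foreground_color(img):
    """img: (H, W, 3) array in [0, 1]; return the brighter of two k-means centres."""
    centers = KMeans(n_clusters=2, n_init=10, random_state=0).fit(img.reshape(-1, 3)).cluster_centers_
    return centers[np.argmax(centers.sum(axis=1))]

def js_divergence(colors_p, colors_q, n_samples=2000, bandwidth=0.05):
    kde_p = KernelDensity(bandwidth=bandwidth).fit(colors_p)
    kde_q = KernelDensity(bandwidth=bandwidth).fit(colors_q)
    xp = kde_p.sample(n_samples, random_state=0)
    xq = kde_q.sample(n_samples, random_state=1)
    log_m = lambda x: np.logaddexp(kde_p.score_samples(x), kde_q.score_samples(x)) - np.log(2.0)
    kl_pm = np.mean(kde_p.score_samples(xp) - log_m(xp))   # KL(p || m), Monte Carlo
    kl_qm = np.mean(kde_q.score_samples(xq) - log_m(xq))   # KL(q || m), Monte Carlo
    return 0.5 * (kl_pm + kl_qm)

# Toy usage with random stand-ins for extracted digit colours:
real_colors = np.random.rand(500, 3)
fake_colors = 0.5 * np.random.rand(500, 3)
print(js_divergence(real_colors, fake_colors))
```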
1611.02205 | 31 | # 5 CONCLUSION
We introduced a rich environment for evaluating and developing reinforcement learning algorithms which presents significant challenges to current state-of-the-art algorithms. In comparison to other environments, RLE provides a large number of games with access to both the screen and the in-game state. The modular implementation we chose allows extensions of the environment with new consoles and games, thus ensuring the relevance of the environment to RL algorithms for years to come (see Table (2)). We've encountered several games in which the learning process is highly dependent on the reward definition. This issue can be addressed and explored in RLE, as reward definition can be done easily. The challenges presented in RLE consist of: 3D interpretation, delayed reward, noisy background, stochastic AI behavior and more. Although some algorithms were able to play successfully on part of the games, to fully overcome these challenges an agent must incorporate both technique and strategy. Therefore, we believe that RLE is a great platform for future RL research.
# 6 ACKNOWLEDGMENTS
The authors are grateful to the Signal and Image Processing Lab (SIPL) staff, Alfred Agrell and the LibRetro community for their support, and Marc G. Bellemare for his valuable input.
# REFERENCES | 1611.02205#31 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 31 | # 4 RELATED WORK
Variance reduction in policy gradient methods is a long-standing problem with a large body of prior work (Weaver & Tao, 2001; Greensmith et al., 2004; Schulman et al., 2016). However, exploration of action-dependent control variates is relatively recent, with most work focusing instead on simpler baselining techniques (Ross, 2006). A subtle exception is compatible feature approximation (Sutton et al., 1999) which can be viewed as a control variate as explained in Appendix B. Another exception is doubly robust estimator in contextual bandits (Dudik et al., 2011), which uses a different control variate whose bias cannot be tractably corrected. Control variates were explored recently not in RL but for approximate inference in stochastic models (Paisley et al., 2012), and the closest related work in that domain is the MuProp algorithm (Gu et al., 2016a) which uses a mean-field network as a surrogate for backpropagating a deterministic gradient through stochastic discrete variables. MuProp is not directly applicable to model-free RL because the dynamics are unknown; however, it
7% | 1611.02247#31 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 32 | al. (2013) learns from a small (280 examples), manually-constructed dataset, which limits the capacity of the machine learning model that can be trained. Thus, it forces the machine learning component to be relatively simple. Indeed, Menon et al. (2013) use a log-linear model and rely on hand-constructed features. LIPS automatically generates training data, which yields datasets with millions of programs and enables high-capacity deep learning models to be brought to bear on the problem. | 1611.01989#32 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 32 | IMAGE MODELING OF CIFAR10
Here we test our technique on a more traditional convolutional GAN architecture and task, similar to those used in (Radford et al., 2015; Salimans et al., 2016). In the previous experiments we tested models where the standard GAN training algorithm would not converge. In this section we improve a standard model by reducing its tendency to engage in mode collapse. We ran 4 configurations of this model, varying the number of unrolling steps to be 0, 1, 5, or 10. Each configuration was run 5 times with different random seeds. For full training details see Appendix D. Samples from each of the 4 configurations can be found in Figure 4. There is no obvious difference in visual quality across these model configurations. Visual inspection, however, provides only a poor measure of sample diversity.
By training with an unrolled discriminator, we expect to generate more diverse samples which more closely resemble the underlying data distribution. We introduce two techniques to examine sample diversity: inference via optimization, and pairwise distance distributions.
Published as a conference paper at ICLR 2017 | 1611.02163#32 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 32 | # REFERENCES
M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, June 2013.
M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868, 2016.
B. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. Hierarchical reinforcement learning for robot navigation. In ESANN, 2013.
G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
L. Bus¸oniu, R. BabuËska, and B. De Schutter. Multi-agent reinforcement learning: An overview. In Innovations in Multi-Agent Systems and Applications-1, pages 183â221. Springer, 2010. | 1611.02205#32 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 32 | Published as a conference paper at ICLR 2017 can be if the dynamics are learned as in model-based RL (Atkeson & Santamaria, 1997; Deisenroth & Rasmussen, 2011). This model-based Q-Prop is itself an interesting direction of research as it effectively corrects bias in model-based learning. Part of the benefit of Q-Prop is the ability to use off-policy data to improve on-policy policy gra- dient methods. Prior methods that combine off-policy data with policy gradients either introduce bias (Sutton et al., 1999; Silver et al., 2014) or use importance weighting, which is known to re- sult in degenerate importance weights in high dimensions, resulting in very high variance (Precup, 2000; Levine & Koltun, 2013). Q-Prop provides a new approach for using off-policy data to reduce variance without introducing further bias. Lastly, since Q-Prop uses both on-policy policy updates and off-policy critic learning, it can take advantage of prior work along both lines of research. We chose to implement Q-Prop on top of TRPO-GAE primarily for the purpose of enabling a fair comparison in the experiments, but com- bining Q-Prop with other | 1611.02247#32 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 33 | Learning Representations of Program State. Piech et al. (2015) propose to learn joint em- beddings of program states and programs to automatically extend teacher feedback to many similar programs in the MOOC setting. This work is similar in that it considers embedding program states, but the domain is different, and it otherwise speciï¬cally focuses on syntactic differences between semantically equivalent programs to provide stylistic feedback. Li et al. (2016) use graph neural networks (GNNs) to predict logical descriptions from program states, focusing on data structure shapes instead of numerical and list data. Such GNNs may be a suitable architecture to encode states appearing when extending our DSL to handle more complex data structures.
Published as a conference paper at ICLR 2017 | 1611.01989#33 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 33 | Unrolling steps / Average MSE / Percent best rank
0 steps: 0.0231 ± 0.0024 / 0.63%
1 step: 0.0195 ± 0.0021 / 22.97%
5 steps: 0.0200 ± 0.0023 / 15.31%
10 steps: 0.0181 ± 0.0018 / 61.09%
Table 3: GANs trained with unrolling are better able to match images in the training set than standard GANs, likely due to mode dropping by the standard GAN. Results show the MSE between training images and the best reconstruction for a model with the given number of unrolling steps. The fraction of training images best reconstructed by a given model is given in the final column. The best reconstruction is found by optimizing the latent representation z to produce the closest matching pixel output G (z; θG). Results are averaged over all 5 runs of each model with different random seeds.
# INFERENCE VIA OPTIMIZATION | 1611.02163#33 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 33 | M. Campbell, A. J. Hoane, and F.-h. Hsu. Deep blue. Artificial Intelligence, 134(1):57–83, 2002.
R. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8. Citeseer, 1996.
I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.
M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI), page 4246, 2016.
libRetro site. Libretro. www.libretro.com. Accessed: 2016-11-03.
M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, pages 157–163, 1994.
M. L. Littman. Value-function reinforcement learning in markov games. Cognitive Systems Re- search, 2(1):55â66, 2001. | 1611.02205#33 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 33 | Q-Prop on top of TRPO-GAE primarily for the purpose of enabling a fair comparison in the experiments, but com- bining Q-Prop with other on-policy update schemes and off-policy critic training methods is an interesting direction for future work. For example, Q-Prop can also be used with other on-policy policy gradient methods such as A3C (Mnih et al., 2016) and off-policy advantage estimation meth- ods such as Retrace(A) (Munos et al., 2016), GTD2 (Sutton et al., 2009), emphatic TD (Sutton et al., 2015), and WIS-LSTD (Mahmood et al., 2014). 5 EXPERIMENTS iN (a) (b) (c) (d) (e) () (g) Figure 1: Illustrations of OpenAI Gym MuJoCo domains (Brockman et al., 2016; Duan et al., 2016): (a) Ant, (b) HalfCheetah, (c) Hopper, (d) Humanoid, (e) Reacher, (f) Swimmer, (g) Walker. We evaluated Q-Prop and its variants on continuous control environments from the OpenAI Gym benchmark (Brockman et al., 2016) using the MuJoCo | 1611.02247#33 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 34 | 8
Published as a conference paper at ICLR 2017
Learning to Infer. Very recently, Alemi et al. (2016) used neural sequence models in tandem with an automated theorem prover. Similar to our Sort and Add strategy, a neural network component is trained to select premises that the theorem prover can use to prove a theorem. A recent extension (Loos et al., 2017) is similar to our DFS enumeration strategy and uses a neural network to guide the proof search at intermediate steps. The main differences are in the domains, and that they train on an existing corpus of theorems. More broadly, if we view a DSL as defining a model and search as a form of inference algorithm, then there is a large body of work on using discriminatively-trained models to aid inference in generative models. Examples include Dayan et al. (1995); Kingma & Welling (2014); Shotton et al. (2013); Stuhlmüller et al. (2013); Heess et al. (2013); Jampani et al. (2015).
# 7 DISCUSSION AND FUTURE WORK | 1611.01989#34 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 34 | # INFERENCE VIA OPTIMIZATION
Since likelihood cannot be tractably computed, over-fitting of GANs is typically tested by taking samples and computing the nearest-neighbor images in pixel space from the training data (Goodfellow et al., 2014). We will do the reverse, and measure the ability of the generative model to generate images that look like specific samples from the training data. If we did this by generating random samples from the model, we would need an exponentially large number of samples. We instead treat finding the nearest neighbor xnearest to a target image xtarget as an optimization task,
znearest = argmin_z ||G (z; θG) − xtarget||₂²   (13)
xnearest = G (znearest; θG) .   (14)
This concept of backpropagating to generate images has been widely used in visualizing features from discriminative networks (Simonyan et al., 2013; Yosinski et al., 2015; Nguyen et al., 2016) and has been applied to explore the visual manifold of GANs in (Zhu et al., 2016). | 1611.02163#34 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02205 | 34 | M. L. Littman. Value-function reinforcement learning in Markov games. Cognitive Systems Research, 2(1):55–66, 2001.
M. J. Matarić. Reinforcement learning in the multi-robot domain. In Robot colonies, pages 73–83. Springer, 1997.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. Szafron. A world championship caliber checkers program. Artificial Intelligence, 53(2):273–289, 1992.
S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-term prediction. arXiv preprint arXiv:1602.01580, 2016. | 1611.02205#34 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 34 | Walker. We evaluated Q-Prop and its variants on continuous control environments from the OpenAI Gym benchmark (Brockman et al., 2016) using the MuJoCo physics simulator (Todorov et al., 2012) as shown in Figure 1. Algorithms are identified by acronyms, followed by a number indicating batch size, except for DDPG, which is a prior online actor-critic algorithm (Lillicrap et al., 2016). "c-" and "y-" denote conservative and aggressive Q-Prop variants as described in Section 3.2. "TR-" denotes trust-region policy optimization (Schulman et al., 2015), while "V-" denotes vanilla policy gradient. For example, "TR-c-Q-Prop-5000" means conservative Q-Prop with the trust-region policy update, and a batch size of 5000. "VPG" and "TRPO" are vanilla policy gradient and trust-region policy optimization respectively (Schulman et al., 2016; Duan et al., 2016). Unless otherwise stated, all policy gradient methods are implemented with GAE(λ = 0.97) (Schulman et al., 2016); a brief GAE(λ) sketch follows this entry. Note that TRPO- | 1611.02247#34 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
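Since the entry above only names GAE(λ = 0.97), here is a hedged sketch of generalized advantage estimation for a single trajectory; the discount γ = 0.99 and the array interface are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.97):
    """GAE(lambda): delta_t = r_t + gamma * V(s_{t+1}) - V(s_t),
    A_t = sum_{l>=0} (gamma * lam)^l * delta_{t+l}.
    `values` has length T + 1 and includes a bootstrap value for the final state."""
    T = len(rewards)
    advantages = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages
```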
1611.01989 | 35 | # 7 DISCUSSION AND FUTURE WORK
We have presented a framework for improving IPS systems by using neural networks to translate cues in input-output examples to guidance over where to search in program space. Our empirical results show that for many programs, this technique improves the runtime of a wide range of IPS baselines by 1-3 orders of magnitude. We have found several problems in real online programming challenges that can be solved with a program in our language, which validates the relevance of the class of problems that we have studied in this work. In sum, this suggests that we have made significant progress towards being able to solve programming competition problems, and the machine learning component plays an important role in making it tractable. A toy sketch of this prediction-guided search idea follows this entry. | 1611.01989#35 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
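The following toy sketch (not the paper's DSL, model, or search implementation) illustrates the guidance idea referenced above: primitives that a hypothetical model scores as more likely are tried first during enumerative search over short programs. All primitive names, scores, and examples here are invented for illustration.

```python
from itertools import product

def guided_enumeration(primitives, scores, examples, max_len=3):
    """Toy guided search: try compositions of unary primitives, highest-scored first,
    and return the first program consistent with all input-output examples."""
    ranked = sorted(primitives, key=lambda f: -scores.get(f.__name__, 0.0))
    for length in range(1, max_len + 1):
        for prog in product(ranked, repeat=length):
            def run(x):
                for f in prog:
                    x = f(x)
                return x
            if all(run(i) == o for i, o in examples):
                return [f.__name__ for f in prog]
    return None

# Hypothetical primitives and model scores; the examples ask for "sort descending".
def sort_asc(xs): return sorted(xs)
def reverse(xs): return list(reversed(xs))
def double(xs): return [2 * x for x in xs]

examples = [([3, 1, 2], [3, 2, 1]), ([5, 4], [5, 4])]
print(guided_enumeration([sort_asc, reverse, double],
                         {"sort_asc": 0.9, "reverse": 0.8, "double": 0.1},
                         examples))  # -> ['sort_asc', 'reverse']
```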
1611.02163 | 35 | We apply this technique to each of the models trained. We optimize with 3 random starts using LBFGS, which is the optimizer typically used in similar settings such as style transfer (Johnson et al., 2016; Champandard, 2016). Results comparing average mean squared errors between xnearest and xtarget in pixel space can be found in Table 3. In addition we compute the percent of images for which a certain configuration achieves the lowest loss when compared to the other configurations.
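A hedged sketch of this evaluation protocol follows (LBFGS on z with a few random restarts, then per-configuration MSE and "percent best" tallies). The function names, the way restarts are aggregated, and the LBFGS iteration budget are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def best_reconstruction_error(G, x_target, z_dim, n_restarts=3, lbfgs_steps=100):
    """Reconstruction MSE for one generator: run LBFGS on z from several random starts
    and keep the lowest error."""
    best = float("inf")
    for _ in range(n_restarts):
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.LBFGS([z], max_iter=lbfgs_steps)

        def closure():
            opt.zero_grad()
            loss = ((G(z) - x_target) ** 2).mean()
            loss.backward()
            return loss

        opt.step(closure)
        with torch.no_grad():
            best = min(best, ((G(z) - x_target) ** 2).mean().item())
    return best

def compare_configurations(generators, targets, z_dim):
    """Per configuration (e.g. 0/1/5/10 unrolling steps): average MSE and how often that
    configuration attains the lowest error on each target image."""
    errors = {name: [best_reconstruction_error(G, x, z_dim) for x in targets]
              for name, G in generators.items()}
    mean_mse = {name: sum(v) / len(v) for name, v in errors.items()}
    wins = {name: 0 for name in generators}
    for i in range(len(targets)):
        wins[min(errors, key=lambda n: errors[n][i])] += 1
    return mean_mse, wins
```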
In the zero step case, there is poor reconstruction and less than 1% of the time does it obtain the lowest error of the 4 configurations. Taking 1 unrolling step results in a significant improvement in MSE. Taking 10 unrolling steps results in more modest improvement, but continues to reduce the reconstruction MSE. | 1611.02163#35 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |