doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.02205 | 35 | D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
G. Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.
J. Togelius, S. Karakovskiy, J. Koutník, and J. Schmidhuber. Super Mario evolution. In 2009 IEEE Symposium on Computational Intelligence and Games, pages 156–161. IEEE, 2009.
Universe. Universe. universe.openai.com, 2016. Accessed: 2016-12-13.
H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. CoRR, abs/1509.06461, 2015.
Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015. | 1611.02205#35 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 35 | 2016). Unless otherwise stated, all policy gradient methods are implemented with GAE(λ = 0.97) (Schulman et al., 2016). Note that TRPO-GAE is currently the state-of-the-art method on most of the OpenAI Gym benchmark tasks, though our experiments show that a well-tuned DDPG implementation sometimes achieves better results. Our algorithm implementations are built on top of the rllab TRPO and DDPG codes from Duan et al. (2016) and available at https://github.com/shaneshixiang/rllabplusplus. Policy and value function architectures and other training details including hyperparameter values are provided in Appendix D. 5.1 ADAPTIVE Q-PROP First, it is useful to identify how reliable each variant of Q-Prop is. In this section, we analyze standard Q-Prop and two adaptive variants, c-Q-Prop and a-Q-Prop, and demonstrate the stability of the method across different batch sizes. Figure 2a shows a comparison of Q-Prop variants with trust-region updates on the HalfCheetah-v1 domain, along with the best performing TRPO hyperparameters. The results are consistent with theory: conservative Q-Prop achieves | 1611.02247#35 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
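The Q-Prop chunk above notes that all policy gradient baselines are implemented with GAE(λ = 0.97). As a reference point, here is a minimal NumPy sketch of generalized advantage estimation; the array names and the toy rollout are illustrative assumptions, not code from the paper's rllab-based implementation.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.97):
    """Generalized Advantage Estimation over a single trajectory.

    rewards: shape (T,) rewards r_t
    values:  shape (T+1,) value estimates V(s_0..s_T); values[T] is the
             bootstrap value of the final state (0 if the episode terminated).
    """
    T = len(rewards)
    deltas = rewards + gamma * values[1:] - values[:-1]   # TD residuals
    advantages = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):                          # discounted sum of residuals
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

# toy rollout, purely for illustration
rewards = np.array([1.0, 0.0, 0.5])
values = np.array([0.8, 0.7, 0.6, 0.0])
print(gae_advantages(rewards, values))
```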
1611.01989 | 36 | There remain some limitations, however. First, the programs we can synthesize are only the simplest problems on programming competition websites and are simpler than most competition problems. Many problems require more complex algorithmic solutions like dynamic programming and search, which are currently beyond our reach. Our chosen DSL currently cannot express solutions to many problems. To do so, it would need to be extended by adding more primitives and allow for more flexibility in program constructs (such as allowing loops). Second, we currently use five input-output examples with relatively large integer values (up to 256 in magnitude), which are probably more informative than typical (smaller) examples. While we remain optimistic about LIPS's applicability as the DSL becomes more complex and the input-output examples become less informative, it remains to be seen what the magnitude of these effects is as we move towards solving large subsets of programming competition problems.
We foresee many extensions of DeepCoder. We are most interested in better data generation procedures by using generative models of source code, and in incorporating natural language problem descriptions to lessen the information burden required from input-output examples. In sum, DeepCoder represents a promising direction forward, and we are optimistic about the future prospects of using machine learning to synthesize programs.
# ACKNOWLEDGMENTS | 1611.01989#36 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
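The DeepCoder abstract above describes using the network's predictions to guide enumerative search over DSL programs. The sketch below illustrates that idea on an invented toy DSL, widening the active set of primitives in order of predicted probability; the DSL, the probabilities, and all helper names are assumptions for illustration, not the paper's actual DSL or search code.

```python
import itertools

# Toy DSL of unary list -> list primitives (stand-ins for the paper's DSL).
DSL = {
    "sort":     sorted,
    "reverse":  lambda xs: xs[::-1],
    "double":   lambda xs: [2 * x for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}

def run(program, xs):
    for name in program:
        xs = DSL[name](xs)
    return xs

def consistent(program, examples):
    return all(run(program, i) == o for i, o in examples)

def guided_search(pred_probs, examples, max_depth=3):
    """Enumerate compositions of DSL primitives, growing the active set of
    primitives in decreasing order of predicted probability."""
    ranked = sorted(pred_probs, key=pred_probs.get, reverse=True)
    for k in range(1, len(ranked) + 1):
        active = ranked[:k]
        for depth in range(1, max_depth + 1):
            for program in itertools.product(active, repeat=depth):
                if consistent(program, examples):
                    return program
    return None

# Hypothetical network output: 'double' and 'reverse' are predicted as likely.
probs = {"double": 0.9, "reverse": 0.7, "sort": 0.2, "drop_neg": 0.1}
examples = [([1, 2, 3], [6, 4, 2])]
print(guided_search(probs, examples))  # finds ('double', 'reverse') quickly
```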
1611.02163 | 36 | To visually see this, we compare the result of the optimization process for 0, 1, 5, and 10 step configurations in Figure [5]. To select for images where differences in behavior are most apparent, we sort the data by the absolute value of the fractional difference in MSE between the 0 and 10 step models, $\frac{|l_{0\,\mathrm{step}} - l_{10\,\mathrm{step}}|}{\frac{1}{2}\left(l_{0\,\mathrm{step}} + l_{10\,\mathrm{step}}\right)}$. This highlights examples where either the 0 or 10 step model cannot accurately fit the data example but the other can. In Appendix [G] we show the same comparison for models initialized using different random seeds. Many of the zero step images are fuzzy and ill-defined, suggesting that these images cannot be generated by the standard GAN generative model, and come from a dropped mode. As more unrolling steps are added, the outlines become more clear and well defined: the model covers more of the distribution and thus can recreate these samples.
# 3.4.2 PAIRWISE DISTANCES | 1611.02163#36 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
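The Unrolled GANs excerpt in the row above ranks training images by the fractional difference in reconstruction MSE between the 0-step and 10-step models. A minimal sketch of that ranking criterion, assuming per-image MSE arrays have already been computed for both models (the arrays below are invented for illustration):

```python
import numpy as np

def fractional_mse_gap(mse_0step, mse_10step):
    """|l_0 - l_10| / (0.5 * (l_0 + l_10)) for each image."""
    mse_0step = np.asarray(mse_0step, dtype=float)
    mse_10step = np.asarray(mse_10step, dtype=float)
    return np.abs(mse_0step - mse_10step) / (0.5 * (mse_0step + mse_10step))

# rank images so the largest disagreements between the two models come first
mse_0 = np.array([0.12, 0.03, 0.40])
mse_10 = np.array([0.02, 0.03, 0.39])
order = np.argsort(-fractional_mse_gap(mse_0, mse_10))
print(order)  # image 0 first: the 0-step model fits it far worse than the 10-step model
```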
1611.02205 | 36 | Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143, 2016.
# Appendices
Experimental Results
Table 3: Average results of DQN, D-DQN, Dueling D-DQN and a Human player
Game | DQN | D-DQN | Dueling D-DQN | Human
---|---|---|---|---
F-Zero | 3116 | 3636 | 5161 | 6298
Gradius III | 7583 | 12343 | 16929 | 24440
Mortal Kombat | 83733 | 56200 | 169300 | 132441
Super Mario | 11765 | 16946 | 20030 | 36386
Wolfenstein | 100 | 83 | 40 | 2952
| 1611.02205#36 | Playing SNES in the Retro Learning Environment | Mastering a video game requires skill, tactics and strategy. While these
attributes may be acquired naturally by human players, teaching them to a
computer program is a far more challenging task. In recent years, extensive
research was carried out in the field of reinforcement learning and numerous
algorithms were introduced, aiming to learn how to perform human tasks such as
playing video games. As a result, the Arcade Learning Environment (ALE)
(Bellemare et al., 2013) has become a commonly used benchmark environment
allowing algorithms to train on various Atari 2600 games. In many games the
state-of-the-art algorithms outperform humans. In this paper we introduce a new
learning environment, the Retro Learning Environment --- RLE, that can run
games on the Super Nintendo Entertainment System (SNES), Sega Genesis and
several other gaming consoles. The environment is expandable, allowing for more
video games and consoles to be easily added to the environment, while
maintaining the same interface as ALE. Moreover, RLE is compatible with Python
and Torch. SNES games pose a significant challenge to current algorithms due to
their higher level of complexity and versatility. | http://arxiv.org/pdf/1611.02205 | Nadav Bhonker, Shai Rozenberg, Itay Hubara | cs.LG, cs.AI | null | null | cs.LG | 20161107 | 20170207 | [
{
"id": "1609.05143"
},
{
"id": "1511.06581"
},
{
"id": "1602.01580"
},
{
"id": "1606.01540"
},
{
"id": "1606.01868"
}
] |
1611.02247 | 36 | updates on the HalfCheetah-v1 domain, along with the best performing TRPO hyperparameters. The results are consistent with theory: conservative Q-Prop achieves much more stable performance than the standard and aggressive variants, and all Q-Prop variants significantly outperform TRPO in terms of sample efficiency, e.g. conservative Q-Prop reaches an average reward of 4000 using about 10 times fewer samples than TRPO. | 1611.02247#36 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 37 | # ACKNOWLEDGMENTS
The authors would like to express their gratitude to Rishabh Singh and Jack Feser for their valuable guidance and help on using the Sketch and λ2 program synthesis systems.
# REFERENCES
Alex A. Alemi, François Chollet, Geoffrey Irving, Christian Szegedy, and Josef Urban. DeepMath - deep sequence models for premise selection. In Proceedings of the 29th Conference on Advances in Neural Information Processing Systems (NIPS), 2016.
Rudy R Bunel, Alban Desmaison, Pawan K Mudigonda, Pushmeet Kohli, and Philip Torr. Adaptive neural compilation. In Proceedings of the 29th Conference on Advances in Neural Information Processing Systems (NIPS), 2016.
Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural computation, 7(5):889–904, 1995.
Krzysztof Dembczyński, Willem Waegeman, Weiwei Cheng, and Eyke Hüllermeier. On label dependence and loss minimization in multi-label classification. Machine Learning, 88(1):5–45, 2012. | 1611.01989#37 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 37 | # 3.4.2 PAIRWISE DISTANCES
A second complementary approach is to compare statistics of data samples to the corresponding statistics for samples generated by the various models. One particularly simple and relevant statistic is the distribution over pairwise distances between random pairs of samples. In the case of mode collapse, greater probability mass will be concentrated in smaller volumes, and the distribution over inter-sample distances should be skewed towards smaller distances. We sample random pairs of images from each model, as well as from the training data, and compute histograms of the $\ell_2$ distances between those sample pairs. As illustrated in Figure 6, the standard GAN, with zero unrolling steps, has its probability mass skewed towards smaller $\ell_2$ intersample distances, compared
[Figure 5 panels: Data, 0 step, 1 step, 5 step, 10 step reconstructions.] | 1611.02163#37 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
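The pairwise-distance diagnostic described in the chunk above is straightforward to reproduce: draw random pairs of samples, compute their L2 distances, and compare the histogram against the same statistic on real data. A small sketch, with the sample arrays and bin choice as illustrative assumptions:

```python
import numpy as np

def pairwise_l2_histogram(samples, n_pairs=10000, bins=50, rng=None):
    """Histogram of L2 distances between randomly chosen pairs of samples.

    samples: array of shape (N, D) of flattened images or model outputs.
    """
    rng = np.random.default_rng(rng)
    n = len(samples)
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    dists = np.linalg.norm(samples[i] - samples[j], axis=1)
    return np.histogram(dists, bins=bins, density=True)

# Mode collapse shows up as probability mass shifted toward small distances
# when the model histogram is compared with the data histogram.
data = np.random.rand(1000, 784)                   # stand-in for real images
model = 0.5 + 0.01 * np.random.randn(1000, 784)    # stand-in for a collapsed generator
hist_data, _ = pairwise_l2_histogram(data, rng=0)
hist_model, _ = pairwise_l2_histogram(model, rng=0)
```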
1611.02247 | 37 | [Figure 2 panels: (a) Standard Q-Prop vs adaptive variants. (b) Conservative Q-Prop vs TRPO across batch sizes; average return vs. episodes.] Figure 2: Average return over episodes in HalfCheetah-v1 during learning, exploring adaptive Q-Prop methods and different batch sizes. All variants of Q-Prop substantially outperform TRPO in terms of sample efficiency. TR-c-QP, conservative Q-Prop with trust-region update, performs most stably across different batch sizes. Figure 2b shows the performance of conservative Q-Prop against TRPO across different batch sizes. Due to high variance in gradient estimates, TRPO typically requires | 1611.02247#37 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 38 | Krzysztof J. Dembczyński, Weiwei Cheng, and Eyke Hüllermeier. Bayes optimal multilabel classification via probabilistic classifier chains. In Proceedings of the 27th International Conference on Machine Learning (ICML), 2010.
John K. Feser, Swarat Chaudhuri, and Isil Dillig. Synthesizing data structure transformations from input-output examples. In Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2015.
Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. Terpret: A probabilistic programming language for program induction. CoRR, abs/1608.04428, 2016. URL http://arxiv.org/abs/1608.04428.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401. | 1611.01989#38 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 38 | [Figure 5 panels: Data, 0 step, 1 step, 5 step, 10 step reconstructions.]
Figure 5: Training set images are more accurately reconstructed using GANs trained with unrolling than by a standard (0 step) GAN, likely due to mode dropping by the standard GAN. Raw data is on the left, and the optimized images to reach this target follow for 0, 1, 5, and 10 unrolling steps. The reconstruction MSE is listed below each sample. A random 1280 images were selected from the training set, and corresponding best reconstructions for each model were found via optimization. Shown here are the eight images with the largest absolute fractional difference between GANs trained with 0 and 10 unrolling steps.
to real data. As the number of unrolling steps is increased, the histograms over intersample distances increasingly come to resemble that for the data distribution. This is further evidence in support of unrolling decreasing the mode collapse behavior of GANs.
# 4 DISCUSSION
In this work we developed a method to stabilize GAN training and reduce mode collapse by defining the generator objective with respect to unrolled optimization of the discriminator. We then demonstrated the application of this method to several tasks, where it either rescued unstable training, or reduced the tendency of the model to drop regions of the data distribution. | 1611.02163#38 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
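The discussion chunk above summarizes the paper's core construction: the generator is trained against a discriminator that has been unrolled for a few optimization steps. The sketch below shows one way to express that surrogate with differentiable inner updates in PyTorch; the tiny two-layer discriminator, the plain gradient steps (rather than the optimizer actually unrolled in the paper's experiments), and all parameter names are simplifying assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def d_logits(x, params):
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

def d_loss(x_real, x_fake, params):
    # Discriminator cross-entropy: minimized by D, so a gradient *descent*
    # step below corresponds to the usual discriminator update.
    return (F.softplus(-d_logits(x_real, params)).mean()
            + F.softplus(d_logits(x_fake, params)).mean())

def unroll_discriminator(x_real, x_fake, params, k, lr):
    # k differentiable gradient steps; create_graph=True lets the generator
    # backpropagate through the unrolled discriminator updates.
    params = list(params)
    for _ in range(k):
        grads = torch.autograd.grad(d_loss(x_real, x_fake, params),
                                    params, create_graph=True)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

def generator_surrogate_loss(x_real, x_fake, d_params, k=5, lr=1e-2):
    # Generator objective f_K: evaluate the GAN objective at the unrolled
    # discriminator parameters (k=0 recovers the standard GAN update).
    unrolled = unroll_discriminator(x_real, x_fake, d_params, k, lr)
    return -d_loss(x_real, x_fake, unrolled)

# x_fake should come from the generator with gradients enabled, e.g. x_fake = G(z);
# d_params are leaf tensors with requires_grad=True so autograd.grad works.
```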
1611.02247 | 38 | different batch sizes. Figure 2b shows the performance of conservative Q-Prop against TRPO across different batch sizes. Due to high variance in gradient estimates, TRPO typically requires very large batch sizes, e.g. 25000 steps or 25 episodes per update, to perform well. We show that our Q-Prop methods can learn even with just 1 episode per update, and achieve better sample efficiency with small batch sizes. This shows that Q-Prop significantly reduces the variance compared to the prior methods. As we discussed in Section 1, stability is a significant challenge with state-of-the-art deep RL methods, and is very important for being able to reliably use deep RL for real world tasks. In the rest of the experiments, we will use conservative Q-Prop as the main Q-Prop implementation. 5.2 EVALUATION ACROSS ALGORITHMS [Figure 3: average return vs. episodes; legend includes DDPG-0.1, DDPG-1.0, TRPO-05000, v-c-Q-Prop-05000, TR-c-Q-Prop-05000.] | 1611.02247#38 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
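The abstract repeated in these rows describes Q-Prop's key trick: subtracting a Taylor expansion of the critic as a control variate from the Monte Carlo term and adding its analytic expectation back. The toy 1-D sketch below demonstrates that variance-reduction mechanism with a known function standing in for the critic; the Gaussian policy, the function f, and the sample size are illustrative assumptions, not the paper's actual state-dependent estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.0
f = lambda a: np.sin(a) + 0.1 * a**2          # stand-in for the critic Q(s, a)
f_prime = lambda a: np.cos(a) + 0.2 * a       # its analytic derivative

a = rng.normal(mu, sigma, size=100000)
score = (a - mu) / sigma**2                   # d/d_mu log N(a; mu, sigma^2)

# plain score-function (likelihood-ratio) estimator of d/d_mu E[f(a)]
plain = f(a) * score

# control variate: first-order Taylor expansion of f around a_bar = mu,
# c(a) = f(mu) + f'(mu) * (a - mu); the gradient of its expectation is f'(mu),
# which is added back analytically so the estimator stays unbiased
c = f(mu) + f_prime(mu) * (a - mu)
with_cv = (f(a) - c) * score + f_prime(mu)

print("means:", plain.mean(), with_cv.mean())       # both estimate the same gradient
print("variances:", plain.var(), with_cv.var())     # the control variate has much lower variance
```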
1611.01989 | 39 | Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Proceedings of the 28th Conference on Advances in Neural Information Processing Systems (NIPS), 2015.
Sumit Gulwani. Programming by examples: Applications, algorithms, and ambiguity resolution. In Proceedings of the 8th International Joint Conference on Automated Reasoning (IJCAR), 2016.
Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkatesan. Synthesis of loop-free programs. In Proceedings of the 32nd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2011.
Nicolas Heess, Daniel Tarlow, and John Winn. Learning to pass expectation propagation messages. In Proceedings of the 26th Conference on Advances in Neural Information Processing Systems (NIPS), 2013. | 1611.01989#39 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 39 | The main drawback to this method is the computational cost of each training step, which increases linearly with the number of unrolling steps. There is a tradeoff between better approximating the true generator loss and the computation required to make this estimate. Depending on the architecture, one unrolling step can be enough. In other more unstable models, such as the RNN case, more are needed to stabilize training. We have some initial positive results suggesting it may be sufficient to further perturb the training gradient in the same direction that a single unrolling step perturbs it. While this is more computationally efficient, further investigation is required.
The method presented here bridges some of the gap between theoretical and practical results for training of GANs. We believe developing better update rules for the generator and discriminator is an important line of work for GAN training. In this work we have only considered a small fraction of the design space. For instance, the approach could be extended to unroll G when updating D as well â letting the discriminator react to how the generator would move. It is also possible to unroll sequences of G and D updates. This would make updates that are recursive: G could react to maximize performance as if G and D had already updated.
# ACKNOWLEDGMENTS | 1611.02163#39 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 39 | [Figure 3 panels: (a) Comparing algorithms on HalfCheetah-v1. (b) Comparing algorithms on Humanoid-v1; average return vs. episodes.] Figure 3: Average return over episodes in HalfCheetah-v1 and Humanoid-v1 during learning, comparing Q-Prop against other model-free algorithms. Q-Prop with vanilla policy gradient outperforms TRPO on HalfCheetah. Q-Prop significantly outperforms TRPO in convergence time on Humanoid. In this section, we evaluate two versions of conservative Q-Prop, v-c-Q-Prop using vanilla policy gradient and TR-c-Q-Prop using trust-region updates, against other model-free algorithms on the HalfCheetah-v1 domain. Figure 3a shows that c-Q-Prop methods significantly outperform the best TRPO and VPG methods. Even Q-Prop with vanilla policy gradient is comparable to TRPO, confirming the significant benefits from variance reduction. DDPG on the other hand exhibits inconsistent performances. With proper reward scaling, i.e. | 1611.02247#39 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 40 | Varun Jampani, Sebastian Nowozin, Matthew Loper, and Peter V Gehler. The informed sampler: A discriminative approach to Bayesian inference in generative computer vision models. Computer Vision and Image Understanding, 136:32â44, 2015.
Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proceedings of the 28th Conference on Advances in Neural Information Processing Systems (NIPS), 2015.
Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In Proceedings of the 4th International Conference on Learning Representations, 2016.
Diederik P Kingma and Max Welling. Stochastic gradient VB and the variational auto-encoder. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. Proceedings of the 4th International Conference on Learning Representations 2016, 2015.
Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. Gated graph sequence neural networks. In Proceedings of the 4th International Conference on Learning Representations (ICLR), 2016. | 1611.01989#40 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 40 | # ACKNOWLEDGMENTS
We would like to thank Laurent Dinh, David Dohan, Vincent Dumoulin, Liam Fedus, Ishaan Gulrajani, Julian Ibarz, Eric Jang, Matthew Johnson, Marc Lanctot, Augustus Odena, Gabriel Pereyra,
[Figure 6 plots: Pairwise L2 Norm Distribution; legend: data, 1 step, 5 step, 10 step; x-axis: l2 norm.]
Figure 6: As the number of unrolling steps in GAN training is increased, the distribution of pairwise distances between model samples more closely resembles the same distribution for the data. Here we plot histograms of pairwise distances between randomly selected samples. The red line gives pairwise distances in the data, while each of the five blue lines in each plot represents a model trained with a different random seed. The vertical lines are the medians of each distribution. | 1611.02163#40 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 40 | significant benefits from variance reduction. DDPG on the other hand exhibits inconsistent performances. With proper reward scaling, i.e. "DDPG-r0.1", it outperforms other methods as well as the DDPG results reported in prior work (Duan et al., 2016; Amos et al., 2016). This illustrates the sensitivity of DDPG to hyperparameter settings, while Q-Prop exhibits more stable, monotonic learning behaviors when compared to DDPG. In the next section we show this improved stability allows Q-Prop to outperform DDPG in more complex domains. | 1611.02247#40 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 41 | Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016.
Sarah M. Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof search. CoRR, abs/1701.06972, 2017. URL http://arxiv.org/abs/1701.06972.
Aditya Krishna Menon, Omer Tamuz, Sumit Gulwani, Butler W Lampson, and Adam Kalai. A machine learning framework for programming by example. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of the 4th International Conference on Learning Representations (ICLR), 2016. | 1611.01989#41 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 41 | Colin Raffel, Sam Schoenholz, Ayush Sekhari, Jon Shlens, and Dale Schuurmans for insightful conversation, as well as the rest of the Google Brain Team.
# REFERENCES
Guillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric Thibodeau-Laufer, Saizheng Zhang, and Pascal Vincent. GSNs: generative stochastic networks. arXiv preprint arXiv:1503.05571, 2015.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
David Belanger and Andrew McCallum. Structured prediction energy networks. arXiv preprint arXiv:1511.06350, 2015.
Jorg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional helmholtz machines. arXiv preprint arXiv:1506.03877, 2015. | 1611.02163#41 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 41 | 5.3 EVALUATION ACROSS DOMAINS
Lastly, we evaluate Q-Prop against TRPO and DDPG across multiple domains. While the gym environments are biased toward locomotion, we expect we can achieve similar performance on manipulation tasks such as those in Lillicrap et al. (2016). Table 1 summarizes the results, including the best attained average rewards and the steps to convergence. Q-Prop consistently outperforms TRPO in terms of sample complexity and sometimes achieves higher rewards than DDPG in more complex domains. A particularly notable case is shown in Figure 3b, where Q-Prop substantially improves sample efficiency over TRPO on the Humanoid-v1 domain, while DDPG cannot find a good solution.
The better performance on the more complex domains highlights the importance of stable deep RL algorithms: while costly hyperparameter sweeps may allow even less stable algorithms to perform well on simpler problems, more complex tasks might have such narrow regions of stable hyperparameters that discovering them becomes impractical. | 1611.02247#41 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 42 | Chris Piech, Jonathan Huang, Andy Nguyen, Mike Phulsuksombati, Mehran Sahami, and Leonidas J. Guibas. Learning program embeddings to propagate feedback on student code. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
Oleksandr Polozov and Sumit Gulwani. FlashMeta: a framework for inductive program synthesis. In Proceedings of the International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), 2015.
Scott E. Reed and Nando de Freitas. Neural programmer-interpreters. In Proceedings of the 4th International Conference on Learning Representations (ICLR), 2016.
Sebastian Riedel, Matko Bosnjak, and Tim Rocktäschel. Programming with a differentiable Forth interpreter. CoRR, abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640.
Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic program optimization. Communications of the ACM, 59(2):114–122, 2016. | 1611.01989#42 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 42 | Michael Bowling and Manuela Veloso. Multiagent learning using a variable learning rate. Artificial Intelligence, 136(2):215–250, 2002.
Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
Alex J. Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. arXiv preprint arXiv:1603.01768, 2016.
Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv: 1612.02136, 2016.
InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
John M Danskin. The theory of max-min and its application to weapons allocation problems, volume 5. Springer Science & Business Media, 1967.
Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural computation, 7(5):889–904, 1995. | 1611.02163#42 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 42 |
Domain | Threshold | TR-c-Q-Prop MaxReturn | TR-c-Q-Prop Episodes | TRPO MaxReturn | TRPO Episodes | DDPG MaxReturn | DDPG Episodes
---|---|---|---|---|---|---|---
Ant | 3500 | 3534 | 4975 | 4239 | 13825 | 957 |
HalfCheetah | 4700 | 4811 | 20785 | 4734 | 26370 | 7490 | 600
Hopper | 2000 | 2957 | 5945 | 2486 | 5715 | 2604 | 965
Humanoid | 2500 | >3492 | 14750 | 918 | >30000 | 552 |
Reacher | -7 | -6.0 | 2060 | -6.7 | 2840 | -6.6 | 1800
Swimmer | 90 | 103 | 2045 | 110 | 3025 | 150 | 500
Walker | 3000 | 4030 | 3685 | 3567 | 18875 | 3626 | 2125
Table 1: Q-Prop, TRPO and DDPG results showing the max average rewards attained in the first 30k episodes and the episodes to cross specific reward thresholds. Q-Prop often learns more sample efficiently than TRPO and can solve difficult domains such as Humanoid better than DDPG.
# 6 DISCUSSION AND CONCLUSION | 1611.02247#42 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 43 | Jamie Shotton, Toby Sharp, Alex Kipman, Andrew Fitzgibbon, Mark Finocchio, Andrew Blake, Mat Cook, and Richard Moore. Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116â124, 2013.
Rishabh Singh and Sumit Gulwani. Predicting a correct program in programming by example. In Proceedings of the 27th Conference on Computer Aided Verification (CAV), 2015.
Armando Solar-Lezama. Program Synthesis By Sketching. PhD thesis, EECS Dept., UC Berkeley, 2008.
Andreas Stuhlmüller, Jessica Taylor, and Noah D. Goodman. Learning stochastic inverses. In Proceedings of the 26th Conference on Advances in Neural Information Processing Systems (NIPS), 2013.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In Proceedings of the 28th Conference on Advances in Neural Information Processing Systems (NIPS), 2015.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015. | 1611.01989#43 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 43 | Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components esti- mation. arXiv preprint arXiv:1410.8516, 2014.
Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. arXiv preprint arXiv:1605.08803, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Xavier Glorot and Yoshua Bengio. Understanding the difï¬culty of training deep feedforward neural networks. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artiï¬cial Intelligence and Statistics (AISTATS 2010), volume 9, pp. 249â256, May 2010. | 1611.02163#43 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 43 | # 6 DISCUSSION AND CONCLUSION
We presented Q-Prop, a policy gradient algorithm that combines reliable, consistent, and poten- tially unbiased on-policy gradient estimation with a sample-efficient off-policy critic that acts as a control variate. The method provides a large improvement in sample efficiency compared to state- of-the-art policy gradient methods such as TRPO, while outperforming state-of-the-art actor-critic methods on more challenging tasks such as humanoid locomotion. We hope that techniques like these, which combine on-policy Monte Carlo gradient estimation with sample-efficient variance re- duction through off-policy critics, will eventually lead to deep reinforcement learning algorithms that are more stable and efficient, and therefore better suited for application to complex real-world learning tasks.
ACKNOWLEDGMENTS
We thank Rocky Duan for sharing and answering questions about rllab code, and Yutian Chen and Laurent Dinh for discussion on control variates. SG and RT were funded by NSERC, Google, and EPSRC grants EP/L000776/1 and EP/M026957/1. ZG was funded by EPSRC grant EP/JO12300/1 and the Alan Turing Institute (EP/N510129/1).
# REFERENCES | 1611.02247#43 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 44 | Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015.
Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.
A EXAMPLE PROGRAMS
This section shows example programs in our Domain Specific Language (DSL), together with input-output examples and short descriptions. These programs have been inspired by simple tasks appearing on real programming competition websites, and are meant to illustrate the expressive power of our DSL.
Program 0:
k ← int
b ← [int]
c ← SORT b
d ← TAKE k c
e ← SUM d
Input-output example: Input: 2, [3 5 4 7 5] Output: 7
Description: A new shop near you is selling n paintings. You have k < n friends and you would like to buy each of your friends a painting from the shop. Return the minimal amount of money you will need to spend.
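As a quick illustration (not part of the paper), the following Python sketch gives one natural reading of Program 0's primitives and checks it against the example above; the function names mirror the DSL, and their Python definitions here are assumptions.

```python
# Hypothetical Python renditions of the DSL primitives used in Program 0.
SORT = sorted                    # SORT :: [int] -> [int]
TAKE = lambda k, xs: xs[:k]      # TAKE :: int -> [int] -> [int]
SUM = sum                        # SUM  :: [int] -> int

def program_0(k, b):
    c = SORT(b)
    d = TAKE(k, c)
    e = SUM(d)
    return e

# The two cheapest paintings cost 3 and 4.
assert program_0(2, [3, 5, 4, 7, 5]) == 7
```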
Program 1:
w ← [int]
t ← [int]
c ← MAP (*3) w
d ← ZIPWITH (+) c t
e ← MAXIMUM d | 1611.01989#44 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 44 | Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Pro- cessing Systems 27, pp. 2672â2680. Curran Associates, Inc., 2014. URL http://papers. nips.cc/paper/5423-generative-adversarial-nets.pdf.
Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1462-1471, 2015. URL http://www.jmlr.org/proceedings/papers/v37/gregor15.html.
Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. Alternating back-propagation for generator network, 2016. URL https://arxiv.org/abs/1606.08571.
Published as a conference paper at ICLR 2017 | 1611.02163#44 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 44 | # REFERENCES
Brandon Amos, Lei Xu, and J Zico Kolter. Input convex neural networks. arXiv preprint arXiv: 1609.07152, 2016.
Christopher G Atkeson and Juan Carlos Santamaria. A comparison of direct and model-based reinforcement learning. In International Conference on Robotics and Automation. Citeseer, 1997.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv: 1606.01540, 2016.
Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pp. 465-472, 2011.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. International Conference on Machine Learning (ICML), 2016.
Miroslav Dudik, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. arXiv preprint arXiv: 1103.4601, 2011. | 1611.02247#44 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 45 | Program 1: w â [int] t â [int] c â MAP (*3) w d â ZIPWITH (+) c t e â MAXIMUM d
Input-output example: Input: [6 2 4 7 9], [5 3 6 1 0] Output: 27
Description: In soccer leagues, match winners are awarded 3 points, losers 0 points, and both teams get 1 point in the case of a tie. Compute the number of points awarded to the winner of a league given two arrays w, t of the same length, where w[i] (resp. t[i]) is the number of times team i won (resp. tied).
Program 2:
a ← [int]
b ← [int]
c ← ZIPWITH (-) b a
d ← COUNT (>0) c
Input-output example: Input: [6 2 4 7 9], [5 3 2 1 0] Output: 4
Description: Alice and Bob are comparing their results in a recent exam. Given their marks per ques- tion as two arrays a and b, count on how many questions Alice got more points than Bob.
Program 3:
h ← [int]
b ← SCANL1 MIN h
c ← ZIPWITH (-) h b
d ← FILTER (>0) c
e ← SUM d | 1611.01989#45 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 45 | 12
Published as a conference paper at ICLR 2017
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448-456, 2015. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html.
Justin Johnson, Alexandre Alahi, and Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
Anatoli Juditsky, Arkadi Nemirovski, et al. First order methods for nonsmooth convex large-scale optimization, i: general purpose methods. Optimization for Machine Learning, pp. 121â148, 2011. | 1611.02163#45 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 45 | Evan Greensmith, Peter L Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5(Nov):1471-1530, 2004.
Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. Muprop: Unbiased backpropagation for stochastic neural networks. International Conference on Learning Representations (ICLR), 2016a.
Shixiang Gu, Tim Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with model-based acceleration. In International Conference on Machine Learning (ICML), 2016b.
Hado V Hasselt. Double q-learning. In Advances in Neural Information Processing Systems, pp. 2613-2621, 2010.
Sham Kakade. A natural policy gradient. In NIPS, volume 14, pp. 1531-1538, 2001.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Guy Lever. Deterministic policy gradient algorithms. 2014.
Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning (ICML), pp. 1-9, 2013. | 1611.02247#45 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
Input-output example: Input: [8 5 7 2 5] Output: 5
Description: Perditia is very peculiar about her garden and wants that the trees standing in a row are all of non-increasing heights. Given the tree heights in centimeters in order of the row as an array h, compute how many centimeters she needs to trim the trees in total.
Program 4:
x ← [int]
y ← [int]
c ← SORT x
d ← SORT y
e ← REVERSE d
f ← ZIPWITH (*) c e
g ← SUM f
Input-output example: Input: [7 3 8 2 5], [2 8 9 1 3] Output: 79
Description: Xavier and Yasmine are laying sticks to form non-overlapping rectangles on the ground. They both have fixed sets of pairs of sticks of certain lengths (represented as arrays x and y of numbers). Xavier only lays sticks parallel to the x axis, and Yasmine lays sticks only parallel to the y axis. Compute the area their rectangles will cover at least.
Program 5:
a ← [int]
b ← REVERSE a
c ← ZIPWITH MIN a b
Input-output example: Input: [3 7 5 2 8] Output: [3 2 5 2 3] | 1611.01989#46 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 46 | Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2013. URL https: //arxiv.org/abs/1312.6114.
Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. 2016.
Tejas D. Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B. Tenenbaum. Deep convolutional inverse graphics network. arXiv preprint arXiv:1503.03167, 2015.
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network, 2016. URL https://arxiv.org/abs/1609.04802.
Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimiza- tion through reversible learning, 2015. | 1611.02163#46 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 46 | Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning (ICML), pp. 1-9, 2013.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. Interna- tional Conference on Learning Representations (ICLR), 2016.
A Rupam Mahmood, Hado P van Hasselt, and Richard S Sutton. Weighted importance sampling for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, pp. 3014-3022, 2014.
Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. International Conference on Machine Learning (ICML), 2014.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Belle- mare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. | 1611.02247#46 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 47 | Input-output example: Input: [3 7 5 2 8] Output: [3 2 5 2 3]
Description: A sequence called Billy is looking into the mirror, wondering how much weight it could lose by replacing any of its elements by their mirror images. Given a description of Billy as an array b of length n, return an array c of minimal sum where each element c[i] is either b[i] or its mirror image b[n - i - 1].
Program 6:
t ← [int]
p ← [int]
c ← MAP (-1) t
d ← MAP (-1) p
e ← ZIPWITH (+) c d
f ← MINIMUM e
IO example: Input: [4 8 11 2], [2 3 4 1] Output: 1
Description: Umberto has a large collection of ties and matching pocket squares -- too large, his wife says -- and he needs to sell one pair. Given their values as arrays t and p, assuming that he sells the cheapest pair, and selling costs 2, how much will he lose from the sale?
Program 7:
s ← [int]
p ← [int]
c ← SCANL1 (+) p
d ← ZIPWITH (*) s c
e ← SUM d | 1611.01989#47 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 47 | Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimiza- tion through reversible learning, 2015.
Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. arXiv preprint arXiv:1605.09304, 2016.
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxil- iary classiï¬er gans. arXiv preprint arXiv:1610.09585, 2016. | 1611.02163#47 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 47 | Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.
Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G Bellemare. Safe and efficient off-policy reinforcement learning. arXiv preprint arXiv: 1606.02647, 2016.
John Paisley, David Blei, and Michael Jordan. Variational bayesian inference with stochastic search. International Conference on Machine Learning (ICML), 2012.
Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In International Conference on Intelligent Robots and Systems (IROS), pp. 2219-2225. IEEE, 2006.
Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI. Atlanta, 2010.
Doina Precup. Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, pp. 80, 2000.
Sheldon M Ross. Simulation. Burlington, MA: Elsevier, 2006. | 1611.02247#47 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 48 | Program 7: s â [int] p â [int] c â SCANL1 (+) p d â ZIPWITH (*) s c e â SUM d
IO example: Input: [4 7 2 3], [2 1 3 1] Output: 62
Description: Zack always promised his n friends to buy them candy, but never did. Now he won the lottery and counts how often and how much candy he promised to his friends, obtaining arrays p (number of promises) and s (number of promised sweets). He announces that to repay them, he will buy s[1]+s[2]+...+s[n] pieces of candy for the first p[1] days, then s[2]+s[3]+...+s[n] for p[2] days, and so on, until he has fulfilled all promises. How much candy will he buy in total?
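For reference, here is a minimal sketch (an assumption about the semantics, not code from the paper) of plausible Python meanings for the higher-order functions used in these examples, checked against Program 7's input-output example:

```python
from itertools import accumulate

# Illustrative Python equivalents of the DSL functions used in Programs 1-8.
REVERSE = lambda xs: xs[::-1]
MINIMUM, MAXIMUM = min, max
MAP = lambda f, xs: [f(x) for x in xs]
FILTER = lambda p, xs: [x for x in xs if p(x)]
COUNT = lambda p, xs: sum(1 for x in xs if p(x))
ZIPWITH = lambda f, xs, ys: [f(x, y) for x, y in zip(xs, ys)]
SCANL1 = lambda f, xs: list(accumulate(xs, f))

def program_7(s, p):
    c = SCANL1(lambda x, y: x + y, p)      # SCANL1 (+) p
    d = ZIPWITH(lambda x, y: x * y, s, c)  # ZIPWITH (*) s c
    return sum(d)                          # SUM d

assert program_7([4, 7, 2, 3], [2, 1, 3, 1]) == 62
```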
Program 8:
s ← [int]
b ← REVERSE s
c ← ZIPWITH (-) b s
d ← FILTER (>0) c
e ← SUM d
IO example: Input: [1 2 4 5 7] Output: 9 | 1611.01989#48 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 48 | Barak A. Pearlmutter and Jeffrey Mark Siskind. Reverse-mode ad in a functional framework: Lambda the ultimate backpropagator. ACM Trans. Program. Lang. Syst., 30(2):7:1â7:36, March 2008. ISSN 0164-0925. doi: 10.1145/1330017.1330018. URL http://doi.acm.org/10. 1145/1330017.1330018.
Ben Poole, Alexander A Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. Improved generator objectives for gans. arXiv preprint arXiv:1612.02780, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learn- ing what and where to draw. In NIPS, 2016a. | 1611.02163#48 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 48 | Sheldon M Ross. Simulation. Burlington, MA: Elsevier, 2006.
John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning (ICML), pp. 1889-1897, 2015.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. International Conference on Learning Representations (ICLR), 2016.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning (ICML), 2014.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approxi- mating dynamic programming. In Jnternational Conference on Machine Learning (ICML), pp. 216-224, 1990. | 1611.02247#48 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.02163 | 49 | Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text-to-image synthesis. In Proceedings of The 33rd International Confer- ence on Machine Learning, 2016b.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning. Citeseer, 2014.
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Vi- sualising image classiï¬cation models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. | 1611.02163#49 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 49 | Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Infor- mation Processing Systems (NIPS), volume 99, pp. 1057-1063, 1999.
Richard S Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvari, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 993-1000. ACM, 2009.
Richard S Sutton, A Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research, 2015.
Philip Thomas. Bias in natural actor-critic algorithms. In ICML, pp. 441-448, 2014.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.
Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992. | 1611.02247#49 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 50 | [Figure 4 heatmap; only the program row labels are recoverable, the per-function prediction values are figure residue:]
0: SORT b | TAKE a c | SUM d
1: MAP (*3) a | ZIPWITH (+) b c | MAXIMUM d
2: ZIPWITH (-) b a | COUNT (>0) c
3: SCANL1 MIN a | ZIPWITH (-) a b | FILTER (>0) c | SUM d
4: SORT a | SORT b | REVERSE d | ZIPWITH (*) c e | SUM f
5: REVERSE a | ZIPWITH MIN a b
6: MAP (-1) a | MAP (-1) b | ZIPWITH (+) c d | MINIMUM e
7: SCANL1 (+) b | ZIPWITH (*) a c | SUM d
8: REVERSE a | ZIPWITH (-) b a | FILTER (>0) c | SUM d | 1611.01989#50 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 50 | Satinder Singh, Michael Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in general-sum games. In Proceedings of the Sixteenth conference on Uncertainty in artiï¬cial intelligence, pp. 541â548. Morgan Kaufmann Publishers Inc., 2000.
Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of The 32nd International Conference on Machine Learning, pp. 2256-2265, 2015. URL http://arxiv.org/abs/1503.03585.
Casper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised map inference for image super-resolution, 2016. URL https://arxiv.org/abs/1610. 04490v1.
In Advances in Neu- ral Information Processing Systems 28, Dec 2015. URL http://arxiv.org/abs/1506. 03478/. | 1611.02163#50 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 50 | Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992.
Lex Weaver and Nigel Tao. The optimal reward baseline for gradient-based reinforcement learning. In Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence, pp. 538-545. Morgan Kaufmann Publishers Inc., 2001.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.
# A Q-PROP ESTIMATOR DERIVATION
The full derivation of the Q-Prop estimator is shown in Eq. 14. We make use of the following property that is commonly used in baseline derivations:
$$\mathbb{E}_{p_\theta(x)}[\nabla_\theta \log p_\theta(x)] = \int \nabla_\theta p_\theta(x)\,dx = \nabla_\theta \int p_\theta(x)\,dx = 0$$
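As a quick numerical sanity check of this identity (an illustration added here, not part of the original derivation), the snippet below estimates the expectation by Monte Carlo for a univariate Gaussian with the mean playing the role of theta; the estimate should be close to zero up to sampling error. The library choice and constants are assumptions.

```python
import numpy as np

# For p_theta = N(mu, sigma^2) with theta = mu, the score is (x - mu) / sigma^2.
rng = np.random.default_rng(0)
mu, sigma, n = 1.5, 2.0, 1_000_000

x = rng.normal(mu, sigma, size=n)   # samples from p_theta
score = (x - mu) / sigma**2         # grad_mu log p_theta(x)

print(score.mean())  # approximately 0, with O(1/sqrt(n)) Monte Carlo error
```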
Published as a conference paper at ICLR 2017 | 1611.02247#50 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 51 | Figure 4: Predictions of a neural network on the 9 example programs described in this section. Numbers in squares would ideally be close to 1 (function is present in the ground truth source code), whereas all other numbers should ideally be close to 0 (function is not needed).
# B EXPERIMENTAL RESULTS
Results presented in Sect. 5.1 showcased the computational speedups obtained from the LIPS frame- work (using DeepCoder), as opposed to solving each program synthesis problem with only the in13
Published as a conference paper at ICLR 2017
formation about global incidence of functions in source code available. For completeness, here we show plots of raw computation times of each search procedure to solve a given number of problems.
Fig. 5 shows the computation times of DFS, of Enumerative search with a Sort and add scheme, of the λ2 and Sketch solvers with a Sort and add scheme, and of Beam search, when searching for a program consistent with input-output examples generated from P = 500 different test programs of length T = 3. As discussed in Sect. 5.1, these test programs were ensured to be semantically disjoint from all programs used to train the neural networks, as well as from all programs of shorter length (as discussed in Sect. 4.2).
3 2 2 2 g & | 1611.01989#51 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 51 | In Advances in Neu- ral Information Processing Systems 28, Dec 2015. URL http://arxiv.org/abs/1506. 03478/.
L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations, Apr 2016. URL http://arxiv.org/abs/1511.01844.
T. Tieleman and G. Hinton. Lecture 6.5 -- RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
A¨aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, abs/1601.06759, 2016a. URL http://arxiv.org/abs/ 1601.06759.
A¨aron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko- arXiv preprint ray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv:1606.05328, 2016b. | 1611.02163#51 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 52 | [Figure 5 plot: number of problems solved versus solver computation time [s]; curves for DFS, Enumeration (Sort and add), λ2 (Sort and add), Sketch (Sort and add), and Beam search, each using the neural network's predictions or the prior order.]
Figure 5: Number of test problems solved versus computation time.
The "steps" in the results for Beam search are due to our search strategy, which doubles the size of the considered beam until reaching the timeout (of 1000 seconds), and thus steps occur whenever the search for a beam of size 2^k is finished. For λ2, we observed that no solution for a given set of allowed functions was ever found after about 5 seconds (on the benchmark machines), but that λ2 continued to search. Hence, we introduced a hard timeout after 6 seconds for all but the last iterations of our Sort and add scheme. | 1611.01989#52 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 52 | Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371-3408, December 2010. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1756006.1953039.
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.
Chongjie Zhang and Victor R Lesser. Multi-agent learning with policy prediction. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016.
| 1611.02163#52 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 52 | \nabla_\theta J(\theta) = E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t) - \bar{f}(s_t,a_t))] + E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)\,\bar{f}(s_t,a_t)]
g(\theta) = E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)\,\bar{f}(s_t,a_t)]
= E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(f(s_t,\bar{a}_t) + \nabla_a f(s_t,a)|_{a=\bar{a}_t}(a_t - \bar{a}_t))]
= E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)\,\nabla_a f(s_t,a)|_{a=\bar{a}_t}\, a_t]
= E_{\rho_\pi}[\textstyle\int_a \nabla_\theta \pi_\theta(a|s_t)\,\nabla_a f(s_t,a)|_{a=\bar{a}_t}\, a \, da]   (14)
= E_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t} \textstyle\int_a a\,\nabla_\theta \pi_\theta(a|s_t)\, da]
= E_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\, \nabla_\theta E_\pi[a_t]]
= E_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\, \nabla_\theta \mu_\theta(s_t)]
\nabla_\theta J(\theta) = E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t) - \bar{f}(s_t,a_t))] + g(\theta)
= E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{Q}(s_t,a_t) - \bar{f}(s_t,a_t))] + E_{\rho_\pi}[\nabla_a f(s_t,a)|_{a=\bar{a}_t}\, \nabla_\theta \mu_\theta(s_t)] | 1611.02247#52 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 53 | Fig. 6 shows the computation times of DFS, Enumerative search with a Sort and add scheme, and λ2 with a Sort and add scheme when searching for programs consistent with input-output examples generated from P = 100 different test programs of length T = 5. The neural network was trained on programs of length T = 4.
[Figure 6 plot: number of problems solved versus solver computation time [s]; curves for DFS, Enumeration (Sort and add), and λ2 (Sort and add), each using the neural network's predictions or the prior order.]
Figure 6: Number of test problems solved versus computation time.
# C THE NEURAL NETWORK
As briefly described in Sect. 4.3, we used the following simple feed-forward encoder architecture:
• For each input-output example in the set generated from a single ground truth program:
– Pad arrays appearing in the inputs and in the output to a maximum length L = 20 with a special NULL value. | 1611.01989#53 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 53 |
# Appendix
# A 2D GAUSSIAN TRAINING DETAILS
Network architecture and experimental details for the experiment in Section 3.1 are as follows:
The dataset is sampled from a mixture of 8 Gaussians of standard deviation 0.02. The means are equally spaced around a circle of radius 2.
The generator network consists of a fully connected network with 2 hidden layers of size 128 with relu activations followed by a linear projection to 2 dimensions. All weights are initialized to be orthogonal with scaling of 0.8.
The discriminator network first scales its input down by a factor of 4 (to roughly scale to (-1, 1)), followed by a one-layer fully connected network with relu activations and then a linear layer of size 1 that acts as the logit.
The generator minimizes LG = log(D(x)) + log(1 - D(G(z))) and the discriminator minimizes LD = -log(D(x)) - log(1 - D(G(z))), where x is sampled from the data distribution and z ~ N(0, I256). Both networks are optimized using Adam (Kingma & Ba, 2014) with a learning rate of 1e-4 and β1 = 0.5.
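To make the setup above concrete, the following is a minimal re-implementation sketch in PyTorch (the paper does not use PyTorch, and the discriminator's hidden width is my assumption since it is not stated here); it only mirrors the architecture, initialization, losses, and optimizer settings described in this appendix.

```python
import torch
import torch.nn as nn

def ortho_init(module, gain=0.8):
    # Orthogonal initialization with scaling 0.8, as described above.
    if isinstance(module, nn.Linear):
        nn.init.orthogonal_(module.weight, gain=gain)
        nn.init.zeros_(module.bias)

# Generator: 2 hidden layers of 128 relu units, then a linear projection to 2D.
G = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                  nn.Linear(128, 128), nn.ReLU(),
                  nn.Linear(128, 2))
G.apply(ortho_init)

class D(nn.Module):
    # Input scaled down by 4, one relu hidden layer (width 128 is an assumption),
    # followed by a linear layer of size 1 producing the logit (here sigmoid-ed).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x / 4.0))

d = D()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(d.parameters(), lr=1e-4, betas=(0.5, 0.999))

def losses(x_real):
    # Zero-sum objectives LG and LD as written in the text above.
    z = torch.randn(x_real.shape[0], 256)
    d_real, d_fake = d(x_real), d(G(z))
    eps = 1e-8
    l_d = -(torch.log(d_real + eps) + torch.log(1 - d_fake + eps)).mean()
    l_g = (torch.log(d_real + eps) + torch.log(1 - d_fake + eps)).mean()  # = -l_d
    return l_g, l_d
```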
The network is trained by alternating updates of the generator and the discriminator. One step consists of either G or D updating. | 1611.02163#53 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 54 | – Pad arrays appearing in the inputs and in the output to a maximum length L = 20 with a special NULL value.
– Represent the type (singleton integer or integer array) of each input and of the output using a one-hot-encoding vector. Embed each integer in the valid integer range (−256 to 255) using a learned embedding into E = 20 dimensional space. Also learn an embedding for the padding NULL value.
– Concatenate the representations of the input types, the embeddings of integers in the inputs, the representation of the output type, and the embeddings of integers in the output into a single (fixed-length) vector.
– Pass this vector through H = 3 hidden layers containing K = 256 sigmoid units each.
• Pool the last hidden layer encodings of each input-output example together by simple arithmetic averaging (a code sketch of this encoder follows below).
Fig. 7 shows a schematic drawing of this encoder architecture, together with the decoder that performs independent binary classification for each function in the DSL, indicating whether or not it appears in the ground truth source code. | 1611.01989#54 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 54 | The network is trained by alternating updates of the generator and the discriminator. One step consists of either G or D updating.
# B MORE MIXTURE OF GAUSSIAN EXPERIMENTS
B.1 EFFECTS OF TIME DELAY / HISTORICAL AVERAGING
Another comparison we looked at was with regard to historical-averaging-based approaches. Recently, similarly inspired approaches have been used in (Salimans et al., 2016) to stabilize training. For our study, we looked at taking an ensemble of discriminators over time.
First, we looked at taking an ensemble of the last N steps, as shown in Figure App.1.
[Figure App.1 grid of generator distributions; x-axis: update steps.]
Figure App.1: Historical averaging does not visibly increase stability on the mixture of Gaussians task. Each row corresponds to an ensemble of discriminators which consists of the indicated number of immediately preceding discriminators. The columns correspond to different numbers of training steps. | 1611.02163#54 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 54 | # B CONNECTION BETWEEN Q-PROP AND COMPATIBLE FEATURE APPROXIMATION
In this section we show that actor-critic with compatible feature approximation is a form of control variate. A critic Q_w is compatible (Sutton et al., 1999) if it satisfies (1) Q_w(s_t, a_t) = w^T \nabla_\theta \log \pi_\theta(a_t|s_t), i.e. \nabla_w Q_w(s_t, a_t) = \nabla_\theta \log \pi_\theta(a_t|s_t), and (2) w is fit with the objective w = \arg\min_w L(w) = \arg\min_w E_{\rho_\pi,\pi}[(\hat{Q}(s_t, a_t) - Q_w(s_t, a_t))^2], that is, fitting Q_w on on-policy Monte Carlo returns. Condition (2) implies the following identity,
\nabla_w L = 2\, E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{Q}(s_t, a_t) - Q_w(s_t, a_t))] = 0.   (15)
In compatible feature approximation, it directly uses Q_w as the control variate, rather than its Taylor expansion \bar{Q}_w as in Q-Prop. Using Eq. 15, the Monte Carlo policy gradient is, | 1611.02247#54 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 55 | [Figure 7 schematic; components: Inputs/Outputs for examples 1-5, Program State, State Embeddings, Hiddens 1-3, Pooled, Final Activations, Sigmoids, Attribute Predictions.]
Figure 7: Schematic representation of our feed-forward encoder, and the decoder.
While DeepCoder learns to embed integers into an E = 20 dimensional space, we built the system up gradually, starting with an E = 2 dimensional space and only training on programs of length T = 1. Such a small scale setting allowed easier investigation of the workings of the neural network, and indeed Fig. 8 below shows a learned embedding of integers in R^2. The figure demonstrates that the network has learnt the concepts of number magnitude, sign (positive or negative) and evenness, presumably due to FILTER (>0), FILTER (<0), FILTER (%2==0) and FILTER (%2==1) all being among the programs on which the network was trained.
# D DEPTH-FIRST SEARCH | 1611.01989#55 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 55 | To further explore this idea, we ran experiments with an ensemble of 5 discriminators, but with different periods between replacing discriminators in the ensemble. For example, if we sample at a rate of 100, it would take 500 steps to replace all 5 discriminators. Results can be seen in Figure App.2.
We observe that given longer and longer time delays, the model becomes less and less stable. We hypothesize that this is due to the initial shape of the discriminator loss surface. When training, the discriminator's estimates of probability densities are only accurate on regions where it was trained. When fixing this discriminator, we are removing the feedback between the generator exploitation
Figure App.2: Introducing longer time delays between the discriminator ensemble results in instability and probability distributions that are not in the window being visualized. The x axis is the number of weight updates and the y axis is how many steps to skip between discriminator updates when selecting the ensemble of 5 discriminators. | 1611.02163#55 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 55 | \nabla_\theta J(\theta) = E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)\, Q_w(s_t,a_t)] = E_{\rho_\pi,\pi}[(\nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_t|s_t)^T)\, w] = E_{\rho_\pi}[I(\theta; s_t)\, w]   (16)
where I(\theta; s_t) = E_{\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t) \nabla_\theta \log \pi_\theta(a_t|s_t)^T] is Fisher's information matrix. Thus, variance reduction depends on the ability to compute or estimate I(\theta; s_t) and w effectively.
# C UNIFYING POLICY GRADIENT AND ACTOR-CRITIC
Q-Prop closely ties together policy gradient and actor-critic algorithms. To analyze this point, we write a generalization of Eq. 9 below, introducing two additional variables \alpha, \rho_{CR}:
\nabla_\theta J(\theta) \approx \alpha\, E_{\rho_\pi,\pi}[\nabla_\theta \log \pi_\theta(a_t|s_t)(\hat{A}(s_t,a_t) - \eta \bar{A}_w(s_t,a_t))] + \eta\, E_{\rho_{CR}}[\nabla_a Q_w(s_t,a)|_{a=\mu_\theta(s_t)} \nabla_\theta \mu_\theta(s_t)]   (17) | 1611.02247#55 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 56 | # D DEPTH-FIRST SEARCH
We use an optimized C++ implementation of depth-first search (DFS) to search over programs with a given maximum length T. In depth-first search, we start by choosing the first function (and its arguments) of a potential solution program, and then recursively consider all ways of filling in the rest of the program (up to length T), before moving on to a next choice of first instruction (if a solution has not yet been found).
A program is considered a solution if it is consistent with all M = 5 provided input-output examples. Note that this requires evaluating all candidate programs on the M inputs and checking the results for equality with the provided M respective outputs. Our implementation of DFS exploits the sequential structure of programs in our DSL by caching the results of evaluating all prefixes of the currently considered program on the example inputs, thus allowing efficient reuse of computation between candidate programs with common prefixes. This allows us to explore the search space at roughly the speed of ∼ 3 × 10^6 programs per second.
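A simplified Python sketch of this search follows (the actual implementation is in C++, and the program/function representation and helper names here are hypothetical). It shows the two ingredients just described: checking candidates against all M examples, and caching the evaluation of the current program prefix so that extending the program only evaluates the newly added function.

```python
def dfs(max_len, functions, inputs, outputs):
    """Depth-first search over programs of length <= max_len.
    `functions` is the list of DSL functions (optionally pre-sorted by the neural
    network's predicted probabilities); each function is assumed to map the list of
    values computed so far to a new value. Returns a program (list of functions) or None."""

    def extend(prefix_states, program):
        if program:
            # A candidate is a solution if its final value matches every example output.
            if all(state[-1] == out for state, out in zip(prefix_states, outputs)):
                return program
        if len(program) == max_len:
            return None
        for fn in functions:
            # Reuse the cached prefix evaluation: only the new function is evaluated.
            new_states = [state + [fn(state)] for state in prefix_states]
            found = extend(new_states, program + [fn])
            if found is not None:
                return found
        return None

    initial = [[inp] for inp in inputs]   # cached evaluation of the empty prefix
    return extend(initial, [])
```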
[Figure 8 scatter plot; y-axis: Second embedding dimension φ2(n).] | 1611.01989#56 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 56 | and the discriminator's ability to move. As a result, the generator is able to exploit these fixed areas of poor performance for older discriminators in the ensemble. New discriminators (over)compensate for this, leading the system to diverge.
B.2 EFFECTS OF THE SECOND GRADIENT
A second factor we analyzed is the effect of backpropagating the learning signal through the unrolling in Equation 12. We can turn on or off this backpropagation through the unrolling by introducing stop gradient calls into our computation graph between each unrolling step. With the stop gradient in place, the update signal corresponds only to the first term in Equation 12. We looked at 3 configurations: the vanilla unrolled GAN without stop gradients; with stop gradients; and with stop gradients but taking the average over the k unrolling steps instead of taking the final value. Results can be seen in Figure App.3.
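To illustrate where the stop gradients enter, here is a schematic PyTorch-style sketch (not the authors' code; `d_forward` is a hypothetical functional discriminator). With `stop_gradient=True` the unrolled parameters are detached, so a generator objective built on them uses only the first term of Equation 12; with `stop_gradient=False` the surrogate is differentiated through the unrolling steps as well.

```python
import torch

def unrolled_discriminator(d_params, d_forward, x_real, x_fake, k, lr, stop_gradient):
    """Take k gradient steps on the discriminator loss starting from d_params.
    d_forward(params, x) -> D(x) is assumed to evaluate the discriminator
    functionally with the given list of parameter tensors."""
    params = d_params
    for _ in range(k):
        d_loss = -(torch.log(d_forward(params, x_real)) +
                   torch.log(1.0 - d_forward(params, x_fake))).mean()
        grads = torch.autograd.grad(d_loss, params, create_graph=not stop_gradient)
        params = [p - lr * g for p, g in zip(params, grads)]
        if stop_gradient:
            # Cut the graph: the generator objective then treats the unrolled
            # parameters as constants (first term of Equation 12 only).
            params = [p.detach().requires_grad_(True) for p in params]
    return params
```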
We initially observed no difference between unrolling with and without the second gradient, as both required 3 unrolling steps to become stable. When the discriminator is unrolled to convergence, the second gradient term becomes zero. Due to the simplicity of the problem, we suspect that the discriminator nearly converged for every generator step, and the second gradient term was thus irrelevant. | 1611.02163#56 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 56 | Eq. 17 enables more analysis where bias generally is introduced only when \alpha \neq 1 or \rho_{CR} \neq \rho_\pi. Importantly, Eq. 17 covers both policy gradient and deterministic actor-critic algorithm as its special cases. Standard policy gradient is recovered by \eta = 0, and deterministic actor-critic is recovered by \alpha = 0 and \rho_{CR} = \rho_\beta. This allows heuristic or automatic methods for dynamically changing these variables through the learning process for optimizing different metrics, e.g. sample efficiency, convergence speed, stability.
Table 2 summarizes the various edge cases of Eq. 17. For example, since we derive our method from a control variates standpoint, Q_w can be any function and the gradient remains almost unbiased (see
Parameter | Implementation options | Introduce bias?
Q_w | off-policy TD; on-policy TD(λ); model-based; etc. | No
V_φ | on-policy Monte Carlo fitting; E_π[Q_w(s_t, a_t)]; etc. | No
λ | 0 ≤ λ ≤ 1 | Yes, except λ = 1
α | α ≥ 0 | Yes, except α = 1
η | any η | No
ρ_CR | ρ of any policy | Yes, except ρ_CR = ρ_π
Table 2: Implementation options and edge cases of the generalized Q-Prop estimator in Eq. 17. | 1611.02247#56 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 57 | [Figure 8 scatter plot; legend: even positive numbers, even negative numbers, odd positive numbers, odd negative numbers, zero, Null (padding value); x-axis: First embedding dimension φ1(n).]
Figure 8: A learned embedding of integers {−256, −255, . . . , −1, 0, 1, . . . , 255} in R^2. The color intensity corresponds to the magnitude of the embedded integer.
When the search procedure extends a partial program by a new function, it has to try the functions in the DSL in some order. At this point DFS can opt to consider the functions as ordered by their predicted probabilities from the neural network. The probability of a function consisting of a higher-order function and a lambda is taken to be the minimum of the probabilities of the two constituent functions.
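As a small illustration of this ordering rule (the names below are mine, not from the paper's code):

```python
def candidate_score(candidate_functions, predicted_probs):
    # A candidate that combines a higher-order function with a lambda is scored
    # by the minimum of its constituents' predicted probabilities.
    return min(predicted_probs[f] for f in candidate_functions)

def order_candidates(candidates, predicted_probs):
    # DFS tries higher-scored candidates first.
    return sorted(candidates,
                  key=lambda c: candidate_score(c, predicted_probs),
                  reverse=True)
```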
# E TRAINING LOSS FUNCTION
In Sect. 4.5 we outlined a justification for using marginal probabilities of individual functions as a sensible intermediate representation to provide a solver employing a Sort and add scheme (we considered Enumerative search and the Sketch solver with this scheme). Here we provide a more detailed discussion. | 1611.01989#57 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 57 | To test this, we modified the dynamics to perform five generator steps for each discriminator update. Results are shown in Figure App.4. With the discriminator now kept out of equilibrium, successful training can be achieved with half as many unrolling steps when using both terms in the gradient than when only including the first term.
# C RNN MNIST TRAINING DETAILS
The network architecture for the experiment in Section 3.2 is as follows:
The MNIST dataset is scaled to [-1, 1).
The generator first scales the 256D noise vector through a 256 unit fully connected layer with relu activation. This is then fed into the initial state of a 256D LSTM (Hochreiter & Schmidhuber, 1997) that runs 28 steps corresponding to the number of columns in MNIST. The resulting sequence of activations is projected through a fully connected layer with 28 outputs with a tanh activation function. All weights are initialized via the "Xavier" initialization (Glorot & Bengio, 2010). The forget bias on the LSTM is initialized to 1. | 1611.02163#57 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 57 | Table 2: Implementation options and edge cases of the generalized Q-Prop estimator in Eq. 17.
Section 2.1). A natural choice is to use off-policy temporal difference learning to learn the critic Q_w corresponding to policy π. This enables effectively utilizing off-policy samples without introducing further bias. An interesting alternative to this is to utilize model-based roll-outs to estimate the critic, which resembles MuProp in stochastic neural networks (Gu et al., 2016a). Unlike prior work on using a fitted dynamics model to accelerate model-free learning (Gu et al., 2016b), this approach does not introduce bias to the gradient of the original objective.
# D EXPERIMENT DETAILS | 1611.02247#57 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 58 | Predicting program components from input-output examples can be cast as a multilabel classification problem, where each instance (a set of input-output examples) is associated with a set of relevant labels (functions appearing in the code that generated the examples). We denote the number of labels (functions) by C, and note that throughout this work C = 34. When the task is to predict a subset of labels y \in \{0, 1\}^C, different loss functions can be employed to measure the prediction error of a classifier h(x) or ranking function f(x). Dembczynski et al. (2010) discuss the following three loss functions (a short code sketch of all three follows the list):
• Hamming loss counts the number of labels that are predicted incorrectly by a classifier h:
L_H(y, h(x)) = \sum_{c=1}^{C} 1_{\{y_c \neq h_c(x)\}}
• Rank loss counts the number of label pairs violating the condition that relevant labels are ranked higher than irrelevant ones by a scoring function f:
L_r(y, f(x)) = \sum_{(i,j):\, y_i = 1,\, y_j = 0} 1_{\{f_i(x) < f_j(x)\}}
• Subset Zero-One loss indicates whether all labels have been correctly predicted by h:
L_s(y, h(x)) = 1_{\{y \neq h(x)\}}
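The three losses above translate directly into Python; the following list-based transcription is for illustration only (function names are mine):

```python
def hamming_loss(y, h):
    # Number of labels predicted incorrectly.
    return sum(yc != hc for yc, hc in zip(y, h))

def rank_loss(y, f):
    # Pairs (i, j) with y_i = 1 and y_j = 0 where the irrelevant label j is
    # scored strictly higher than the relevant label i.
    return sum(1 for i, yi in enumerate(y) if yi == 1
                 for j, yj in enumerate(y) if yj == 0 and f[i] < f[j])

def subset_zero_one_loss(y, h):
    # 1 unless every label is predicted correctly.
    return int(y != h)
```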
| 1611.01989#58 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 58 | The discriminator network feeds the input into a Convolution(16, stride=2) followed by a Convolution(32, stride=2) followed by a Convolution(32, stride=2). All convolutions have stride 2. As in (Radford et al., 2015), leaky rectifiers are used with a 0.3 leak. Batch normalization is applied after each layer (Ioffe & Szegedy, 2015). The resulting 4D tensor is then flattened and a linear projection is performed to a single scalar.
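A minimal PyTorch sketch of this discriminator (an illustrative re-implementation, not the authors' code; the padding choice and the batch-norm-before-activation ordering are my assumptions):

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    # 3x3 convolution with stride 2, batch normalization, and leaky relu (0.3 leak).
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out),
                         nn.LeakyReLU(0.3))

class Discriminator(nn.Module):
    def __init__(self, in_channels=3, spatial=32):
        super().__init__()
        self.features = nn.Sequential(conv_block(in_channels, 16),
                                      conv_block(16, 32),
                                      conv_block(32, 32))
        feat = spatial // 8                      # three stride-2 convolutions
        self.logit = nn.Linear(32 * feat * feat, 1)

    def forward(self, x):
        h = self.features(x)
        return self.logit(h.flatten(start_dim=1))   # single scalar logit per image
```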
[Figure App.3; panels: Unrolled GAN, Unrolled GAN without second gradient; x-axis: update steps.] | 1611.02163#58 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 58 | # D EXPERIMENT DETAILS
Policy and value function architectures. The network architectures are largely based on the benchmark paper by Duan et al. (2016). For policy gradient methods, the stochastic policy \pi_\theta(a_t|s_t) = \mathcal{N}(\mu_\theta(s_t), \Sigma_\theta) is a Gaussian policy with a local state-dependent mean and a global covariance matrix. \mu_\theta(s_t) is a neural network with 3 hidden layers of sizes 100-50-25 and tanh nonlinearities at the first 2 layers, and \Sigma_\theta is diagonal. For DDPG, the policy is deterministic and has the same architecture as \mu_\theta except that it has an additional tanh layer at the output. V_\phi(s_t) for baselines and GAE is fit with the same technique as in Schulman et al. (2016), a variant of linear regression on Monte Carlo returns with a soft-update constraint. For Q-Prop and DDPG, Q_w(s, a) is parametrized with a neural network with 2 hidden layers of size 100 and ReLU nonlinearity, where a is included after the first hidden layer. | 1611.02247#58 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 59 | L_s(y, h(x)) = 1_{\{y \neq h(x)\}}
Dembczynski et al. (2010) proved that Bayes optimal decisions under the Hamming and Rank loss functions, i.e., decisions minimizing the expected loss under these loss functions, can be computed from marginal probabilities p_c(y_c|x). This suggests that:
• Multilabel classification under these two loss functions may not benefit from considering dependencies between the labels.
• "Instead of minimizing the Rank loss directly, one can simply use any approach for single label prediction that properly estimates the marginal probabilities." (Dembczyński et al., 2012)
Training the neural network with the negative cross entropy loss function as the training objective is precisely a method for properly estimating the marginal probabilities of labels (functions appearing in source code). It is thus a sensible step in preparation for making predictions under a Rank loss. | 1611.01989#59 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 59 | [Figure App.3 plot; x-axis: update steps.]
Figure App.3: If the discriminator remains nearly at its optimum during learning, then performance is nearly identical with and without the second gradient term in Equation 12. As shown in Figure App.4, when the discriminator lags behind the generator, backpropagating through unrolling aids convergence.
The generator network minimizes LG = log(D(G(z))) and the discriminator minimizes LD = log(D(x)) + log(1 - D(G(z))). Both networks are trained with Adam (Kingma & Ba, 2014) with learning rates of 1e-4 and β1 = 0.5. The network is trained by alternately updating the generator and the discriminator for 150k steps. One step consists of just 1 network update.
# D CIFAR10/MNIST TRAINING DETAILS | 1611.02163#59 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.02247 | 59 | Training details. This section describes parameters of the training algorithms and their hyperparameter search values in {}. The optimal performing hyperparameter results are reported. Policy gradient methods (VPG, TRPO, Q-Prop) used batch sizes of {1000, 5000, 25000} time steps, step sizes of {0.1, 0.01, 0.001} for the trust-region method, and base learning rates of {0.001, 0.0001} with Adam (Kingma & Ba, 2014) for vanilla policy gradient methods. For Q-Prop and DDPG, Q_w is learned with the same technique as in DDPG (Lillicrap et al., 2016), using soft target networks with τ = 0.999, a replay buffer of size 10^6 steps, a mini-batch size of 64, and a base learning rate of {0.001, 0.0001} with Adam (Kingma & Ba, 2014). For Q-Prop we also tuned the relative ratio of gradient steps on the critic Q_w against the number of steps on the policy, in the range {0.1, 0.5, 1.0}, where 0.1 corresponds to 100 critic updates for every policy update if the batch size is 1000. For DDPG, we swept the reward scaling using {0.01, 0.1, 1.0} as it is sensitive to this parameter.
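For reference, the swept values above can be summarized as a small configuration sketch (the dictionary layout and names are mine; the per-task chosen values are not specified in this paragraph):

```python
# Hyperparameter search space described above (illustrative only).
sweep = {
    "policy_gradient": {                        # VPG, TRPO, Q-Prop
        "batch_size_steps": [1000, 5000, 25000],
        "trust_region_step_size": [0.1, 0.01, 0.001],
        "adam_learning_rate": [0.001, 0.0001],  # vanilla policy gradient variants
    },
    "critic": {                                 # Q_w for Q-Prop and DDPG
        "target_network_tau": 0.999,
        "replay_buffer_size": 10**6,
        "minibatch_size": 64,
        "adam_learning_rate": [0.001, 0.0001],
    },
    "qprop_critic_to_policy_step_ratio": [0.1, 0.5, 1.0],
    "ddpg_reward_scaling": [0.01, 0.1, 1.0],
}

def critic_updates_per_iteration(ratio, batch_size):
    # E.g. ratio 0.1 with a batch of 1000 steps -> 100 critic updates per policy update.
    return int(ratio * batch_size)
```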
| 1611.02247#59 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a
wide variety of simulated domains. However, a major obstacle facing deep RL in
the real world is their high sample complexity. Batch policy gradient methods
offer stable learning, but at the cost of high variance, which often requires
large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly
hyperparameter sweeps to stabilize. In this work, we aim to develop methods
that combine the stability of policy gradients with the efficiency of
off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor
expansion of the off-policy critic as a control variate. Q-Prop is both sample
efficient and stable, and effectively combines the benefits of on-policy and
off-policy methods. We analyze the connection between Q-Prop and existing
model-free algorithms, and use control variate theory to derive two variants of
Q-Prop with conservative and aggressive adaptation. We show that conservative
Q-Prop provides substantial gains in sample efficiency over trust region policy
optimization (TRPO) with generalized advantage estimation (GAE), and improves
stability over deep deterministic policy gradient (DDPG), the state-of-the-art
on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control
environments. | http://arxiv.org/pdf/1611.02247 | Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine | cs.LG | Conference Paper at the International Conference on Learning
Representations (ICLR) 2017 | null | cs.LG | 20161107 | 20170227 | [] |
1611.01989 | 60 | It remains to discuss the relationship between the Rank loss and the actual quantity we care about, which is the total runtime of a Sort and add search procedure. Recall the simplifying assumption that the runtime of searching for a program of length T with C functions made available to the search is proportional to C^T, and consider a Sort and add search for a program of length T, where the size of the active set is increased by 1 whenever the search fails. Starting with an active set of size 1, the total time A until a solution is found can be upper bounded by A \le \sum_{C=1}^{C_A} C^T \le C_A \cdot C_A^T,
where C_A is the size of the active set when the search finally succeeds (i.e., when the active set finally contains all necessary functions for a solution to exist). Hence the total runtime of a Sort and add search can be upper bounded by a quantity that is proportional to C_A^T.
Now fix a valid program solution P that requires C_P functions, and let y_P \in \{0, 1\}^C be the indicator vector of functions used by P. Let D := C_A - C_P be the number of redundant operations added into the active set until all operations from P have been added. Example 1. Suppose the labels, as sorted by decreasing predicted marginal probabilities f(x), are as follows: | 1611.01989#60 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
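To make the Sort and add procedure described in the chunk above concrete, here is a schematic Python sketch. It is an illustration only, not code from the paper; search_up_to is a hypothetical placeholder for an enumerative search over programs of length T restricted to the current active set.

def sort_and_add(functions, probs, T, search_up_to):
    """Grow the active set in order of predicted probability until a program is found."""
    order = sorted(range(len(functions)), key=lambda i: -probs[i])
    for k in range(1, len(functions) + 1):
        active = [functions[i] for i in order[:k]]
        program = search_up_to(active, T)   # cost roughly proportional to k**T
        if program is not None:
            return program
    return None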
1611.02163 | 60 | # D CIFAR10/MNIST TRAINING DETAILS
The network architectures for the discriminator, generator, and encoder are as follows. All convolutions have a kernel size of 3x3 with batch normalization. The discriminator uses leaky ReLUs with a 0.3 leak and the generator uses standard ReLU.
The generator network is defined as:

Layer                      Number outputs   Stride
Input: z ~ N(0, I_256)
Fully connected            4 * 4 * 512
Reshape to image 4,4,512
Transposed Convolution     256              2
Transposed Convolution     128              2
Transposed Convolution     64               2
Convolution                1 or 3           1
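A minimal PyTorch sketch of this generator table — my own illustrative reconstruction, not the authors' code; the Tanh output activation and exact padding choices are assumptions, while the 3x3 kernels and batch normalization follow the description above:

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=256, out_channels=3):
        super().__init__()
        self.fc = nn.Linear(z_dim, 4 * 4 * 512)
        self.net = nn.Sequential(
            # 4x4x512 -> 8x8x256 -> 16x16x128 -> 32x32x64 -> 32x32xC
            nn.ConvTranspose2d(512, 256, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, stride=1, padding=1),
            nn.Tanh(),  # output activation is an assumption, not specified in the table
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 512, 4, 4)
        return self.net(h)

samples = Generator()(torch.randn(8, 256))  # -> (8, 3, 32, 32)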
(Figure App.4 panel: Unrolled GAN with 5 G Steps per D — training-dynamics plot not reproducible in text.) | 1611.02163#60 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 61 | 1 1 1 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Then the solution P contains CP = 6 functions, but the active set needs to grow to size CA = 11 to include all of them, adding D = 5 redundant functions along the way. Note that the rank loss of the predictions f (x) is Lr(yP , f (x)) = 2 + 5 = 7, as it double counts the two redundant functions which are scored higher than two relevant labels.
Noting that in general Lr(yP, f(x)) ≥ D, the previous upper bound on the runtime of Sort and add can be further upper bounded as follows:
CA^T = (CP + D)^T ≤ const + const × D^T ≤ const + const × Lr(yP, f(x))^T. Hence we see that for a constant value of T, this upper bound can be minimized by optimizing the Rank loss of the predictions f(x). Note also that Lr(yP, f(x)) = 0 would imply D = 0, in which case CA = CP.
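To make the quantities in Example 1 concrete, the following short Python sketch (an illustration only, not code from the paper; the label vector is copied from the example above) recomputes CP, CA, D and the rank loss from the sorted indicator vector:

# Indicator vector y_P of Example 1, already sorted by decreasing predicted probability f(x).
y_sorted = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1] + [0] * 23

C_P = sum(y_sorted)                                         # functions used by P: 6
C_A = max(i + 1 for i, y in enumerate(y_sorted) if y == 1)  # active-set size when search succeeds: 11
D = C_A - C_P                                               # redundant functions added: 5

# Rank loss: for every relevant label, count the irrelevant labels ranked above it.
rank_loss = sum(1 for i, yi in enumerate(y_sorted) if yi == 1
                  for yj in y_sorted[:i] if yj == 0)        # 2 + 5 = 7

print(C_P, C_A, D, rank_loss)  # 6 11 5 7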
# F DOMAIN SPECIFIC LANGUAGE OF DEEPCODER | 1611.01989#61 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 61 | (Figure App.4 panel: Unrolled GAN with 5 G Steps per D without second gradient — training-dynamics plot not reproducible in text.)
Figure App.4: Backpropagating through the unrolling process aids convergence when the discriminator does not fully converge between generator updates. When taking 5 generator steps per discriminator step unrolling greatly increases stability, requiring only 5 unrolling steps to converge. Without the second gradient it requires 10 unrolling steps. Also see Figure App.3.
The discriminator network is defined as:

Layer             Number outputs   Stride
Input: x ~ p_data or G
Convolution       64               2
Convolution       128              2
Convolution       256              2
Flatten
Fully Connected   1 | 1611.02163#61 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 62 | # F DOMAIN SPECIFIC LANGUAGE OF DEEPCODER
Here we provide a description of the semantics of our DSL from Sect. 4.1, both in English and as a Python implementation. Throughout, NULL is a special value that can be set e.g. to an integer outside the working integer range.
First-order functions:
HEAD :: [int] -> int
lambda xs: xs[0] if len(xs)>0 else Null
Given an array, returns its first element (or NULL if the array is empty).
LAST :: [int] -> int
lambda xs: xs[-1] if len(xs)>0 else Null
Given an array, returns its last element (or NULL if the array is empty).
TAKE :: int -> [int] -> int
lambda n, xs: xs[:n]
Given an integer n and array xs, returns the array truncated after the n-th element. (If the length of xs was no larger than n in the first place, it is returned without modification.)
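As a hedged illustration (not the paper's reference implementation), the first-order functions listed so far can be collected into ordinary Python definitions and exercised on a small example; NULL is represented as None here:

Null = None  # stand-in for the DSL's NULL value

HEAD = lambda xs: xs[0] if len(xs) > 0 else Null
LAST = lambda xs: xs[-1] if len(xs) > 0 else Null
TAKE = lambda n, xs: xs[:n]
DROP = lambda n, xs: xs[n:]

example = [3, 7, 1, 4]
assert HEAD(example) == 3
assert LAST(example) == 4
assert TAKE(2, example) == [3, 7]
assert DROP(2, example) == [1, 4]
assert HEAD([]) is Null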
DROP :: int -> [int] -> int | 1611.01989#62 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 62 | The discriminator network is defined as:

Layer             Number outputs   Stride
Input: x ~ p_data or G
Convolution       64               2
Convolution       128              2
Convolution       256              2
Flatten
Fully Connected   1
The generator network minimizes LG = log(D(G(z))) and the discriminator minimizes LD = log(D(x)) + log(1 − D(G(z))). The networks are trained with Adam, with a generator learning rate of 1e-4 and a discriminator learning rate of 2e-4. Training alternates between updating the generator and the discriminator for 100k steps, where one step consists of a single network update.
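A rough sketch of this alternating training loop, assuming generator and discriminator modules G and D and a data_iterator yielding real batches (all assumptions); the learning rates are the ones stated above, while the loss is written in the standard non-saturating logit form rather than the paper's notation, and unrolling is omitted:

import torch
import torch.nn.functional as F

def train(G, D, data_iterator, steps=100_000, batch_size=64, z_dim=256):
    """Alternating single-step generator/discriminator updates with the stated Adam learning rates."""
    g_opt = torch.optim.Adam(G.parameters(), lr=1e-4)
    d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)
    real = torch.ones(batch_size, 1)
    fake = torch.zeros(batch_size, 1)
    for _ in range(steps):
        x = next(data_iterator)              # real images scaled to [-1, 1]
        z = torch.randn(batch_size, z_dim)

        # Discriminator step (D is assumed to output a single logit per image).
        d_loss = (F.binary_cross_entropy_with_logits(D(x), real) +
                  F.binary_cross_entropy_with_logits(D(G(z).detach()), fake))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step (standard non-saturating objective).
        g_loss = F.binary_cross_entropy_with_logits(D(G(z)), real)
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()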
E 1000 CLASS MNIST

Layer                      Number outputs   Stride
Input: z ~ N(0, I_256)
Fully connected            4 * 4 * 64
Reshape to image 4,4,64
Transposed Convolution     32               2
Transposed Convolution     16               2
Transposed Convolution     8                2
Convolution                3                1
The discriminator network is parametrized by a size X and is defined as follows. In our tests, we used X of 1/4 and 1/2.
Layer             Number outputs   Stride
Input: x ~ p_data or G
Convolution       8*X              2
Convolution       16*X             2
Convolution       32*X             2
Flatten
Fully Connected   1
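One possible reading of this size-X parametrization, as an illustrative PyTorch sketch rather than the authors' code (the 0.3 leak and 3x3 kernels follow the earlier description; the final LazyLinear layer is an assumption about how the flatten-to-one mapping is implemented):

import torch.nn as nn

def make_discriminator(X=0.5, in_channels=3):
    c = lambda w: max(1, int(w * X))  # channel width scaled by X
    return nn.Sequential(
        nn.Conv2d(in_channels, c(8), kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c(8)), nn.LeakyReLU(0.3),
        nn.Conv2d(c(8), c(16), kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c(16)), nn.LeakyReLU(0.3),
        nn.Conv2d(c(16), c(32), kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c(32)), nn.LeakyReLU(0.3),
        nn.Flatten(),
        nn.LazyLinear(1),  # single output score, input size inferred at first call
    )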
F COLORED MNIST DATASET
F.1 DATASET | 1611.02163#62 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 63 | DROP :: int -> [int] -> int
lambda n, xs: xs[n:]
Given an integer n and array xs, returns the array with the first n elements dropped. (If the length of xs was no larger than n in the first place, an empty array is returned.)
ACCESS :: int -> [int] -> int
lambda n, xs: xs[n] if n>=0 and len(xs)>n else Null
Given an integer n and array xs, returns the (n+1)-st element of xs. (If the length of xs was less than or equal to n, the value NULL is returned instead.)
MINIMUM :: [int] -> int
lambda xs: min(xs) if len(xs)>0 else Null
Given an array, returns its minimum (or NULL if the array is empty).
MAXIMUM :: [int] -> int
lambda xs: max(xs) if len(xs)>0 else Null
Given an array, returns its maximum (or NULL if the array is empty).
REVERSE :: [int] -> [int]
lambda xs: list(reversed(xs))
Given an array, returns its elements in reversed order. | 1611.01989#63 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 63 | F COLORED MNIST DATASET
F.1 DATASET
To generate this dataset we first took the MNIST digit, I, scaled between 0 and 1. For each image we sample a color, C, normally distributed with mean=0 and std=0.5. To generate a colored digit between (-1, 1) we compute I * C + (I − 1). Finally, we add a small amount of pixel-independent noise sampled from a normal distribution with std=0.2, and the resulting values are clipped between (-1, 1). When visualized, this generates images and samples that can be seen in figure App.5. Once again it is very hard to visually see differences in sample diversity when comparing the 128 and the 512 sized models. | 1611.02163#63 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
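The colored-MNIST construction described in the chunk above (row 1611.02163#63) can be sketched in NumPy as follows; this is an illustrative reading of the description, not the authors' preprocessing code:

import numpy as np

rng = np.random.default_rng(0)

def colorize(digit):
    """digit: (28, 28) grayscale MNIST image scaled to [0, 1]."""
    I = digit[..., None]                          # (28, 28, 1)
    C = rng.normal(0.0, 0.5, size=(1, 1, 3))      # per-image color, mean 0, std 0.5
    colored = I * C + (I - 1.0)                   # background maps to -1, strokes to the color
    colored += rng.normal(0.0, 0.2, size=colored.shape)  # pixel-independent noise, std 0.2
    return np.clip(colored, -1.0, 1.0)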
1611.01989 | 64 | REVERSE :: [int] -> [int]
lambda xs: list(reversed(xs))
Given an array, returns its elements in reversed order.
SORT :: [int] -> [int]
lambda xs: sorted(xs)
Given an array, returns its elements in non-decreasing order.
SUM :: [int] -> int
lambda xs: sum(xs)
Given an array, returns the sum of its elements. (The sum of an empty array is 0.)
Higher-order functions:
• MAP :: (int -> int) -> [int] -> [int]
lambda f, xs: [f(x) for x in xs]
Given a lambda function f mapping from integers to integers, and an array xs, returns the array resulting from applying f to each element of xs.
• FILTER :: (int -> bool) -> [int] -> [int]
lambda f, xs: [x for x in xs if f(x)]
Given a predicate f mapping from integers to truth values, and an array xs, returns the elements of xs satisfying the predicate in their original order. | 1611.01989#64 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 64 | Figure App.5: Right: samples from the data distribution. Middle: Samples from 1/4 size model with 0 look ahead steps (worst diversity). Left: Samples from 1/1 size model with 10 look ahead steps (most diversity).
F.2 MODELS
The models used in this section are parametrized by a variable X to control capacity. A value of X=1 is the same architecture as used in the CIFAR10 experiments. We used 1/4, 1/2 and 1 as the values of X.
The generator network is defined as:

Layer                        Number outputs   Stride
Input: z ~ N(0, I_256)
Fully connected              4 * 4 * 512*X
Reshape to image 4,4,512*X
Transposed Convolution       256*X            2
Transposed Convolution       128*X            2
Transposed Convolution       64*X             2
Convolution                  3                1
The discriminator network is defined as:

Layer             Number outputs   Stride
Input: x ~ p_data or G
Convolution       64*X             2
Convolution       128*X            2
Convolution       256*X            2
Flatten
Fully Connected   1
# G OPTIMIZATION BASED VISUALIZATIONS | 1611.02163#64 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 65 | • COUNT :: (int -> bool) -> [int] -> int
lambda f, xs: len([x for x in xs if f(x)])
Given a predicate f mapping from integers to truth values, and an array xs, returns the number of elements in xs satisfying the predicate.
• ZIPWITH :: (int -> int -> int) -> [int] -> [int] -> [int]
lambda f, xs, ys: [f(x, y) for (x, y) in zip(xs, ys)]
Given a lambda function f mapping integer pairs to integers, and two arrays xs and ys, returns the array resulting from applying f to corresponding elements of xs and ys. The length of the returned array is the minimum of the lengths of xs and ys.
• SCANL1 :: (int -> int -> int) -> [int] -> [int]
Given a lambda function f mapping integer pairs to integers, and an array xs, returns an array ys of the same length as xs and with its content defined by the recurrence ys[0] = xs[0], ys[n] = f(ys[n-1], xs[n]) for n ≥ 1. | 1611.01989#65 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
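Plain-Python versions of the higher-order functions ZIPWITH and SCANL1 described in the chunk above (row 1611.01989#65); the implementations below are mine, written to match the stated semantics, not code from the paper:

def ZIPWITH(f, xs, ys):
    return [f(x, y) for (x, y) in zip(xs, ys)]

def SCANL1(f, xs):
    ys = []
    for x in xs:
        ys.append(x if not ys else f(ys[-1], x))
    return ys

assert ZIPWITH(lambda a, b: a + b, [1, 2, 3], [10, 20]) == [11, 22]
assert SCANL1(max, [2, 5, 3, 7, 1]) == [2, 5, 5, 7, 7]   # running maximum, i.e. SCANL1 MAX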
1611.02163 | 65 | # G OPTIMIZATION BASED VISUALIZATIONS
More examples of model-based optimization. We performed 5 runs with different seeds for each of the unrolling-steps configurations. Below are comparisons for each run index. Ideally this would be a many-to-many comparison, but for space efficiency we grouped the runs by the index in which they were run.
Figure App.6: Samples from 1/5 with different random seeds.
Published as a conference paper at ICLR 2017 | 1611.02163#65 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 66 | The INT→INT lambdas (+1), (-1), (*2), (/2), (*(-1)), (**2), (*3), (/3), (*4), (/4) provided by our DSL map integers to integers in a self-explanatory manner. The INT→BOOL lambdas (>0), (<0), (%2==0), (%2==1) respectively test positivity, negativity, evenness and oddness of the input integer value. Finally, the INT→INT→INT lambdas (+), (-), (*), MIN, MAX apply a function to a pair of integers and produce a single integer.
As an example, consider the function SCANL1 MAX, consisting of the higher-order function SCANL1 and the INT→INT→INT lambda MAX. Given an integer array a of length L, this function computes the running maximum of the array a. Specifically, it returns an array b of the same length L whose i-th element is the maximum of the first i elements in a. | 1611.01989#66 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
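As an illustrative aside (not from the paper), the lambda vocabulary just described maps onto small Python callables, and the SCANL1 MAX running-maximum example can be reproduced with itertools.accumulate; the integer-division rounding convention below is an assumption:

from itertools import accumulate

INT_TO_INT = {"(+1)": lambda x: x + 1, "(*2)": lambda x: x * 2, "(/2)": lambda x: x // 2,
              "(*(-1))": lambda x: -x, "(**2)": lambda x: x ** 2}
INT_TO_BOOL = {"(>0)": lambda x: x > 0, "(%2==0)": lambda x: x % 2 == 0}
INT_INT_TO_INT = {"(+)": lambda x, y: x + y, "MIN": min, "MAX": max}

a = [2, -5, 3, 7, 1]
running_max = list(accumulate(a, INT_INT_TO_INT["MAX"]))  # SCANL1 MAX
assert running_max == [2, 2, 3, 7, 7]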
1611.01989 | 67 | (Garbled fragment of the conditional confusion matrix figure (Figure 9); the numeric cell contents are not recoverable as text.) | 1611.01989#67 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 67 | 0.0338 0.024 0.0232 0.0178 Fall 0.0273 0.0168 0. 0.0145 Se 0.0355 0.0232 0.0262 BB) 0.0151 0.0127 0.0137 0.008 0.0213 0.0159 0.0221 0.0114 Aaa 0.0305 & 0255 0.0199 0.0046 0.0039 0.0027 Pn a nn 0.0292 0.0239 0.0211 0.016 0.0201 0.0217 0. 4 EVE 0.0213 0.0207 0.0339 0.0215 pepe je 0.0211 0.0291 0.0226 0.015 0.0156
Figure App.7: Samples from 2/5 with different random seeds.
J = 0147 0.0173 â0.0242 â0, 0156 0.0144 © tt 0. rez) 0.0174 f ; F | 0.0133 | 1611.02163#67 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 68 | .00 | (33) (32) 2 425) (29) 00 .01 .01 . .07 .06 .01 d 10.01. {33} {33} 32) (33) 0 (24) (26) .00 .06 .06 .05 .09 .11 06 1 .05 .07 00 . 38} (36) 36) (37) (26 (29) 27) (29) 0 .02 .06 .03 .09 .09 04 09 . 7 .00 . {45} {39} {42} {45} CEE] (40) {33} -01 .03 . ate 00. {32} {28} {28} -00 .01 .02 .02 .02 .01 .02 .02 .01 .06 .05 .04 {249} {231} {243} {246} {243} {240} (248) 188} {193} .00 .01 .02 .01 .01 .01 .03 .02 .02 .10 .08 .04 .02 . {121} (121) {115} {114} {117} {118} {120} {90} {92} {102} -00 .02 .01 .02 .03 .02 .02 .02 .02 07 .05 .02 . {126} {125} {125} {124} (122) {117} (125) | 1611.01989#68 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 68 | 0.0352 02. oo 0.0109 0.0111 0.0221 0.0133 0.0144 0.0086 0.0157 0.0196 0.0111 CE 0.0357 0.0351 0.0258 SA SNEN 0.015 0.0107 0.0219 0. 0105 0.013 = =0.0112 0.0105 0.0177 0.019 d 0.0146 a 0.0169 0.015
Figure App.8: Samples from 3/5 with different random seeds.
0.0259 0.016 0.0288 0.0464 0.0261 0.0269 0.0239 0.0366 0.0248 0.0205 0.0178 ee oe a ng Ry 0.0336 0.0228 0.0557 0.0322 0.0304 0.0282 | 1611.02163#68 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 69 | .02 .03 .02 .02 .02 .02 07 .05 .02 . {126} {125} {125} {124} (122) {117} (125) {83} {99} {110} -01 .02 .01 .01 .02 .02 .02 .02 .00 . 02. {175} {168} {170} {172} {159} {165} {171} (168) {136} -00 .02 .00 .02 .03 .03 .03 .02 .01 ol) 02. {152} {149} (148) {145} {145} {150} {142} {147} {119) -01 .03 .01 .02 .03 .02 .03 .01 .00 -03 .08 .08 {142} {132} {135} {137} {135} {133} (130) {138} {102} {107} -01 .02 .01 .02 .03 .02 .03 .02 .02 -04 .11 .08 .02 {426} {409} (407) {408} (401) {403} (397) (413) 4259) (284) (289) -00 .03 .01 .01 .02 .01 .02 .02 .00 -05 .11 .09 .02 . 22) (118) (120) 21) (116 | 1611.01989#69 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.02163 | 69 | Data O step 0.024 0.0244 0.0212 Faas 0.0361 0.0289 ââ 0.0219 0.0122 Ha ° re] N ~ roo) 0.0314 0.0217 Pte 0.0142 0.0084 »S 0.0294 0.0163 0.0362 0.0494 0.0277 Be 0.0375 0.0323 0.0247 0.0206
Figure App.9: Samples from 4/5 with different random seeds.
0.0128 0. 0058 0. 0065 0.0392 0. ~s] 0.0218 0.0177 0.0402 0.0308 0.0286 0.0184 ms Me _c! 0.0119 0. 0077 0.0402 0.0299 0.0233 0.0188 te fe oe 0.026 0.0144 0.0165 0.0122 0.0097 eae 0061 0.005 0046 Bl 0.0105 0.0051 0.005 LAA omit 0.0236 0.0256 0.0158 | 1611.02163#69 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by
defining the generator objective with respect to an unrolled optimization of
the discriminator. This allows training to be adjusted between using the
optimal discriminator in the generator's objective, which is ideal but
infeasible in practice, and using the current value of the discriminator, which
is often unstable and leads to poor solutions. We show how this technique
solves the common problem of mode collapse, stabilizes training of GANs with
complex recurrent generators, and increases diversity and coverage of the data
distribution by the generator. | http://arxiv.org/pdf/1611.02163 | Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein | cs.LG, stat.ML | null | null | cs.LG | 20161107 | 20170512 | [
{
"id": "1511.06350"
},
{
"id": "1609.03126"
},
{
"id": "1605.09304"
},
{
"id": "1610.09585"
},
{
"id": "1601.06759"
},
{
"id": "1612.02780"
},
{
"id": "1606.03498"
},
{
"id": "1606.05328"
},
{
"id": "1606.00704"
},
{
"id": "1503.03167"
},
{
"id": "1606.00709"
},
{
"id": "1511.06434"
},
{
"id": "1503.05571"
},
{
"id": "1606.03657"
},
{
"id": "1506.06579"
},
{
"id": "1605.08803"
},
{
"id": "1606.04474"
},
{
"id": "1603.01768"
},
{
"id": "1506.03877"
},
{
"id": "1509.00519"
},
{
"id": "1603.08155"
}
] |
1611.01989 | 70 | -00 .03 .01 .01 .02 .01 .02 .02 .00 -05 .11 .09 .02 . 22) (118) (120) 21) (116 (119) (122) (119) {69} (64) (99) .01 .04 .07 .05 .01 .04 .03 .01 .06 :02 .09 .06 .00 . 3) (32) (32) G1) (33) G0) (32) G2 (26) (26) {26} 01 .09 .02 .02 .01 .02 .04 .06 -05 .01 .02 .03 | -08 {40} {38} (38) {38} {38} {36} {38} {38} {33} {32} {29} (38) 0 .02 .03 .05 .03 .03 .04 .03 .02 .10 .06 .09 .02 . .02 fal {27} {26} (26) (26) (26) {26} {27} {27 {20} {22} {26} -01 .01 .07 .01 .04 .00 .00 .01 .00 -02 .06 .04 z -00 .02 .01 (21) {19} (21) (21) (20) (21) (21) (2 {15} {16} | 1611.01989#70 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.01989 | 71 | .06 .04 z -00 .02 .01 (21) {19} (21) (21) (20) (21) (21) (2 {15} {16} {16} {21} {19} {21} -00 .01 .04 .01 .01 .00 .04 .01 .01 .08 .03 .09 .02 .03 .06 .03 .01 .07 .03 .02 {39} {34} (38) {39} {38} {39} {36} {39} {35} {20} {28} {37} {36} {36} {38} .02 .02 .00 .02 .04 .03 .02 .01 .01 .04 .07 -01 .05 .08 .03 .02 .05 .01 .07 {28} {25} {23} (26) (28) {28} {27} (27 (28) (21) (28) {27} (28) {25} -00 .01 .00 .00 .01 .01 .00 .00 .03 .00 -02 .01 03 .00 . -07 .02 .01 .00 (24) {21} {22} {23} {23} {24} {22} {24} {19} {19} (24) {23} {23} {23} 0 .00 .00 | 1611.01989#71 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.01989 | 72 | {23} {23} {24} {22} {24} {19} {19} (24) {23} {23} {23} 0 .00 .00 .00 .00 .00 .00 .00 -02 .02 .02 .08 .02 .06 .01 .06 .02 .04 {38} {38} {34} {36} {37} {36} {36} {38} {32} {28} {38} {35} (36) (36) 0 .02 .00 .00 .01 .00 .00 .04 .00 -00 .11 05 .06 .06 .04 .05 .02 .02 (34) {34} (33) {32} {34} (34) {33} {34} {27} (25) {35} (34) (34) {35} -00 .00 .0O0 .04 .01 .01 .00 .01 .01 -03 .08 .07 .00 . -03 .02 .04 .03 | {32} {29} {31} {30} {32} {31} (32) (30 (22) (22) (31) {32} (31) (32) -00 .00 .01 .07 .03 .02 .01 .04 .01 07 .04 .11 .04 .06 . -01 .01 .06 | 1611.01989#72 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.01989 | 74 | Figure 9: Conditional confusion matrix for the neural network and test set of P = 500 programs of length T = 3 that were used to obtain the results presented in Table 1. Each cell contains the average false positive probability (in larger font) and the number of test programs from which this average was computed (smaller font, in brackets). The color intensity of each cell's shading corresponds to the magnitude of the average false positive probability.
# G ANALYSIS OF TRAINED NEURAL NETWORKS
We analyzed the performance of trained neural networks by investigating which program instructions tend to get confused by the networks. To this end, we looked at a generalization of confusion matrices to the multilabel classification setting: for each attribute in a ground truth program (rows) we measure how likely each other attribute (columns) is predicted as a false positive. More formally, in this matrix the (i, j)-entry is the average predicted probability of attribute j among test programs that do possess attribute i and do not possess attribute j. Intuitively, the i-th row of this matrix shows how the presence of attribute i confuses the network into incorrectly predicting each other attribute j. | 1611.01989#74 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
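A hedged NumPy sketch of the conditional confusion matrix defined in row 1611.01989#74 above — my own formulation of the stated (i, j)-entry, not the authors' evaluation code:

import numpy as np

def conditional_confusion(y_true, y_prob):
    """y_true: (N, C) binary attribute indicators; y_prob: (N, C) predicted probabilities.
    Entry (i, j): mean predicted probability of attribute j over programs that
    have attribute i but do not have attribute j."""
    N, C = y_true.shape
    M = np.full((C, C), np.nan)
    for i in range(C):
        for j in range(C):
            mask = (y_true[:, i] == 1) & (y_true[:, j] == 0)
            if mask.any():
                M[i, j] = y_prob[mask, j].mean()
    return M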
1611.01989 | 75 | Figure 9 shows this conditional confusion matrix for the neural network and P = 500 program test set configuration used to obtain Table 1. We re-ordered the confusion matrix to try to expose block structure in the false positive probabilities, revealing groups of instructions that tend to be difficult to distinguish. Figure 10 shows the conditional confusion matrix for the neural network used to obtain the table in Fig. 3a. While the results are somewhat noisy, we observe a few general tendencies:
⢠There is increased confusion amongst instructions that select out a single element from an array: HEAD, LAST, ACCESS, MINIMUM, MAXIMUM.
⢠Some common attributes get predicted more often regardless of the ground truth program: FILTER, (>0), (<0), (%2==1), (%2==0), MIN, MAX, (+), (-), ZIPWITH.
⢠There are some groups of lambdas that are more difï¬cult for the network to distinguish within: (+) vs (-); (+1) vs (-1); (/2) vs (/3) vs (/4).
⢠When a program uses (**2), the network often thinks itâs using (*), presumably because both can lead to large values in the output.
Published as a conference paper at ICLR 2017 | 1611.01989#75 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.01989 | 76 | = = as wW o @2ZivaG tie... ESL Ee . SoG FS ee ARIES S EES, , SSPE EPITSF ee e5 HEAD al 09 04 .09 .07 .06 .09 07 0 06 .0. 0! LAST ACCESS | = MINIMUM MAXIMUM TAKE DROP}: FILTER |: ie, ES (>0) d J d d d d ue 5 A u de d 2 (<0) |-05 .07 .08 .07 .06 09 . ! . 03. ue 10. (%2==1) (%2==0) COUNT |: MAP|: . ion fa MIN |: d J d d c d c d d G G d d 6 08 â05 d MAX | - J J d d d d . d c d d d , ae pd d +]: d J d d d J d dl C d d 2 2 a7 04 d - ws . ZIPWITH | -05 .08 .04 .04 .04 .06 . 08 .09 .09 .09 .05 .06 11 . We SCANL1 ]:°4 -09 .95 .05 .05 .09 .07 .15 oie 07 (08) eee oS pe SORT |:05 .09 .06 .02 .03 .03 . 04. =) REVERSE (*-1) | 1611.01989#76 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.01989 | 77 | .15 oie 07 (08) eee oS pe SORT |:05 .09 .06 .02 .03 .03 . 04. =) REVERSE (*-1) (**2) (+1) (#2) |-03 .06 .04 .02 .05 05 . 04 11 .05 .06 . ail & ae ie 3 (#3) | 04 .08 .04 . 05 .05 .10 04 .09 12 .07 . .09 .10 .04 a ie ae x (*4) (/2) (/3) d d d d d d J d d d G d d d of A A d d d d d d A ped (/4) d , d d d A d d d dl d 2 â08 c 4. cs d d dl d , d d d d A â08 SUM |° d 14 d d o9 . 6 .14 14 d J d d J d J .02 .05 .02 (4) (6) 3) (6) 1) (5) (6) 15) (5) | 1611.01989#77 | DeepCoder: Learning to Write Programs | We develop a first line of attack for solving programming competition-style
problems from input-output examples using deep learning. The approach is to
train a neural network to predict properties of the program that generated the
outputs from the inputs. We use the neural network's predictions to augment
search techniques from the programming languages community, including
enumerative search and an SMT-based solver. Empirically, we show that our
approach leads to an order of magnitude speedup over the strong non-augmented
baselines and a Recurrent Neural Network approach, and that we are able to
solve problems of difficulty comparable to the simplest problems on programming
competition websites. | http://arxiv.org/pdf/1611.01989 | Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow | cs.LG | Submitted to ICLR 2017 | null | cs.LG | 20161107 | 20170308 | [] |
1611.01796 | 1 | # Jacob Andreas 1 Dan Klein 1 Sergey Levine 1
# Abstract
We describe a framework for multitask deep re- inforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement themâspeciï¬cally not providing the detailed guidance used by much previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion sig- nals, or intrinsic motivations). To learn from sketches, we present a model that associates ev- ery subtask with a modular subpolicy, and jointly maximizes reward over full task-speciï¬c poli- cies by tying parameters across shared subpoli- cies. Optimization is accomplished via a decou- pled actorâcritic training objective that facilitates learning common behaviors from multiple dis- similar reward functions. We evaluate the effec- tiveness of our approach in three environments featuring both discrete and continuous control, and with sparse rewards that can be obtained only after completing a number of high-level sub- goals. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learn- ing task-speciï¬c or shared policies, while nat- urally inducing a library of interpretable primi- tive behaviors that can be recombined to rapidly adapt to new tasks.
# 1. Introduction | 1611.01796#1 | Modular Multitask Reinforcement Learning with Policy Sketches | We describe a framework for multitask deep reinforcement learning guided by
policy sketches. Sketches annotate tasks with sequences of named subtasks,
providing information about high-level structural relationships among tasks but
not how to implement them---specifically not providing the detailed guidance
used by much previous work on learning policy abstractions for RL (e.g.
intermediate rewards, subtask completion signals, or intrinsic motivations). To
learn from sketches, we present a model that associates every subtask with a
modular subpolicy, and jointly maximizes reward over full task-specific
policies by tying parameters across shared subpolicies. Optimization is
accomplished via a decoupled actor--critic training objective that facilitates
learning common behaviors from multiple dissimilar reward functions. We
evaluate the effectiveness of our approach in three environments featuring both
discrete and continuous control, and with sparse rewards that can be obtained
only after completing a number of high-level subgoals. Experiments show that
using our approach to learn policies guided by sketches gives better
performance than existing techniques for learning task-specific or shared
policies, while naturally inducing a library of interpretable primitive
behaviors that can be recombined to rapidly adapt to new tasks. | http://arxiv.org/pdf/1611.01796 | Jacob Andreas, Dan Klein, Sergey Levine | cs.LG, cs.NE | To appear at ICML 2017 | null | cs.LG | 20161106 | 20170617 | [
{
"id": "1606.04695"
},
{
"id": "1609.07088"
},
{
"id": "1506.02438"
},
{
"id": "1511.04834"
},
{
"id": "1604.06057"
}
] |
1611.01796 | 2 | # 1. Introduction
(Figure 1 artwork: tasks Π1 "make planks" and Π2 "make sticks" share the initial subpolicy b1 "get wood" before b2 "use workbench" / b3 "use toolshed"; diagram not reproducible in text.)
Figure 1: Learning from policy sketches. The figure shows simplified versions of two tasks (make planks and make sticks), each associated with its own policy (Π1 and Π2 respectively). These policies share an initial high-level action b1: both require the agent to get wood before taking it to an appropriate crafting station. Even without prior information about how the associated behavior π1 should be implemented, knowing that the agent should initially follow the same subpolicy in both tasks is enough to learn a reusable representation of their shared structure. | 1611.01796#2 | Modular Multitask Reinforcement Learning with Policy Sketches | We describe a framework for multitask deep reinforcement learning guided by
policy sketches. Sketches annotate tasks with sequences of named subtasks,
providing information about high-level structural relationships among tasks but
not how to implement them---specifically not providing the detailed guidance
used by much previous work on learning policy abstractions for RL (e.g.
intermediate rewards, subtask completion signals, or intrinsic motivations). To
learn from sketches, we present a model that associates every subtask with a
modular subpolicy, and jointly maximizes reward over full task-specific
policies by tying parameters across shared subpolicies. Optimization is
accomplished via a decoupled actor--critic training objective that facilitates
learning common behaviors from multiple dissimilar reward functions. We
evaluate the effectiveness of our approach in three environments featuring both
discrete and continuous control, and with sparse rewards that can be obtained
only after completing a number of high-level subgoals. Experiments show that
using our approach to learn policies guided by sketches gives better
performance than existing techniques for learning task-specific or shared
policies, while naturally inducing a library of interpretable primitive
behaviors that can be recombined to rapidly adapt to new tasks. | http://arxiv.org/pdf/1611.01796 | Jacob Andreas, Dan Klein, Sergey Levine | cs.LG, cs.NE | To appear at ICML 2017 | null | cs.LG | 20161106 | 20170617 | [
{
"id": "1606.04695"
},
{
"id": "1609.07088"
},
{
"id": "1506.02438"
},
{
"id": "1511.04834"
},
{
"id": "1604.06057"
}
] |