doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable) | journal_ref (string, len 8–194, nullable) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1707.03743 | 39 | promising for a modular-based bot as it could optimize the macromanagement policy to fit the fixed micromanagement policy. Additionally, learning a macromanagement policy to specifically beat other bots that are competing in a tournament is a promising future direction.
This paper also introduces a new benchmark for machine learning, where the goal is to predict the next unit, building, technology or upgrade that is produced by a human player given a game state in StarCraft. An interesting extension to the presented approach, which could potentially improve the results, could involve including positional information as features for the neural network. The features could be graphical and similar to the minimap in the game that gives an abstract overview of where units and buildings are located on the map. Regularization techniques such as dropout [24] or L2 regularization [18] could perhaps reduce the error rate of deeper networks and ultimately improve the playing bot.
Finally, it would be interesting to apply our trained network to a more sophisticated StarCraft bot that is able to manage several bases well and can control advanced units such as spell casters and shuttles. This is currently among our future goals, and hopefully this bot will participate in the coming StarCraft competitions. | 1707.03743#39 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
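The chunk above (1707.03743#39) frames macromanagement as a supervised benchmark: given a game-state feature vector, predict the next unit, building, technology, or upgrade, possibly regularized with dropout or L2. The sketch below is a minimal, hypothetical PyTorch rendering of such a classifier; the feature dimension, layer sizes, and action count are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

STATE_DIM = 210    # assumed length of the game-state feature vector
NUM_ACTIONS = 64   # assumed number of build actions (units, buildings, tech, upgrades)

class BuildOrderNet(nn.Module):
    """Maps a state vector to logits over the next build action."""
    def __init__(self, state_dim=STATE_DIM, num_actions=NUM_ACTIONS, p_drop=0.5):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(256, num_actions),  # logits; softmax is applied by the loss
        )

    def forward(self, state):
        return self.layers(state)

model = BuildOrderNet()
# Weight decay is one common way to realize the L2 regularization mentioned above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
loss_fn = nn.CrossEntropyLoss()
```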
1707.03904 | 39 | Method Optimal Context Search Acc test val Reading Acc test val Overall Acc test val Human Performance Expert (CB) Non-Expert (OB) Language models 3-gram 4-gram 5-gram BiRNNâ Short-documents WD MF-e MF-i GAâ Long-documents WD MF-e MF-i GAâ â â â â â â 10 60 90 70 10 15 15 15 â â â â â â 0.40 0.64 0.67 0.65 0.66 0.69 0.69 0.67 â â â â â â 0.43 0.64 0.68 0.65 0.66 0.69 0.69 0.67 â â â â â â 0.250 0.209 0.237 0.486 0.124 0.185 0.230 0.474 â â â â â â 0.249 0.212 0.234 0.483 0.142 0.197 0.231 0.479 0.468 0.500 0.148 0.161 0.165 0.345 0.100 0.134 0.159 0.315 0.082 0.128 0.159 0.318 â â 0.153 0.171 0.174 0.336 0.107 0.136 0.159 0.316 0.093 0.136 0.159 0.321 | 1707.03904#39 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 40 | Despite the presented approach not achieving a skill level on par with humans, it should be fairly straightforward to extend it further with reinforcement learning. Supervised learning on replays can be applied to pre-train networks, ensuring that the initial exploration during reinforcement learning is sensible, which proved to be a critical step to surpass humans in the game Go [23]. Reinforcement learning is especially
# VII. CONCLUSION
This paper presented an approach that learns from StarCraft replays to predict the next build produced by human players. 789,571 state-action pairs were extracted from 2,005 replays of highly skilled players. We trained a neural network with supervised learning on this dataset, with the best network achieving top-1 and top-3 error rates of 54.6% and 22.9%. To | 1707.03743#40 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
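Chunk 1707.03743#40 above suggests supervised pre-training on replay state-action pairs as an initialization for later reinforcement learning. A minimal training loop for that pre-training step could look as follows; the network, batch size, and the randomly generated tensors standing in for the extracted pairs are all placeholders, and the original work's framework is not specified here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

STATE_DIM, NUM_ACTIONS = 210, 64             # illustrative sizes
policy = nn.Sequential(                      # stand-in for the build-order network
    nn.Linear(STATE_DIM, 256), nn.ReLU(), nn.Linear(256, NUM_ACTIONS))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Placeholder replay data; in practice these are state-action pairs from replays.
states = torch.randn(1024, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (1024,))
loader = DataLoader(TensorDataset(states, actions), batch_size=256, shuffle=True)

for epoch in range(5):
    for batch_states, batch_actions in loader:
        loss = loss_fn(policy(batch_states), batch_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# The pre-trained weights can then initialize a policy that is fine-tuned with
# reinforcement learning, so early exploration already resembles human builds.
```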
1707.03743 | 41 | demonstrate the usefulness of this approach, the open source StarCraft bot UAlbertaBot was extended to use such a neural network as a production manager, thereby allowing the bot to produce builds based on the network's predictions. Two action selection strategies were introduced: a greedy approach that always selects the action with the highest probability, and a probabilistic approach that selects actions corresponding to the probabilities of the network's softmax output. The probabilistic strategy proved to be the most successful and managed to achieve a win rate of 68% against the game's built-in Terran bot. Additionally, we demonstrated that the presented approach was able to play competitively against UAlbertaBot with a fixed rush strategy. Future research will show whether reinforcement learning can improve these results further, which could narrow the gap between humans and computers in StarCraft.
REFERENCES [1] J. Blackford and G. B. Lamont. The real-time strategy game
multi-objective build order problem. In AIIDE, 2014.
[2] M. Bogdanovic, D. Markovikj, M. Denil, and N. de Freitas. Deep Apprenticeship Learning for Playing Video Games. PhD thesis, Citeseer, 2014. | 1707.03743#41 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
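Chunk 1707.03743#41 above contrasts a greedy strategy (always take the most probable action) with a probabilistic strategy (sample actions in proportion to the softmax output). A short, hypothetical sketch of the two selection rules, using placeholder logits:

```python
import torch

def select_action(logits: torch.Tensor, greedy: bool = False) -> int:
    """Pick a build action from network logits.

    greedy=True  -> arg-max of the softmax (greedy strategy)
    greedy=False -> sample proportionally to the softmax (probabilistic strategy)
    """
    probs = torch.softmax(logits, dim=-1)
    if greedy:
        return int(torch.argmax(probs))
    return int(torch.multinomial(probs, num_samples=1))

logits = torch.tensor([2.0, 0.5, 1.0])       # dummy logits for a 3-action toy case
print(select_action(logits, greedy=True))    # always 0
print(select_action(logits, greedy=False))   # usually 0, sometimes 1 or 2
```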
1707.03904 | 41 | Method Optimal Context Search Acc val test val exact Reading Acc test val f1 test val exact Overall Acc test val Human Performance Expert (CB) Non-Expert (OB) Short-documents MF-i WD SW+D SW MF-e GAâ BiDAFâ ** Long-documents WD SW SW+D MF-i MF-e BiDAFâ ** GAâ ** â â 10 20 20 10 70 70 10 20 20 5 20 20 1 10 â â 0.35 0.40 0.64 0.56 0.45 0.44 0.57 0.43 0.74 0.58 0.44 0.43 0.47 0.44 â â 0.34 0.39 0.63 0.53 0.45 0.44 0.54 0.44 0.73 0.58 0.45 0.44 0.468 0.44 â â 0.053 0.104 0.112 0.216 0.372 0.580 0.454 0.084 0.041 0.064 0.185 0.273 0.370 0.551 â â 0.044 0.082 0.113 0.205 0.342 0.600 0.476 0.067 0.034 0.055 0.187 0.286 0.395 0.556 â â 0.053 0.104 0.157 0.299 0.372 | 1707.03904#41 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 42 | [3] Z. Chen and D. Yi. The game imitation: Deep supervised convolutional networks for quick video game ai. arXiv preprint arXiv:1702.05663, 2017.
[4] H.-C. Cho, K.-J. Kim, and S.-B. Cho. Replay-based strategy prediction and build order adaptation for starcraft ai bots. In Computational Intelligence in Games (CIG), 2013 IEEE Conference on, pages 1–7. IEEE, 2013.
[5] D. Churchill and M. Buro. Build order optimization in starcraft. In AIIDE, pages 14–19, 2011.
[6] D. Churchill, M. Preuss, F. Richoux, G. Synnaeve, A. Uriarte, S. Ontañón, and M. Čertický. Starcraft bots and competitions. 2016.
[7] E. W. Dereszynski, J. Hostetler, A. Fern, T. G. Dietterich, T.-T. Hoang, and M. Udarbe. Learning probabilistic behavior models in real-time strategy games. In AIIDE, 2011. | 1707.03743#42 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 42 | 0.067 0.034 0.055 0.187 0.286 0.395 0.556 â â 0.053 0.104 0.157 0.299 0.372 0.580 0.509 0.084 0.056 0.094 0.185 0.273 0.425 0.551 â â 0.044 0.082 0.155 0.271 0.342 0.600 0.524 0.067 0.050 0.088 0.187 0.286 0.445 0.556 0.547 0.515 0.019 0.042 0.072 0.120 0.167 0.256 0.257 0.037 0.030 0.037 0.082 0.119 0.17 0.245 â â 0.015 0.032 0.071 0.109 0.153 0.264 0.259 0.029 0.025 0.032 0.084 0.126 0.185 0.244 0.604 0.606 0.019 0.042 0.101 0.159 0.167 0.256 0.289 0.037 0.041 0.054 0.082 0.119 0.199 0.245 f1 test â â 0.015 0.032 0.097 0.144 0.153 0.264 0.285 0.029 | 1707.03904#42 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 43 | [8] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
[9] J.-L. Hsieh and C.-T. Sun. Building a player strategy model by analyzing replays of real-time strategy games. In Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, pages 3106–3111. IEEE, 2008.
[10] N. Justesen and S. Risi. Continual online evolution for in-game build order adaptation in starcraft. In The Genetic and Evolutionary Computation Conference (GECCO), 2017.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. | 1707.03743#43 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03743 | 44 | [12] H. Köstler and B. Gmeiner. A multi-objective genetic algorithm for build order optimization in starcraft ii. KI-Künstliche Intelligenz, 27(3):221–233, 2013.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [14] M. Kuchem, M. Preuss, and G. Rudolph. Multi-objective assessment of pre-optimized build orders exemplified for starcraft 2. In Computational Intelligence in Games (CIG), 2013 IEEE Conference on, pages 1–8. IEEE, 2013.
[15] G. Lample and D. S. Chaplot. Playing fps games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016. | 1707.03743#44 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 44 | Table 3: Performance comparison on QUASAR-T. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with †. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on validation set. **We were unable to run BiDAF with more than 10 short-documents / 1 long-documents, and GA with more than 10 long-documents due to memory errors.
2017. Gated-attention readers for text comprehension. ACL .
Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179 .
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693–1701.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children's books with explicit memory representations. ICLR . | 1707.03904#44 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 45 | [15] G. Lample and D. S. Chaplot. Playing fps games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[17] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016. | 1707.03743#45 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 45 | Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. ACL .
Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. EMNLP .
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. NIPS .
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. EMNLP .
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. EMNLP .
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. volume 3, page 4. | 1707.03904#45 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 46 | [18] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural computation, 4(4):473–493, 1992. [19] S. Ontañón, K. Mishra, N. Sugandh, and A. Ram. Case-based planning and execution for real-time strategy games. In International Conference on Case-Based Reasoning, pages 164–178. Springer, 2007.
[20] S. Ontañón, G. Synnaeve, A. Uriarte, F. Richoux, D. Churchill, and M. Preuss. A survey of real-time strategy game ai research and competition in starcraft. IEEE Transactions on Computational Intelligence and AI in games, 5(4):293–311, 2013. [21] S. Risi and J. Togelius. Neuroevolution in games: State of the art and open challenges. IEEE Transactions on Computational Intelligence and AI in Games, 2015. | 1707.03743#46 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 46 | Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. ICLR .
Andreas Stolcke et al. 2002. Srilm-an extensible language modeling toolkit. In Interspeech. volume 2002, page 2002.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830 .
Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 200–207.
Yusuke Watanabe, Bhuwan Dhingra, and Ruslan Salakhutdinov. 2017. Question answering from unstructured text by retrieval and comprehension. arXiv preprint arXiv:1703.08885 . | 1707.03904#46 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 47 | [22] G. Robertson and I. D. Watson. An improved dataset and extraction process for starcraft ai. In FLAIRS Conference, 2014. [23] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[24] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[25] M. Stanescu, N. A. Barriga, A. Hess, and M. Buro. Evaluating real-time strategy game states using convolutional neural networks. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pages 1–7. IEEE, 2016. | 1707.03743#47 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 47 | Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Fastqa: A simple and efficient neural architecture for question answering. arXiv preprint arXiv:1703.04816 .
Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge base completion via search-based question answering. In Proceedings of the 23rd international conference on World wide web. ACM, pages 515–526.
Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In EMNLP. Citeseer, pages 2013–2018.
# A QUASAR-S Relation Definitions
Table 4 includes the definition of all the annotated relations for QUASAR-S.
# B Performance Analysis | 1707.03904#47 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03743 | 48 | [26] G. Synnaeve and P. Bessiere. A bayesian model for plan recognition in rts games applied to starcraft. arXiv preprint arXiv:1111.3735, 2011.
[27] G. Synnaeve and P. Bessiere. A dataset for starcraft ai & an example of armies clustering. arXiv preprint arXiv:1211.4552, 2012.
[28] A. Uriarte and S. Ontañón. Automatic learning of combat models for rts games. In Eleventh Artificial Intelligence and Interactive Digital Entertainment Conference, 2015.
[29] N. Usunier, G. Synnaeve, Z. Lin, and S. Chintala. Episodic exploration for deep deterministic policies: An application to starcraft micromanagement tasks. arXiv preprint arXiv:1609.02993, 2016.
[30] B. G. Weber and M. Mateas. A data mining approach to strategy prediction. In Computational Intelligence and Games, 2009. CIG 2009. IEEE Symposium on, pages 140–147. IEEE, 2009. | 1707.03743#48 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 | [
{
"id": "1609.02993"
},
{
"id": "1702.05663"
},
{
"id": "1609.05521"
}
] |
1707.03904 | 48 | Table 4 includes the definition of all the annotated relations for QUASAR-S.
# B Performance Analysis
Figure 5 shows a comparison of the human performance with the best performing baseline for each category of annotated questions. We see consistent differences between the two, except in the following cases. For QUASAR-S, Bi-RNN performs comparably to humans for the developed-with and runs-on categories, but much worse in the has-component and is-a categories. For QUASAR-T, BiDAF performs comparably to humans in the sports category, but much worse in history & religion and language, or when the answer type is a number or date/time.
Relation (head → answer) | Definition
is-a | head is a type of answer
component-of | head is a component of answer
has-component | answer is a component of head
developed-with | head was developed using the answer
extends | head is a plugin or library providing additional functionality to larger thing answer
runs-on | answer is an operating system, platform, or framework on which head runs
synonym | head and answer are the same entity
used-for | head is a software / framework used for some functionality related to answer
| 1707.03904#48 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03904 | 49 | Table 4: Description of the annotated relations between the head entity, from whose definition the cloze is constructed, and the answer entity which fills in the cloze. These are the same as the descriptions shown to the annotators.
(a) QUASAR-S relations (b) QUASAR-T genres (c) QUASAR-T answer categories
Figure 5: Performance comparison of humans and the best performing baseline across the categories annotated for the development set. | 1707.03904#49 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 | [
{
"id": "1703.04816"
},
{
"id": "1704.05179"
},
{
"id": "1611.09830"
},
{
"id": "1703.08885"
}
] |
1707.03017 | 0 | arXiv:1707.03017v5 [cs.CV] 18 Dec 2017
# Learning Visual Reasoning Without Strong Priors
Ethan Perez12, Harm de Vries1, Florian Strub3, Vincent Dumoulin1, Aaron Courville14
1MILA, Université of Montréal, Canada; 2Rice University, U.S.A. 3Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189 CRIStAL France 4CIFAR Fellow, Canada [email protected], [email protected], [email protected] [email protected], [email protected]
# Abstract | 1707.03017#0 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 1 | # Abstract
Achieving artificial visual reasoning - the ability to answer image-related questions which require a multi-step, high-level process - is an important step towards artificial general intelligence. This multi-modal task requires learning a question-dependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively. Index Terms: Deep Learning, Language and Vision
Note: A full paper extending this study is available at http://arxiv.org/abs/1709.07871, with additional references, experiments, and analysis. | 1707.03017#1 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 2 | (a) What number of cylinders are small purple things or yellow rubber things? Predicted: 2 (b) What color is the other object that is the same shape as the large brown matte thing? Predicted: Brown
Figure 1: Examples from CLEVR and our model's answer.
this, recent efforts have built new learning architectures that explicitly model reasoning or relational associations [10, 11, 13], some of which even outperform humans [10, 11].
In this paper, we show that a general model can achieve strong visual reasoning from language. We use Conditional Batch Normalization [14, 15, 16] with a Recurrent Neural Network (RNN) and a Convolutional Neural Network (CNN) to show that deep learning architectures built without strong priors can learn underlying structure behind visual reasoning, directly from language and images. We demonstrate this by achieving state-of-the-art visual reasoning on CLEVR and finding structured patterns while exploring the internals of our model.
# 1. Introduction | 1707.03017#2 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
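The chunk above (1707.03017#2) describes combining an RNN question encoder with a CNN through Conditional Batch Normalization, so that the question controls the per-channel scale and shift applied to normalized image features. The following is a rough, hypothetical PyTorch sketch of that conditioning mechanism, not the authors' implementation; the GRU, embedding, channel counts, and toy inputs are assumptions.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Batch-normalizes feature maps, then scales and shifts each channel with
    gamma/beta predicted from a conditioning vector (e.g., an RNN question encoding)."""
    def __init__(self, num_channels: int, cond_dim: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)  # normalization only
        self.to_gamma = nn.Linear(cond_dim, num_channels)
        self.to_beta = nn.Linear(cond_dim, num_channels)

    def forward(self, feature_maps, condition):
        normalized = self.bn(feature_maps)                            # (B, C, H, W)
        gamma = self.to_gamma(condition).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_beta(condition).unsqueeze(-1).unsqueeze(-1)
        return gamma * normalized + beta

# Toy usage: a GRU encodes the question; its final state conditions the CBN layer.
question_tokens = torch.randint(0, 100, (4, 12))      # (batch, words), dummy ids
embed = nn.Embedding(100, 32)
gru = nn.GRU(32, 64, batch_first=True)
_, h_n = gru(embed(question_tokens))                  # h_n: (1, batch, 64)
cbn = ConditionalBatchNorm2d(num_channels=16, cond_dim=64)
image_features = torch.randn(4, 16, 8, 8)             # dummy CNN feature maps
modulated = cbn(image_features, h_n.squeeze(0))       # (4, 16, 8, 8)
```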
1707.03017 | 3 | # 1. Introduction
The ability to use language to reason about every-day visual input is a fundamental building block of human intelligence. Achieving this capacity to visually reason is thus a meaningful step towards artificial agents that truly understand the world. Advances in both image-based learning and language-based learning using deep neural networks have made huge strides in difficult tasks such as object recognition [1, 2] and machine translation [3, 4]. These advances have in turn fueled research on the intersection of visual and linguistic learning [5, 6, 7, 8, 9]. To this end, [9] recently proposed the CLEVR dataset to test multi-step reasoning from language about images, as traditional visual question-answering datasets such as [5, 7] ask simpler questions on images that can often be answered in a single glance. Examples from CLEVR are shown in Figure 1. Structured, multi-step reasoning is quite difficult for standard deep learning approaches [10, 11], including those successful on traditional visual question answering datasets. Previous work highlights that standard deep learning approaches tend to exploit biases in the data rather than reason [9, 12]. To overcome
# 2. Method | 1707.03017#3 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 4 | # 2. Method
Our model processes the multi-modal question-image input using an RNN and CNN combined via Conditional Batch Normalization (CBN). CBN has proven highly effective for image stylization [14, 16], speech recognition [17], and traditional visual question answering tasks [15]. We start by explaining CBN in Section 2.1 and then describe our model in Section 2.2.
# 2.1. Conditional batch normalization
Batch normalization (BN) is a widely used technique to improve neural network training by normalizing activations throughout the network with respect to each mini-batch. BN has been shown to accelerate training and improve generalization by reducing covariate shift throughout the network [18]. To explain BN, we define $\mathcal{B} = \{F_{i,\cdot,\cdot,\cdot}\}_{i=1}^{N}$ as a mini-batch of $N$ samples, where $F$ corresponds to input feature maps whose subscripts $c, h, w$ refer to the $c$-th feature map at the spatial location $(h, w)$. We also define $\gamma_c$ and $\beta_c$ as per-channel, trainable
Figure 2: The linguistic pipeline (left), visual pipeline (middle), and CBN residual block architecture (right) of our model. | 1707.03017#4 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 5 | Figure 2: The linguistic pipeline (left), visual pipeline (middle), and CBN residual block architecture (right) of our model.
scalars and $\epsilon$ as a constant damping factor for numerical stability. BN is defined at training time as follows:
$$\mathrm{BN}(F_{i,c,h,w} \mid \gamma_c, \beta_c) = \gamma_c \, \frac{F_{i,c,h,w} - \mathbb{E}_{\mathcal{B}}[F_{\cdot,c,\cdot,\cdot}]}{\sqrt{\mathrm{Var}_{\mathcal{B}}[F_{\cdot,c,\cdot,\cdot}] + \epsilon}} + \beta_c. \quad (1)$$
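For reference, Eq. (1) can be written out directly in NumPy. This is a minimal sketch for illustration only: the (N, C, H, W) tensor layout, the toy sizes, and the random inputs below are assumptions, not part of the paper's code.

```python
# Per-channel batch normalization as in Eq. (1): normalize over the mini-batch,
# then apply the trainable scale and shift. Toy shapes for illustration.
import numpy as np

def batch_norm(F, gamma, beta, eps=1e-5):
    mean = F.mean(axis=(0, 2, 3), keepdims=True)   # E_B[F_{.,c,.,.}]
    var = F.var(axis=(0, 2, 3), keepdims=True)     # Var_B[F_{.,c,.,.}]
    F_hat = (F - mean) / np.sqrt(var + eps)
    return gamma[None, :, None, None] * F_hat + beta[None, :, None, None]

F = np.random.randn(8, 16, 14, 14)                 # N=8 samples, C=16 channels
out = batch_norm(F, gamma=np.ones(16), beta=np.zeros(16))
print(out.mean(), out.std())                       # roughly 0 and 1 by construction
```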
Conditional Batch Normalization (CBN) [14, 15, 16] instead learns to output new BN parameters $\hat{\gamma}_{i,c}$ and $\hat{\beta}_{i,c}$ as a function of some input $x_i$:
$$\hat{\gamma}_{i,c} = f_c(x_i), \qquad \hat{\beta}_{i,c} = h_c(x_i), \quad (2)$$
where $f$ and $h$ are arbitrary functions such as neural networks. Thus, $f$ and $h$ can learn to control the distribution of CNN activations based on $x_i$.
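As a concrete illustration (not the authors' implementation), Eq. (2) can be realized with a single linear projection from the conditioning input to the per-channel scale and shift; the channel size, conditioning size, and random tensors below are arbitrary placeholders.

```python
# Minimal CBN sketch: predict (gamma_hat, beta_hat) from a conditioning vector
# and apply them after an affine-free batch normalization.
import torch
import torch.nn as nn

channels, cond_dim = 16, 32
predictor = nn.Linear(cond_dim, 2 * channels)   # realizes f and h as one projection
bn = nn.BatchNorm2d(channels, affine=False)     # normalization only, no fixed scalars

x_i = torch.randn(4, cond_dim)                  # conditioning input (e.g., a question embedding)
feats = torch.randn(4, channels, 14, 14)        # CNN feature maps

gamma_hat, beta_hat = predictor(x_i).chunk(2, dim=1)
out = gamma_hat[:, :, None, None] * bn(feats) + beta_hat[:, :, None, None]
print(out.shape)                                # torch.Size([4, 16, 14, 14])
```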
Combined with ReLU non-linearities, CBN empowers a conditioning model to manipulate feature maps of a target CNN by scaling them up or down, negating them, shutting them off, selectively thresholding them, and more. Each feature map is modulated independently, giving the conditioning model an ex- ponential (in the number of feature maps) number of ways to affect the feature representation. | 1707.03017#5 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 6 | Rather than output Ëγi,c directly, we output âËγi,c, where:
Ëγi,c = 1 + âËγi,c, (3)
since initially zero-centered Ëγi,c can zero out CNN feature map activations and thus gradients. In our implementation, we opt to output âËγi,c rather than Ëγi,c, but for simplicity, in the rest of this paper, we will explain our method using Ëγi,c.
# 2.2. Model
Our model consists of a linguistic pipeline and a visual pipeline as depicted in Figure 2. The linguistic pipeline processes a question q using a Gated Recurrent Unit (GRU) [19] with 4096 hidden units that takes in learned, 200-dimensional word em- beddings. The ï¬nal GRU hidden state is a question embedding eq. From this embedding, the model predicts the CBN param- eters (γm,n i,· ) for the nth CBN layer of the mth residual block via linear projection with a trainable weight matrix W and bias vector b:
(γm,n i,· , βm,n i,· ) = W m,neq + bm,n (4) | 1707.03017#6 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 7 | (γm,n i,· , βm,n i,· ) = W m,neq + bm,n (4)
The visual pipeline extracts 14 à 14 image features using the conv4 layer of a ResNet-101 [2] pre-trained on ImageNet [20], as done in [10] for CLEVR. Image features are processed by a 3 à 3 convolution followed by several â 3 for our model â CBN residual blocks with 128 feature maps, and a ï¬nal clas- siï¬er. The classiï¬er consists of a 1 à 1 convolution to 512 fea- ture maps, global max-pooling, and a two-layer MLP with 1024 hidden units that outputs a distribution over ï¬nal answers. | 1707.03017#7 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 8 | Each CBN residual block starts with a 1 à 1 convolution followed by two 3 à 3 convolutions with CBN as depicted in Figure 2. Drawing from [11, 21], we concatenate coordinate feature maps indicating relative spatial position (scaled from â1 to 1) to the image features, each residual blockâs input, and the classiï¬erâs input. We train our model end-to-end from scratch with Adam (learning rate 3eâ4) [22], early stopping on the validation set, weight decay (1eâ5), batch size 64, and BN and ReLU throughout the visual pipeline, using only image- question-answer triplets from the training set.
# 3. Experiments
# 3.1. CLEVR dataset | 1707.03017#8 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 9 | # 3. Experiments
# 3.1. CLEVR dataset
CLEVR is a generated dataset of 700K (image, question, answer, program) tuples. Images contain 3D-rendered objects of various shapes, materials, colors, and sizes. Questions are multi-step and compositional in nature, as shown in Figure 1. They range from counting questions ("How many green objects have the same size as the green metallic block?") to comparison questions ("Are there fewer tiny yellow cylinders than yellow metal cubes?") and can be 40+ words long. Answers are each one word from a set of 28 possible answers. Programs are an additional supervisory signal consisting of step-by-step instructions, such as filter shape[cube], relate[right], and count, on how to answer the question. Program labels are difficult to generate or come by for real world datasets. Our model avoids using this extra supervision, learning to reason effectively directly from linguistic and visual input.
# 3.2. Results | 1707.03017#9 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 10 | # 3.2. Results
Our results on CLEVR are shown in Table 1. Our model achieves a new overall state-of-the-art, outperforming humans and previous, leading models, which often use additional pro- gram supervision. Notably, CBN outperforms Stacked Atten- tion networks (CNN+LSTM+SA in 1) by 21.0%. Stacked At- tention networks are highly effective for visual question answer- ing with simpler questions [23] and are the previously leading model for visual reasoning that does not build in reasoning, making them a relevant baseline for CBN. We note also that our modelâs pattern of performance more closely resembles that of humans than other models do. Strong performance (< 1% er- ror) in exist and query attribute categories is perhaps explained by our modelâs close resemblance to standard CNNs, which traditionally excel at these classiï¬cation-type tasks. Our model also demonstrates strong performance on more complex categories such as count and compare attribute. | 1707.03017#10 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 11 | Comparing numbers of objects gives our model more difï¬- culty, understandably so; this question type requires more high- level reasoning steps â querying attributes, counting, and com- paring â than other question type. The best model from [10] beats our model here but is trained with extra supervision via 700K program labels. As shown in Table 1, the equivalent, more comparable model from [10] which uses 9K program labels sig- niï¬cantly underperforms our method in this category. | 1707.03017#11 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 12 | Model Overall Count Exist Compare Numbers Query Attribute Compare Attribute Human [10] 92.6 86.7 96.6 86.5 95.0 96.0 Q-type baseline [10] LSTM [10] CNN+LSTM [10] CNN+LSTM+SA [11] N2NMN* [13] PG+EE (9K prog.)* [10] PG+EE (700K prog.)* [10] CNN+LSTM+RNâ [11] 41.8 46.8 52.3 76.6 83.7 88.6 96.9 95.5 34.6 41.7 43.7 64.4 68.5 79.7 92.7 90.1 50.2 61.1 65.2 82.7 85.7 89.7 97.1 97.8 51.0 69.8 67.1 77.4 84.9 79.1 98.7 93.6 36.0 36.8 49.3 82.6 90.0 92.6 98.1 97.9 51.3 51.8 53.0 75.4 88.7 96.0 98.9 97.1 CNN+GRU+CBN 97.6 94.5 99.2 93.8 99.2 99.0 | 1707.03017#12 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 14 | To understand what our model learns, we use t-SNE [24] to visualize the CBN parameter vectors (γ, β), of 2,000 ran- dom validation points, modulating ï¬rst and last CBN lay- The (γ, β) ers in our model, as shown in Figure 4. parameters of the ï¬rst and last CBN layers are grouped by the low-level and high-level reasoning functions nec- essary to answer CLEVR questions, For respectively. for equal color and example, query color are close for the ï¬rst layer but apart for layer, and the same is true for equal shape the last and query shape, equal size and query size, and equal material and query material. Conversely, equal shape, equal size, and equal material CBN parameters are grouped in the last layer but split in the ï¬rst layer. Similar patterns emerge when visualizing residual block activa- tions. Thus, we see that CBN learns a sort of function-based modularity, directly from language and image inputs and with- out an architectural prior on modularity. Simply with end-to- end training, our model learns to handle not only different types of questions differently, but also different types of question sub- parts differently, working from low-level to high-level processes as is the proper approach to answer CLEVR questions. | 1707.03017#14 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 15 | 3.4 5 6 7 8 9 10 11 12 13 14 15 16 17 1B 19 a 2 2B Program length
Figure 3: Validation error rate by program length.
In the example of Figure 5, our model correctly counts two cyan objects and two yellow objects but simultaneously does not answer that there are the same number of cyan and yellow objects. In fact, it does not answer that the number of cyan blocks is more, less, or equal to the number of yellow blocks. These errors could be prevented by directly minimizing logical inconsistency, which is an interesting avenue for future work orthogonal to our approach.
These types of mistakes in a state-of-the-art visual reasoning model suggest that more work is needed to truly achieve human-like reasoning and logical consistency. We view CLEVR as a curriculum of tasks and believe that the key to the most meaningful and advanced reasoning lies in tackling these last few percentage points of error.
Additionally, we observe that many points that break the previously mentioned clustering patterns do so in meaningful ways. For example, Figure 4 shows that some count questions have last layer CBN parameters far from those of other count questions but close to those of exist questions. Closer examination reveals that these count questions have answers of either 0 or 1, making them similar to exist questions.
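For reference, the kind of projection behind these cluster plots can be sketched as follows. This is a hedged illustration, not the paper's analysis code: the 256-dimensional parameter array and the 13 integer labels below are synthetic placeholders standing in for the collected (γ, β) vectors and their question-function annotations.

```python
# Project per-question CBN parameter vectors to 2D with t-SNE and color them
# by question function, as in the Figure 4 style of analysis.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

params = np.random.randn(2000, 256)                    # placeholder (gamma, beta) vectors
question_family = np.random.randint(0, 13, size=2000)  # placeholder function labels

xy = TSNE(n_components=2, random_state=0).fit_transform(params)
plt.scatter(xy[:, 0], xy[:, 1], c=question_family, s=4, cmap="tab20")
plt.title("t-SNE of CBN parameters, colored by question function")
plt.show()
```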
# 3.4. Error analysis | 1707.03017#15 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 16 | # 3.4. Error analysis
An analysis of our model's errors reveals that 94% of its counting mistakes are off-by-one errors, indicating our model has learned underlying concepts behind counting, such as close relationships between close numbers.
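The kind of check behind this statistic can be sketched as follows; the prediction and target arrays below are synthetic placeholders, not model outputs.

```python
# Among counting questions the model gets wrong, measure how many predictions
# differ from the ground truth by exactly one.
import numpy as np

pred = np.array([3, 2, 5, 0, 7, 4])     # placeholder predicted counts
target = np.array([3, 3, 4, 0, 9, 5])   # placeholder ground-truth counts

wrong = pred != target
off_by_one = np.abs(pred - target) == 1
fraction = (wrong & off_by_one).sum() / wrong.sum()
print(f"{fraction:.0%} of counting mistakes are off-by-one")
```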
# 4. Related Work
One leading approach for visual reasoning is the Program Generator + Execution Engine model from [10]. This approach consists of a sequence-to-sequence Program Generator (PG), which takes in a question and outputs a sequence corresponding to a tree of composable Neural Modules, each of which is a two-layer residual block similar to ours. This tree of Neural Modules is assembled to form the Execution Engine (EE) that then predicts an answer from the image. The PG+EE model uses a strong prior by training with program labels and explicitly modeling the compositional nature of reasoning. Our approach learns to reason directly from textual input without using additional cues or a specialized architecture.
As shown in Figure 3, our CBN model struggles more on questions that require more steps, as indicated by the length of the corresponding CLEVR programs; error rates for questions requiring 10 or fewer steps are around 1.5%, while error rates for questions requiring 17 or more steps are around 5.5%, more than three times higher. | 1707.03017#16 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
Furthermore, the model sometimes makes curious reasoning mistakes a human would not; Figure 5 shows one such example. This modular approach is part of a recent line of work in Neural Module Networks [13, 25, 26]. Of these, End-to-End Module Networks (N2NMN) [13] also tackle visual reasoning but do not perform as well as other approaches. These methods also use strong priors by modeling the compositionality of reasoning, using program-level supervision, and building per-module, hand-crafted neural architectures for specific functions.
[Figure 4 legend: two panels, "First CBN Layer Parameters" (left) and "Last CBN Layer Parameters" (right); points are colored by question function: exist, less_than, greater_than, count, query_material, query_size, query_color, query_shape, equal_color, equal_integer, equal_shape, equal_size, equal_material.]
Figure 4: t-SNE plots of γ, β of the ï¬rst BN layer of the ï¬rst residual block (left) and the last BN layer of the last residual block (right). CBN parameters are grouped by low-level reasoning functions for the ï¬rst layer and by high-level reasoning functions for the last layer. | 1707.03017#17 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
[Figure 5 content: "How many yellow things are there?" (answer: 2); "How many cyan things are there?" (2); "Are there as many yellow things as cyan things?" (No); "Are there more yellow things than cyan things?" (No); "Are there fewer yellow things than cyan things?" (No).]
Figure 5: An interesting failure example where our model counts correctly but compares counts erroneously. Its third answer is incorrect and inconsistent with its other answers.
architecture conditions 50 BN layers of a pre-trained ResNet. We show that a few layers of CBN after a ResNet can also be highly effective, even for complex problems. We also show how CBN models can learn to carry out multi-step processes and rea- son in a structured way â from low-level to high-level. | 1707.03017#18 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 19 | Additionally, CBN is essentially a post-BN, feature-wise afï¬ne conditioning, with BNâs trainable scalars turned off. Thus, there are many interesting connections with other con- ditioning methods. A common approach, used for example in Conditional DCGANs [27], is to concatenate constant feature maps of conditioning information to the input of convolutional layers, which amounts to adding a post-convolutional, feature- wise conditional bias. Other approaches, such as LSTMs [28] and Hierarchical Mixtures of Experts [29], gate an inputâs fea- tures as a function of that same input (rather than a separate, conditioning input), which amounts to a feature-wise, condi- tional scaling, restricted to between 0 and 1. CBN consists of both scaling and shifting, each unrestricted, giving it more ca- pacity than many of these related approaches. We leave explor- ing these connections more in-depth for future work. | 1707.03017#19 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 20 | Relation Networks (RNs) from [11] are another leading ap- proach for visual reasoning. RNs use an MLP to carry out pairwise comparisons over each location of extracted convolu- tional features over an image, including LSTM-extracted ques- tion features as input to this MLP. RNs then element-wise sum over the resulting comparison vectors to form another vector from which a ï¬nal classiï¬er predicts the answer. This approach is end-to-end differentiable and trainable from scratch to high performance, as we show in Table 1. Our approach lifts the explicitly relational aspect of this model, freeing our approach from the use of a comparison-based prior, as well as the scaling difï¬culties of pairwise comparisons over spatial locations. | 1707.03017#20 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 21 | CBN itself has its own line of work. The results of [14, 16] show that the closely related Conditional Instance Normaliza- tion is able to successfully modulate a convolutional style- transfer network to quickly and scalably render an image in a huge variety of different styles, simply by learning to output a different set of BN parameters based on target style. For visual question answering, answering general questions often of natu- ral images, de Vries et al. [15] show that CBN performs highly on real-world VQA and GuessWhat?! datasets, demonstrating CBNâs effectiveness beyond the simpler CLEVR images. Their
5. Conclusion | 1707.03017#21 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 22 | 5. Conclusion
With a simple and general model based on CBN, we show it is possible to achieve state-of-the-art visual reasoning on CLEVR without explicitly incorporating reasoning priors. We show that our model learns an underlying structure required to answer CLEVR questions by ï¬nding clusters in the CBN parameters of our model; earlier parameters are grouped by low-level reason- ing functions while later parameters are grouped by high-level reasoning functions. Simply by manipulating feature maps with CBN, a RNN can effectively use language to inï¬uence a CNN to carry out diverse and multi-step reasoning tasks over an image. It is unclear whether CBN is the most effective general way to use conditioning information for visual reasoning or other tasks, as well as what precisely about CBN is so effective. Other ap- proaches [27, 28, 29, 30, 31, 32, 33] employ a similar, repetitive conditioning, so perhaps there is an underlying principle that ex- plains the success of these approaches. Regardless, we believe that CBN is a general and powerful technique for multi-modal and conditional tasks, especially where more complex structure is involved. | 1707.03017#22 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 23 | 6. Acknowledgements We would like to thank the developers of PyTorch (http: //pytorch.org/) for their elegant deep learning frame- work. Also, our implementation was based off the open-source code from [10]. We thank Mohammad Pezeshki, Dzmitry Bah- danau, Yoshua Bengio, Nando de Freitas, Joelle Pineau, Olivier Pietquin, J´er´emie Mary, Chin-Wei Huang, Layla Asri, and Max Smith for helpful feedback and discussions, as well as Justin Johnson for CLEVR test set evaluations. We thank NVIDIA for donating a DGX-1 computer used in this work. We also acknowledge FRQNT through the CHIST-ERA IGLU project and CPER Nord-Pas de Calais, Coll`ege Doctoral Lille Nord de France and FEDER DATA Advanced data science and technolo- gies 2015-2020 for funding our research.
7. References [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImagenet clas- siï¬cation with deep convolutional neural networks,â in Proc. of NIPS, 2012. | 1707.03017#23 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 24 | [2] K. He, X. Zhang, S. Ren, and J. Sun, âDeep residual learning for image recognition,â in Proc. of CVPR, 2016.
[3] K. Cho, B. van Merrienboer, C¸ . G¨ulc¸ehre, F. Bougares, H. Schwenk, and Y. Bengio, âLearning phrase representations us- ing RNN encoder-decoder for statistical machine translation,â in Proc. of EMNLP, vol. abs/1406.1078, 2014.
[4] I. Sutskever, O. Vinyals, and Q. V. Le, âSequence to sequence learning with neural networks,â in Proc. of NIPS, 2014.
[5] M. Malinowski and M. Fritz, âA multi-world approach to question answering about real-world scenes based on uncertain input,â in Proc. of NIPS, 2014.
[6] D. Geman, S. Geman, N. Hallonquist, and L. Younes, âVisual tur- ing test for computer vision systems,â vol. 112, no. 12. National Acad Sciences, 2015, pp. 3618â3623. | 1707.03017#24 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 25 | [7] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh, âVQA: Visual Question Answering,â in Proc. of ICCV, 2015.
[8] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. Courville, âGuessWhat?! Visual object discovery through multi-modal dialogue,â in Proc. of CVPR, 2017.
[9] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zit- nick, and R. B. Girshick, âCLEVR: A diagnostic dataset for com- positional language and elementary visual reasoning,â in Proc. of CVPR, 2017.
[10] J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, F. Li, C. L. Zitnick, and R. B. Girshick, âInferring and executing programs for visual [Online]. Available: reasoning,â 2017. http://arxiv.org/abs/1705.03633 | 1707.03017#25 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 26 | [11] A. Santoro, D. Raposo, D. G. T. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, , and T. Lillicrap, âA simple neural network module for relational reasoning,â CoRR, vol. abs/1706.01427, 2017. [Online]. Available: http://arxiv.org/abs/ 1706.01427
[12] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh, âMaking the V in VQA matter: Elevating the role of image un- derstanding in Visual Question Answering,â in Proc. of CVPR, 2017.
[13] R. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko, âLearning to reason: End-to-end module networks for visual question answering,â CoRR, vol. abs/1704.05526, 2017. [Online]. Available: http://arxiv.org/abs/1704.05526
[14] V. Dumoulin, J. Shlens, and M. Kudlur, âA learned representation for artistic style,â in Proc. of ICLR, 2017. | 1707.03017#26 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 | [
{
"id": "1707.00683"
}
] |
1707.03017 | 27 | [15] H. de Vries, F. Strub, J. Mary, H. Larochelle, O. Pietquin, and A. C. Courville, âModulating early visual processing by language,â arXiv preprint arXiv:1707.00683, 2017. [Online]. Available: http://arxiv.org/abs/1707.00683
[16] G. Ghiasi, H. Lee, M. Kudlur, V. Dumoulin, and J. Shlens, âExploring the structure of a real-time, arbitrary neural artistic stylization network,â CoRR, vol. abs/1705.06830, 2017. [Online]. Available: http://arxiv.org/abs/1705.06830
[17] T. Kim, I. Song, and Y. Bengio, âDynamic layer normalization for adaptive neural acoustic modeling in speech recognition,â in Proc. of InterSpeech, 2017.
[18] S. Ioffe and C. Szegedy, âBatch normalization: Accelerating deep network training by reducing internal covariate shift,â in Proc. of ICML, 2015. | 1707.03017#27 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
1707.03017 | 28 | [19] J. Chung, C¸ . G¨ulc¸ehre, K. Cho, and Y. Bengio, âEmpirical evalu- ation of gated recurrent neural networks on sequence modeling,â in Deep Learning workshop at NIPS, 2014.
[20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li, âImagenet large scale visual recognition challenge,â In- ternational Journal of Computer Vision, vol. 115, no. 3, pp. 211â 252, 2015.
[21] N. Watters, A. Tachetti, T. Weber, R. Pascanu, P. Battaglia, interaction networks,â CoRR, vol. , and D. Zoran, âVisual abs/1706.01433, 2017. [Online]. Available: http://arxiv.org/abs/ 1706.01433
[22] D. P. Kingma and J. Ba, âAdam: A method for stochastic opti- mization,â in Proc. of ICLR, 2015. | 1707.03017#28 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
1707.03017 | 29 | [23] Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola, âStacked atten- tion networks for image question answering,â in Proc. of CVPR, 2016.
[24] L. van der Maaten and G. Hinton, âVisualizing data using t-sne,â JMLR, vol. 9, no. Nov, pp. 2579â2605, 2008.
[25] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, âNeural mod- ule networks,â in Proc. of CVPR, 2016.
[26] J. Andreas, R. Marcus, T. Darrell, and D. Klein, âLearning to compose neural networks for question answering,â in Proc. of NAACL, 2016.
âUnsuper- and vised representation learning with deep convolutional gen- erative [Online]. Available: http://arxiv.org/abs/1511.06434 | 1707.03017#29 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
1707.03017 | 30 | âUnsuper- and vised representation learning with deep convolutional gen- erative [Online]. Available: http://arxiv.org/abs/1511.06434
[28] S. Hochreiter and J. Schmidhuber, âLong short-term memory,â Neural Comput., vol. 9, no. 8, pp. 1735â1780, Nov. 1997. [Online]. Available: http://dx.doi.org/10.1162/neco.1997.9.8. 1735
[29] M. I. Jordan and R. A. Jacobs, âHierarchical mixtures of experts and the em algorithm,â Neural Comput., vol. 6, http: no. 2, pp. 181â214, Mar. 1994. [Online]. Available: //dx.doi.org/10.1162/neco.1994.6.2.181
[30] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu, âWavenet: A generative model for raw audio,â 2016. [Online]. Available: http://arxiv.org/abs/1609.03499 | 1707.03017#30 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
1707.03017 | 31 | [31] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves et al., âConditional image generation with pixelcnn de- coders,â in Proc. of NIPS, 2016.
[32] S. E. Reed, A. van den Oord, N. Kalchbrenner, S. G. Colmenarejo, Z. Wang, D. Belov, and N. de Freitas, âParallel multiscale autoregressive density estimation,â 2017. [Online]. Available: http://arxiv.org/abs/1703.03664
[33] S. Reed, A. van den Oord, N. Kalchbrenner, V. Bapst, M. Botvinick, and N. de Freitas, âGenerating interpretable images with controllable structure,â in Proc. of ICLR, 2017. | 1707.03017#31 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
1707.02286 | 1 | # Abstract
The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper we explore how a rich environment can help to promote the learning of complex behavior. Specifically, we train agents in diverse environmental contexts, and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion -- behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment without explicit reward-based guidance. A visual depiction of highlights of the learned behavior can be viewed in this video.
# Introduction | 1707.02286#1 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 2 | # Introduction
Reinforcement learning has demonstrated remarkable progress, achieving high levels of performance in Atari games [1], 3D navigation tasks [2, 3], and board games [4]. What is common among these tasks is that there is a well-deï¬ned reward function, such as the game score, which can be optimised to produce the desired behaviour. However, there are many other tasks where the ârightâ reward function is less clear, and optimisation of a naïvely selected one can lead to surprising results that do not match the expectations of the designer. This is particularly prevalent in continuous control tasks, such as locomotion, and it has become standard practice to carefully handcraft the reward function, or else elicit a reward function from demonstrations.
Reward engineering has led to a number of successful demonstrations of locomotion behaviour, however, these examples are known to be brittle: they can lead to unexpected results if the reward function is modiï¬ed even slightly, and for more advanced behaviours the appropriate reward function is often non-obvious in the ï¬rst place. Also, arguably, the requirement of careful reward design sidesteps a primary challenge of reinforcement learning: how an agent can learn for itself, directly from a limited reward signal, to achieve rich and effective behaviours. In this paper we return to this challenge. | 1707.02286#2 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 3 | Our premise is that rich and robust behaviours will emerge from simple reward functions, if the environment itself contains sufï¬cient richness and diversity. Firstly, an environment that presents a spectrum of challenges at different levels of difï¬culty may shape learning and guide it towards solutions that would be difï¬cult to discover in more limited settings. Secondly, the sensitivity to reward functions and other experiment details may be due to a kind of overï¬tting, ï¬nding idiosyncratic solutions that happen to work within a speciï¬c setting, but are not robust when the agent is exposed to a wider range of settings. Presenting the agent with a diversity of challenges thus increases the
performance gap between different solutions and may favor the learning of solutions that are robust across settings.
We focus on a set of novel locomotion tasks that go signiï¬cantly beyond the previous state-of-the-art for agents trained directly from reinforcement learning. They include a variety of obstacle courses for agents with different bodies (Quadruped, Planar Walker, and Humanoid [5, 6]). The courses are procedurally generated such that every episode presents a different instance of the task. | 1707.02286#3 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 4 | Our environments include a wide range of obstacles with varying levels of difï¬culty (e.g. steepness, unevenness, distance between gaps). The variations in difï¬culty present an implicit curriculum to the agent â as it increases its capabilities it is able to overcome increasingly hard challenges, resulting in the emergence of ostensibly sophisticated locomotion skills which may naïvely have seemed to require careful reward design or other instruction. We also show that learning speed can be improved by explicitly structuring terrains to gradually increase in difï¬culty so that the agent faces easier obstacles ï¬rst and harder obstacles only when it has mastered the easy ones. | 1707.02286#4 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 5 | In order to learn effectively in these rich and challenging domains, it is necessary to have a reliable and scalable reinforcement learning algorithm. We leverage components from several recent approaches to deep reinforcement learning. First, we build upon robust policy gradient algorithms, such as trust region policy optimization (TRPO) and proximal policy optimization (PPO) [7, 8], which bound parameter updates to a trust region to ensure stability. Second, like the widely used A3C algorithm [2] and related approaches [3] we distribute the computation over many parallel instances of agent and environment. Our distributed implementation of PPO improves over TRPO in terms of wall clock time with little difference in robustness, and also improves over our existing implementation of A3C with continuous actions when the same number of workers is used.
The paper proceeds as follows. In Section 2 we describe the distributed PPO (DPPO) algorithm that enables the subsequent experiments, and validate its effectiveness empirically. Then in Section 3 we introduce the main experimental setup: a diverse set of challenging terrains and obstacles. We provide evidence in Section 4 that effective locomotion behaviours emerge directly from simple rewards; furthermore we show that terrains with a "curriculum" of difficulty encourage much more rapid progress, and that agents trained in more diverse conditions can be more robust.
# 2 Large scale reinforcement learning with Distributed PPO | 1707.02286#5 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 6 | # 2 Large scale reinforcement learning with Distributed PPO
Our focus is on reinforcement learning in rich simulated environments with continuous state and action spaces. We require algorithms that are robust across a wide range of task variation, and that scale effectively to challenging domains. We address each of these issues in turn.
Robust policy gradients with Proximal Policy Optimization Deep reinforcement learning algo- rithms based on large-scale, high-throughput optimization methods, have produced state-of-the-art results in discrete and low-dimensional action spaces, e.g. on Atari games [9] and 3D navigation [2, 3]. In contrast, many prior works on continuous action spaces (e.g. [10, 7, 11, 12, 6, 13]), although impressive, have focused on comparatively small problems, and the use of large-scale, distributed optimization is less widespread and the corresponding algorithms are less well developed (but see e.g. [14, 15, 16]). We present a robust policy gradient algorithm, suitable for high-dimensional continuous control problems, that can be scaled to much larger domains using distributed computation. | 1707.02286#6 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 7 | Policy gradient algorithms provide an attractive paradigm for continuous control. They operate by directly maximizing the expected sum of rewards J(θ) = E_{ρ_θ(τ)}[Σ_t γ^{t−1} r(s_t, a_t)] with respect to the parameters θ of the stochastic policy π_θ(a|s). The expectation is with respect to the distribution of trajectories τ = (s_0, a_0, s_1, ...) induced jointly by the policy π_θ and the system dynamics p(s_{t+1}|s_t, a_t): ρ_θ(τ) = p(s_0)π_θ(a_0|s_0)p(s_1|s_0, a_0)... The gradient of the objective with respect to θ is given by ∇_θ J = E_{ρ_θ}[Σ_t ∇_θ log π_θ(a_t|s_t)(R_t − b_t)], where R_t = Σ_{t'≥t} γ^{t'−t} r(s_{t'}, a_{t'}) and b_t is a baseline that does not depend on a_t or future states and actions. The baseline is often chosen to be b_t = V^θ(s_t) = E_{ρ_θ}[R_t | s_t]. In practice the expected future return is typically approximated with a sample rollout and V^θ is replaced by a learned approximation V_φ(s) with parameters φ. Policy gradient estimates can have high variance | 1707.02286#7 | Emergence of Locomotion Behaviours in Rich Environments |
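As a rough, non-authoritative illustration of the estimator just described (not code from the paper), a NumPy sketch with invented rollout shapes:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{t' >= t} gamma^(t'-t) * r_{t'} for a single rollout."""
    returns = np.zeros(len(rewards))
    acc = 0.0
    for t in reversed(range(len(rewards))):
        acc = rewards[t] + gamma * acc
        returns[t] = acc
    return returns

def policy_gradient_estimate(grad_log_probs, rewards, baselines, gamma=0.99):
    """Monte-Carlo estimate of grad_theta J = E[sum_t grad log pi(a_t|s_t) (R_t - b_t)].

    grad_log_probs: array (T, n_params) of grad_theta log pi_theta(a_t|s_t)
    baselines:      array (T,) of b_t = V(s_t) predictions (here assumed given)
    """
    advantages = discounted_returns(rewards, gamma) - baselines
    return (grad_log_probs * advantages[:, None]).sum(axis=0)

# toy usage with random rollout data
T, n_params = 5, 3
rng = np.random.default_rng(0)
g = policy_gradient_estimate(rng.normal(size=(T, n_params)),
                             rng.normal(size=T), np.zeros(T))
print(g.shape)  # (3,)
```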
1707.02286 | 8 | approximated with a sample rollout and V^θ is replaced by a learned approximation V_φ(s) with parameters φ. Policy gradient estimates can have high variance (e.g. [18]) and algorithms can be sensitive to the settings of their hyperparameters. Several approaches have been proposed to make policy gradient algorithms more robust. One effective measure is to employ a trust region constraint that restricts | 1707.02286#8 | Emergence of Locomotion Behaviours in Rich Environments |
1707.02286 | 10 | the amount by which any update is allowed to change the policy [19, 7, 14]. A popular algorithm that makes use of this idea is trust region policy optimization (TRPO; [7]). In every iteration, given current parameters θ_old, TRPO collects a (relatively large) batch of data and optimizes the surrogate loss J_TRPO(θ) = E_{ρ_{θ_old}(τ)}[Σ_t γ^{t−1} (π_θ(a_t|s_t) / π_{θ_old}(a_t|s_t)) A_{θ_old}(s_t, a_t)] subject to a constraint on how much the policy is allowed to change, expressed in terms of the Kullback-Leibler divergence KL[π_{θ_old} | π_θ] < δ. A_θ is the advantage function, given as A_θ(s_t, a_t) = E_θ[R_t | s_t, a_t] − V^θ(s_t). The Proximal Policy Optimization (PPO) algorithm [8] can be seen as an approximate version of TRPO that relies only on first order gradients, making it more convenient to use with recurrent neural networks (RNNs) and in a large-scale distributed setting. The trust region constraint is implemented via a regularization term. The coefficient of this regularization term is adapted depending | 1707.02286#10 | Emergence of Locomotion Behaviours in Rich Environments |
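A minimal sketch of the KL-regularized surrogate that replaces TRPO's hard constraint, assuming sampled log-probabilities and advantages are already available; the names and the Monte-Carlo KL estimate are our own choices, not the paper's implementation:

```python
import numpy as np

def ppo_kl_objective(logp_new, logp_old, advantages, kl_coef):
    """Sample-based J(theta) = mean_t[(pi_theta/pi_old) * A_t] - lambda * KL[pi_old || pi_theta].

    logp_new / logp_old: log pi(a_t|s_t) under the current / behaviour policy, shape (T,).
    The KL term uses the Monte-Carlo estimate E_old[log pi_old - log pi_theta],
    valid because the actions were sampled from pi_old.
    """
    ratio = np.exp(logp_new - logp_old)
    surrogate = np.mean(ratio * advantages)
    kl_estimate = np.mean(logp_old - logp_new)
    return surrogate - kl_coef * kl_estimate
```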
1707.02286 | 12 | # Algorithm 1 Proximal Policy Optimization (adapted from [8])
for i ∈ {1, ..., N} do
  Run policy π_θ for T timesteps, collecting {s_t, a_t, r_t}
  Estimate advantages Â_t = Σ_{t'>t} γ^{t'−t} r_{t'} − V_φ(s_t)
  π_old ← π_θ
  for j ∈ {1, ..., M} do
    J_PPO(θ) = Σ_t (π_θ(a_t|s_t) / π_old(a_t|s_t)) Â_t − λ KL[π_old | π_θ]
    Update θ by a gradient method w.r.t. J_PPO(θ)
  end for
  for j ∈ {1, ..., B} do
    L_BL(φ) = − Σ_t (Σ_{t'>t} γ^{t'−t} r_{t'} − V_φ(s_t))^2
    Update φ by a gradient method w.r.t. L_BL(φ)
  end for
  if KL[π_old | π_θ] > β_high KL_target then
    λ ← αλ
  else if KL[π_old | π_θ] < β_low KL_target then
    λ ← λ/α
  end if
end for
In Algorithm 1, the hyperparameter KL_target is the desired change in the policy per iteration. The scaling term α > 1 controls the adjustment of the KL-regularization coefficient if the actual change in the policy stayed significantly below or significantly exceeded the target KL (i.e. falls outside the interval [β_low KL_target, β_high KL_target]). | 1707.02286#12 | Emergence of Locomotion Behaviours in Rich Environments |
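The coefficient adaptation rule from Algorithm 1, written out as a small helper; the numeric values of alpha, beta_low and beta_high are placeholders, since the paper defers them to the experimental details:

```python
def adapt_kl_coef(kl_coef, observed_kl, kl_target, alpha=1.5, beta_low=0.5, beta_high=2.0):
    """Adjust the KL-penalty coefficient lambda as in Algorithm 1 (illustrative constants)."""
    if observed_kl > beta_high * kl_target:
        kl_coef *= alpha          # policy moved too far: penalize harder
    elif observed_kl < beta_low * kl_target:
        kl_coef /= alpha          # policy barely moved: relax the penalty
    return kl_coef
```

The multiplicative update keeps the coefficient strictly positive and lets it adapt over several orders of magnitude without extra tuning.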
1707.02286 | 13 | Scalable reinforcement learning with Distributed PPO To achieve good performance in rich, simulated environments, we have implemented a distributed version of the PPO algorithm (DPPO). Data collection and gradient calculation are distributed over workers. We have experimented with both synchronous and asynchronous updates and have found that averaging gradients and applying them synchronously leads to better results in practice. | 1707.02286#13 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
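A schematic of the synchronous variant described here, with in-process "workers" standing in for the distributed setup and `grad_fn` as a hypothetical per-worker gradient routine:

```python
import numpy as np

def synchronous_dppo_step(params, worker_batches, grad_fn, lr=1e-4):
    """One synchronous update: average per-worker gradients, then apply them once.

    grad_fn(params, batch) -> gradient array; worker_batches holds one batch per worker.
    In the real distributed setting each gradient is computed on a separate worker
    and the averaged update is applied on a parameter server before re-broadcasting.
    """
    grads = [grad_fn(params, batch) for batch in worker_batches]
    mean_grad = np.mean(grads, axis=0)
    return params + lr * mean_grad  # gradient ascent on the surrogate objective
```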
1707.02286 | 14 | The original PPO algorithm estimates advantages using the complete sum of rewards. To facilitate the use of RNNs with batch updates while also supporting variable length episodes we follow a similar strategy and use truncated backpropagation through time with a window of length K. This makes it natural (albeit not a requirement) to use K-step returns also for estimating the advantage, i.e. we sum the rewards over the same K-step windows and bootstrap from the value function after K steps: Â_t = Σ_{i=0}^{K−1} γ^i r_{t+i} + γ^K V_φ(s_{t+K}) − V_φ(s_t). The publicly available implementation of PPO by John Schulman adds several modifications to the core algorithm. These include normalization of inputs and rewards as well as an additional term in the loss that penalizes large violations of the trust region constraint. We adopt similar augmentations in the distributed setting but find that sharing and synchronization of various statistics across workers requires some care. The implementation of our distributed PPO (DPPO) is in TensorFlow, the parameters reside on a parameter server, and workers synchronize their parameters after every gradient step. Pseudocode and further details are provided in the supplemental material.
| 1707.02286#14 | Emergence of Locomotion Behaviours in Rich Environments |
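A sketch of the K-step bootstrapped advantage described above; array layout and the terminal-bootstrap handling are our assumptions:

```python
import numpy as np

def k_step_advantages(rewards, values, bootstrap_value, K, gamma=0.99):
    """A_t = sum_{i=0}^{K-1} gamma^i r_{t+i} + gamma^K V(s_{t+K}) - V(s_t), truncated at the window end.

    values: V_phi(s_t) for t = 0..T-1; bootstrap_value: V_phi(s_T) for the state after the
    window (use 0.0 if the episode terminated there).
    """
    T = len(rewards)
    values_ext = np.append(values, bootstrap_value)
    adv = np.zeros(T)
    for t in range(T):
        horizon = min(K, T - t)                       # shorter horizon near the window end
        discounts = gamma ** np.arange(horizon)
        adv[t] = (np.dot(discounts, rewards[t:t + horizon])
                  + gamma ** horizon * values_ext[t + horizon]
                  - values_ext[t])
    return adv
```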
1707.02286 | 15 | [Figure 1 plots average reward versus hours of wall-clock time for varying numbers of DPPO workers; caption below.]
Figure 1: DPPO benchmark performance on the Planar Walker (left), Humanoid (middle), and Memory Reacher (right) tasks. In all cases, DPPO achieves performance equivalent to TRPO, and scales well with the number of workers used. The Memory Reacher task demonstrates that it can be used with recurrent networks.
# 2.1 Evaluation of Distributed PPO
We compare DPPO to several baseline algorithms. The goal of these experiments is primarily to establish that the algorithm allows robust policy optimization with limited parameter tuning and that the algorithm scales effectively. We therefore perform the comparison on a selected number of benchmark tasks related to our research interests, and compare to two algorithmic alternatives: TRPO and continuous A3C. For details of the comparison please see the supplemental material. | 1707.02286#15 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 16 | Benchmark tasks We consider three continuous control tasks for benchmarking the algorithms. All environments rely on the Mujoco physics engine [21]. Two tasks are locomotion tasks in obstacle- free environments and the third task is a planar target-reaching task that requires memory. Planar walker: a simple bipedal walker with 9 degrees-of-freedom (DoF) and 6 torque actuated joints. It receives a primary reward proportional to its forward velocity, additional terms penalize control and the violation of box constraints on torso height and angle. Episodes are terminated early when the walker falls. Humanoid: The humanoid has 28 DoF and 21 acutated joints. The humanoid, too, receives a reward primarily proportional to its velocity along the x-axis, as well as a constant reward at every step that, together with episode termination upon falling, encourage it to not fall. Memory reacher: A random-target reaching task with a simple 2 DoF robotic arm conï¬ned to the plane. The target position is provided for the ï¬rst 10 steps of each episode during which the arm is not allowed to move. When the arm is allowed to move, the target has already disappeared and the | 1707.02286#16 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 17 | for the ï¬rst 10 steps of each episode during which the arm is not allowed to move. When the arm is allowed to move, the target has already disappeared and the RNN memory must be relied upon in order for the arm to reach towards the correct target location. The reward in this task is the distance between the positions of end-effector and target, and it tests the ability of DPPO to optimize recurrent network policies. | 1707.02286#17 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 18 | Results Results depicted in Fig. 1 show that DPPO achieves performance similar to TRPO and that DPPO scales well with the number of workers used, which can signiï¬cantly reduce wall clock time. Since it is fully gradient based it can also be used directly with recurrent networks as demonstrated by the Memory reacher task. DPPO is also faster (in wallclock) than our implementation of A3C when the same number of workers is used.
# 3 Methods: environments and models
Our goal is to study whether sophisticated locomotion skills can emerge from simple rewards when learning from varied challenges with a spectrum of difficulty levels. Having validated our scalable DPPO algorithm on simpler benchmark tasks, we next describe the settings in which we will demonstrate the emergence of more complex behavior.
# 3.1 Training environments
In order to expose our agents to a diverse set of locomotion challenges we use a physical simulation environment roughly analogous to a platform game, again implemented in Mujoco [21]. We procedurally generate a large number of different terrains with a variety of obstacles; a different instance of the terrain and obstacles is generated in each episode.
| 1707.02286#18 | Emergence of Locomotion Behaviours in Rich Environments |
1707.02286 | 19 |
Bodies We consider three different torque-controlled bodies, described roughly in terms of increasing complexity. Planar walker: a simple walking body with 9 DoF and 6 actuated joints constrained to the plane. Quadruped: a simple three-dimensional quadrupedal body with 12 DoF and 8 actuated joints. Humanoid: a three-dimensional humanoid with 21 actuated dimensions and 28 DoF. The bodies can be seen in action in figures 4, 5, and 7 respectively. Note that the Planar walker and Humanoid bodies overlap with those used in the benchmarking tasks described in the previous section, however the benchmark tasks only consisted of simple locomotion in an open plane. | 1707.02286#19 | Emergence of Locomotion Behaviours in Rich Environments |
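For reference, the three bodies summarised as a small configuration table (names are ours, not identifiers from the authors' code):

```python
# Illustrative summary of the bodies described above (degrees of freedom / actuated joints).
BODIES = {
    "planar_walker": {"dof": 9,  "actuated": 6,  "planar": True},
    "quadruped":     {"dof": 12, "actuated": 8,  "planar": False},
    "humanoid":      {"dof": 28, "actuated": 21, "planar": False},
}
```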
1707.02286 | 20 | Rewards We keep the reward for all tasks simple and consistent across terrains. The reward consists of a main component proportional to the velocity along the x-axis, encouraging the agent to make forward progress along the track, plus a small term penalizing torques. For the walker the reward also includes the same box constraints on the pose as in section 2. For the quadruped and humanoid we penalize deviations from the center of the track, and the humanoid receives an additional reward per time-step for not falling. Details can be found in the supplemental material. We note that differences in the reward functions across bodies are the consequence of us adapting previously proposed reward functions (cf. e.g. [12, 18]) rather than the result of careful tuning, and while the reward functions vary slightly across bodies we do not change them to elicit different behaviors for a single body. | 1707.02286#20 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
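A hedged sketch of this reward family; the coefficients and the exact extra terms per body are placeholders, since the precise values live in the paper's supplemental material:

```python
def locomotion_reward(forward_velocity, torques, distance_from_center=0.0,
                      alive_bonus=0.0, torque_cost=1e-3, center_cost=0.0):
    """Forward progress minus a small torque penalty, with optional body-specific terms
    (centerline penalty for quadruped/humanoid, per-step alive bonus for the humanoid).
    All coefficients here are illustrative, not the paper's values.
    """
    return (forward_velocity
            - torque_cost * sum(t * t for t in torques)
            - center_cost * distance_from_center ** 2
            + alive_bonus)
```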
Terrain and obstacles All of our courses are procedurally generated; in every episode a new course is generated based on pre-defined statistics. We consider several different terrain and obstacle types: (a) hurdles: hurdle-like obstacles of variable height and width that the walker needs to jump or climb over; (b) gaps: gaps in the ground that must be jumped over; (c) variable terrain: a terrain with different features such as ramps, gaps, hills, etc.; (d) slalom walls: walls that form obstacles that require walking around; (e) platforms: platforms that hover above the ground which can be jumped on or crouched under. Courses consist of a sequence of random instantiations of the above terrain types within user-specified parameter ranges.
We train on different types of courses: single-type courses (e.g. gaps only, hurdles only, etc.); mixtures of single-type courses (e.g. every episode a different terrain type is chosen); and mixed terrains (individual courses consisting of more than one terrain type). We consider stationary courses, for which the obstacle statistics are effectively fixed over the length of the course, and "curriculum" courses, in which the difficulty of the terrain increases gradually over the length of the course. Fig. 3 shows a few different course types.
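A minimal sketch of this kind of per-episode procedural generation is given below. The segment layout, parameter ranges, and the way difficulty ramps up under the curriculum option are invented for illustration and do not reproduce the exact course statistics used in the experiments.

```python
import random

TERRAIN_TYPES = ["hurdles", "gaps", "variable_terrain", "slalom_walls", "platforms"]

def sample_course(length_m=100.0, segment_m=5.0, types=TERRAIN_TYPES, curriculum=False):
    """Sketch: a course is a sequence of randomly instantiated obstacle segments.
    With curriculum=True the difficulty grows along the track."""
    course, x = [], 0.0
    while x < length_m:
        progress = x / length_m
        difficulty = progress if curriculum else random.random()
        course.append({
            "type": random.choice(types),
            "start_x": x,
            "height": 0.2 + 0.6 * difficulty,        # e.g. hurdle/wall height in metres
            "width": random.uniform(0.2, 1.0),
        })
        x += segment_m
    return course
```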
Observations The agents receive two sets of observations [22]: (1) a set of egocentric, "proprioceptive" features containing joint angles and angular velocities; for the Quadruped and Humanoid these features also contain the readings of a velocimeter, accelerometer, and a gyroscope positioned at the torso providing egocentric velocity and acceleration information, plus contact sensors attached to the feet and legs. The Humanoid also has torque sensors in the joints of the lower limbs. (2) a set of "exteroceptive" features containing task-relevant information including the position with respect to the center of the track as well as the profile of the terrain ahead. Information about the terrain is provided as an array of height measurements taken at sampling points that translate along the x- and y-axis with the body and the density of which decreases with distance from the body. The Planar Walker is confined to the xz-plane (i.e. it cannot move side-to-side), which simplifies its perceptual features. See supplemental material for details.
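The following sketch shows one way the two observation groups could be assembled. The field names and ordering are assumptions on our part; the precise feature layout is defined in the paper's supplemental material.

```python
import numpy as np

def make_observation(joint_angles, joint_velocities, torso_vel, torso_acc, gyro,
                     contacts, distance_from_center, terrain_z_ahead):
    """Sketch of the two observation groups described above (all inputs are 1-D arrays,
    except distance_from_center which is a scalar)."""
    proprioceptive = np.concatenate([
        joint_angles, joint_velocities,      # egocentric joint state
        torso_vel, torso_acc, gyro,          # torso velocimeter / accelerometer / gyro
        contacts,                            # foot and leg contact sensors
    ])
    exteroceptive = np.concatenate([
        [distance_from_center],              # position w.r.t. the centre of the track
        terrain_z_ahead,                     # height samples ahead, denser near the body
    ])
    return {"proprioceptive": proprioceptive, "exteroceptive": exteroceptive}
```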
[Figure 2 image residue: network schematic with a proprioceptive input stream (joints/sensors) and an exteroceptive input stream (terrain); see the caption below.]
1707.02286 | 23 | ac » |fe i ee t 1 i Prop: Joints/Sensors Extero: Terrain etc. âPN itt
Figure 2: Schematic of the network architecture. We use an architecture similar to [22], consisting of a component processing information local to the controlled body (egocentric information; blue) and a modulatory component that processes environment- and task-related "exteroceptive" information such as the terrain shape (green).
Figure 4: Walker skills: Time-lapse images of a representative Planar Walker policy traversing rubble; jumping over a hurdle; jumping over gaps and crouching to pass underneath a platform.
# 3.2 Policy parameterization
Similar to [22] we aim to achieve a separation of concerns between the basic locomotion skills and terrain perception and navigation. We structure our policy into two subnetworks, one of which receives only proprioceptive information, and the other of which receives only exteroceptive information. As explained in the previous paragraph, with proprioceptive information we refer to information that is independent of any task and local to the body, while exteroceptive information includes a representation of the terrain ahead. We compared this architecture to a simple fully connected neural network and found that it greatly increased learning speed. Fig. 2 shows a schematic.
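A minimal PyTorch sketch of such a split policy is shown below. Note that it simply concatenates the two streams before the action head, whereas the architecture of [22] lets the exteroceptive pathway modulate the proprioceptive one; the layer sizes and the Gaussian action head are our own assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamPolicy(nn.Module):
    """Sketch of a policy with separate proprioceptive and exteroceptive subnetworks."""
    def __init__(self, n_proprio, n_extero, n_actions, hidden=128):
        super().__init__()
        self.proprio = nn.Sequential(nn.Linear(n_proprio, hidden), nn.Tanh())
        self.extero = nn.Sequential(nn.Linear(n_extero, hidden), nn.Tanh())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, n_actions))
        self.log_std = nn.Parameter(torch.zeros(n_actions))

    def forward(self, proprio_obs, extero_obs):
        # Each stream sees only its own observation group; outputs are combined here.
        h = torch.cat([self.proprio(proprio_obs), self.extero(extero_obs)], dim=-1)
        mean = self.head(h)
        return torch.distributions.Normal(mean, self.log_std.exp())
```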
# 4 Results
We apply the Distributed PPO algorithm to a variety of bodies, terrains, and obstacles. Our aim is to establish whether simple reward functions can lead to the emergence of sophisticated locomotion skills when agents are trained in rich environments. We are further interested in whether the terrain structure can affect learning success and the robustness of the resulting behavior.
Planar Walker We train the walker on hurdles, gaps, platforms, and variable terrain separately, on a mixed course containing all features interleaved, and on a mixture of terrains (i.e. the walker was placed on different terrains in different episodes). It acquired a robust gait, learned to jump over hurdles and gaps, and to walk over or crouch underneath platforms. All of these behaviors emerged spontaneously, without special-cased shaping rewards to induce each separate behaviour. Figure 4 shows motion sequences of the Planar Walker traversing a rubble-field, jumping over a hurdle, and over gaps, and crouching under a platform. Examples of the respective behaviors can be found in the supplemental video. The emergence of these skills was robust across seeds. At the end of learning the Planar Walker jumped over hurdles nearly as tall as its own body.
Quadruped The quadruped is a generally less agile body than the walker but it adds a third dimension to the control problem. We considered four different terrain types: variable terrain, slalom walls, gaps, and a variation of the hurdles terrain which contained obstacles that can be avoided, and others that require climbing or jumping.
The Quadruped, too, learns to navigate most obstacles quite reliably, with only small variations across seeds. It discovers that jumping up or forward (in some cases with surprising accuracy) is a suitable strategy to overcome hurdles and gaps, and it learns to navigate walls, turning left and right as appropriate – in both cases despite only receiving reward for moving forward. For the variation of the hurdles-terrain it learns to distinguish between obstacles that it can and/or has to climb over, and those it has to walk around. The variable terrain may seem easy but is, in fact, surprisingly hard
Figure 5: Time-lapse images of a representative Quadruped policy traversing gaps (left) and navigating obstacles (right)
[Figure 6 plot residue: panels show normalized returns for Planar Walker and Quadruped policies trained on hurdles vs. simple/flat terrain, evaluated on easy and hard test environments and under changes of friction, rubble, model, and incline; x-axis of the training curves is training steps (1e7). See the caption below.]
Figure 6: a) Curriculum training: Evaluation of policies trained on hurdle courses with different statistics: "regular" courses contain arbitrarily interleaved high and low hurdles (blue); "curriculum" courses gradually increase hurdle height over the course of the track (green). During training we evaluate both policies on validation courses with low/"easy" hurdles (left) and tall/"hard" hurdles (right). The performance of the policy trained on the curriculum courses increases faster. b) Robustness of Planar Walker policies (left) and Quadruped policies (right): We evaluate how training on hurdles (green) increases policy robustness relative to training on flat terrain (blue). Policies are assessed on courses with unobserved changes in ground friction, terrain surface (rubble), strength of the body actuators, and incline of the ground plane. There is a notable advantage in some cases for policies trained on the hurdle terrain. All plots show the average returns normalized for each terrain setting.
because the body shape of the Quadruped is poorly suited (i.e. the legs of the quadruped are short compared to the variations in the terrain). Nevertheless it learns strategies to traverse it reasonably robustly. Fig. 5 shows some representative motion sequences; further examples can be found in the supplemental video.
Analyses We investigate whether the nature of the terrain affects learning. In particular, it is easy to imagine that training on, for instance, very tall hurdles only will not be effective. For training to be successful in our setup it is required that the walker occasionally "solves" obstacles by chance – and the probability of this happening is, of course, minuscule when all hurdles are very tall. We verify this by training a Planar Walker on two different types of hurdles-terrains. The first possesses stationary statistics, with high and low hurdles being randomly interleaved. In the second terrain the difficulty, as given by the minimum and maximum height of the hurdles, increases gradually over the length of the course. We measure learning progress by evaluating policies during learning on two test terrains, an easy one with shallow hurdles and a difficult one with tall hurdles. Results are shown in Fig. 6a for a representative Planar Walker policy. The policy trained on the terrain with gradually increasing difficulty improves faster than the one trained on a stationary terrain.
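The two hurdle-height statistics being compared can be sketched as follows; the height range and the linear ramp are illustrative choices, not the values used to generate the actual courses.

```python
import random

def hurdle_height(x, course_length, curriculum, h_min=0.1, h_max=0.8):
    """Sketch of the two hurdle-course statistics: 'stationary' courses interleave low and
    high hurdles uniformly along the track, while 'curriculum' courses widen the height
    range with distance so early hurdles are almost always clearable by chance."""
    if curriculum:
        progress = x / course_length
        return random.uniform(h_min, h_min + (h_max - h_min) * progress)
    return random.uniform(h_min, h_max)
```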
We further study whether training on varying terrains leads to more robust gaits compared to the usual task of moving forward on a plane. To this end we train Planar Walker and Quadruped policies on a flat course as well as on the (more challenging) hurdles. We then evaluate representative policies from each experiment with respect to their robustness to (a) unobserved variations in surface friction, (b) unobserved rumble-strips, (c) changes in the model of the body, and (d) unobserved inclines/declines of the ground. Results depicted in Fig. 6b show a trend of training on hurdles increasing robustness to other forms of unobserved variation in the terrain.
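A sketch of this kind of robustness sweep is shown below. `run_episode` is a hypothetical helper that rolls out the fixed policy under a single overridden environment setting and returns the episode return; the perturbation values are invented for illustration.

```python
def evaluate_robustness(run_episode, policy, settings=None):
    """Sketch of the robustness probe: evaluate a fixed policy under unobserved changes
    to friction, terrain surface, body model, and ground incline, and report the average
    return per perturbation type."""
    settings = settings or {
        "friction": [0.5, 1.0, 1.5],
        "rubble": [False, True],
        "actuator_strength": [0.8, 1.0, 1.2],   # stands in for 'changes in the model'
        "incline_deg": [-5.0, 0.0, 5.0],
    }
    results = {}
    for name, values in settings.items():
        returns = [run_episode(policy, **{name: v}) for v in values]
        results[name] = sum(returns) / len(returns)
    return results
```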
Humanoid Our final set of experiments considers the 28-DoF Humanoid, a considerably more complex body than Planar Walker and Quadruped. The set of terrains is qualitatively similar to the ones used for the other bodies, including gaps, hurdles, a variable terrain, as well as the slalom walls. We also trained agents on mixtures of the above terrains.
Figure 7: Time-lapse sequences of the Humanoid navigating different terrains
As for the previous experiments we considered a simple reward function, primarily proportional to the velocity along the x-axis (see above). We experimented with two alternative termination conditions: (a) episodes were terminated when the minimum distance between head and feet fell below 0.9m; (b) episodes were terminated when the minimum distance between head and ground fell below 1.1m.
In general, the humanoid presents a considerably harder learning problem, largely because with its relatively large number of degrees of freedom it is prone to exploit redundancies in the task specification and/or to get stuck in local optima, resulting in entertaining but visually unsatisfactory gaits. Learning results tend to be sensitive to the particular algorithm, exploration strategy, reward function, termination condition, and weight initialization.
The results we obtained for the humanoid were indeed much more diverse than for the other two bodies, with significant variations across seeds for the same setting of the hyperparameters. Some of the variations in the behaviors were associated with differences in learning speed and asymptotic performance (suggesting a local optimum); others were not (suggesting alternative solution strategies).
Nevertheless we obtained for each terrain several well-performing agents, both in terms of performance and in terms of visually pleasing gaits. Fig. 7 shows several examples of agents trained on gaps, hurdles, slalom walls, and variable terrain. As in the previous experiments the terrain diversity and the inherent curriculum led the agents to discover robust gaits, the ability to overcome obstacles, to jump across gaps, and to navigate slalom courses. We highlight several solution strategies for each terrain in the supplemental video, including less visually appealing ones. To test the robustness of the learned behaviors we further constructed two test courses with (a) statistics rather different from the training terrains and (b) unobserved perturbations in the form of see-saws and random forces applied to the Humanoid's torso, which is also presented in the video. Qualitatively we see moderately large levels of robustness to these probe challenges (see supplemental video).
# 5 Related work
1707.02286 | 34 | # 5 Related work
Physics-based character animation is a long-standing and active ï¬eld that has produced a large body of work with impressive results endowing simulated characters with locomotion and other movement skills (see [23] for a review). For instance, [24] show sophisticated skill sequencing for maneuvering obstacles on a parametric terrain, while [25, 26, 27] demonstrate how terrain adaptive behaviors or other skilled movements can emerge as the result of optimization problems. While there are very diverse approaches, essentially all rely on signiï¬cant prior knowledge of the problem domain and many on demonstrations such as motion capture data.
Basic locomotion behaviors learned end-to-end via RL have been demonstrated, for instance, by [7, 12, 6, 13] or guided policy search [10]. Locomotion in the context of higher-level tasks has been considered in [22]. Terrain-adaptive locomotion with RL has been demonstrated by [28], but they still impose considerable structure on their solution. Impressive results were recently achieved with learned locomotion controllers for a 3D humanoid body [29], but these rely on a domain-speciï¬c structure and human motion capture data to bootstrap the movement skills for navigating ï¬at terrains. | 1707.02286#34 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
The idea of curricula is long-standing in the machine learning literature (e.g. [30]). It has been exploited for learning movement skills for instance by [31]. The present work combines and develops elements from many of these research threads, but pushes uniquely far in a particular direction – using simple RL rewards and curriculum training to produce adaptive locomotion in challenging environments while imposing only limited structure on the policy and behavior.
# 6 Discussion
We have investigated the question whether and to what extent training agents in a rich environment can lead to the emergence of behaviors that are not directly incentivized via the reward function. This departs from the common setup in control where a reward function is carefully tuned to achieve particular solutions. Instead, we use deliberately simple and generic reward functions but train the agent over a wide range of environmental conditions. Our experiments suggest that training on diverse terrain can indeed lead to the development of non-trivial locomotion skills such as jumping, crouching, and turning for which designing a sensible reward is not easy. While we do not claim that environmental variations will be sufï¬cient, we believe that training agents in richer environments and on a broader spectrum of tasks than is commonly done today is likely to improve the quality and robustness of the learned behaviors â and also the ease with which they can be learned. In that sense, choosing a seemingly more complex environment may actually make learning easier.
# Acknowledgments
1707.02286 | 36 | # Acknowledgments
We thank Joseph Modayil and many other colleagues at DeepMind for helpful discussions and comments on the manuscript.
9
# References
[1] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015. [2] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.
[3] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. | 1707.02286#36 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 37 | [4] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
[5] Yuval Tassa, Tom Erez, and Emanuel Todorov. Synthesis and stabilization of complex behaviors through online trajectory optimization. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 4906â4913. IEEE, 2012.
[6] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015. [7] John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy
optimization. In ICML, pages 1889â1897, 2015.
[8] Pieter Abbeel and John Schulman. Deep reinforcement learning through policy optimization. Tuto- rial at Neural Information Processing Systems, 2016. URL https://nips.cc/Conferences/2016/ Schedule?showEvent=6198. | 1707.02286#37 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 38 | [9] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
[10] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[11] S Levine, C Finn, T Darrell, and P Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[12] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[13] Nicolas Heess, Gregory Wayne, David Silver, Timothy P. Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015. | 1707.02286#38 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 39 | [14] Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Rémi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efï¬cient actor-critic with experience replay. CoRR, abs/1611.01224, 2016.
[15] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. arXiv preprint arXiv:1610.00633, 2016.
[16] Ivaylo Popov, Nicolas Heess, Timothy P. Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin A. Riedmiller. Data-efï¬cient deep reinforcement learning for dexterous manipulation. CoRR, abs/1704.03073, 2017.
[17] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
[18] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. CoRR, abs/1604.06778, 2016. | 1707.02286#39 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 40 | [19] Jan Peters, Katharina Mülling, and Yasemin Altün. Relative entropy policy search. In Proceedings of the Twenty-Fourth AAAI Conference on Artiï¬cial Intelligence (AAAI 2010), 2010.
[20] PPO. https://github.com/joschu/modular_rl, 2016. [21] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026â5033. IEEE, 2012.
[22] Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016.
[23] Thomas Geijtenbeek and Nicolas Pronost. Interactive character animation using simulated physics: A state-of-the-art review. In Computer Graphics Forum, volume 31, pages 2492â2515. Wiley Online Library, 2012. | 1707.02286#40 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 41 | [24] Libin Liu, KangKang Yin, Michiel van de Panne, and Baining Guo. Terrain runner: control, parameteriza- tion, composition, and planning for highly dynamic motions. ACM Transactions on Graphics (TOG), 31 (6):154, 2012.
[25] Jia-chi Wu and Zoran Popovi´c. Terrain-adaptive bipedal locomotion control. ACM Transactions on Graphics, 29(4):72:1â72:10, Jul. 2010.
[26] Igor Mordatch, Martin De Lasa, and Aaron Hertzmann. Robust physics-based locomotion using low- dimensional planning. ACM Transactions on Graphics (TOG), 29(4):71, 2010.
[27] Igor Mordatch, Emanuel Todorov, and Zoran Popovic. Discovery of complex behaviors through contact- invariant optimization. ACM Trans. Graph., 31(4):43:1â43:8, 2012.
[28] Xue Bin Peng, Glen Berseth, and Michiel van de Panne. Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2016), 35(4), 2016. | 1707.02286#41 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
1707.02286 | 42 | [29] Xue Bin Peng, Glen Berseth, KangKang Yin, and Michiel van de Panne. Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2017), 36(4), 2017.
[30] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In International Conference on Machine Learning, ICML, 2009.
[31] Andrej Karpathy and Michiel Van De Panne. Curriculum learning for motor skills. In Canadian Conference on Artiï¬cial Intelligence, pages 325â330. Springer, 2012.
# A Distributed PPO
# A.1 Algorithm details
Pseudocode for the Distributed PPO algorithm is provided in Algorithm Boxes 2 and 3. W is the number of workers; D sets a threshold for the number of workers whose gradients must be available to update the parameters. M and B are the numbers of sub-iterations with policy and baseline updates given a batch of data points. T is the number of data points collected per worker before parameter updates are computed. K is the number of time steps for computing K-step returns and truncated backprop through time (for RNNs).
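For readability, these hyperparameters can be grouped into a single configuration object; the sketch below uses placeholder values rather than the settings used in the experiments.

```python
from dataclasses import dataclass

@dataclass
class DPPOConfig:
    """Illustrative grouping of the DPPO hyperparameters defined above."""
    W: int = 16    # number of workers
    D: int = 2     # update once gradients from at least W - D workers have arrived
    M: int = 10    # policy sub-iterations per batch
    B: int = 10    # baseline sub-iterations per batch
    T: int = 512   # data points collected per worker before an update
    K: int = 20    # K-step returns / truncated backprop-through-time length
```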
# Algorithm 2 Distributed Proximal Policy Optimization (chief)
1707.02286 | 43 | # Algorithm 2 Distributed Proximal Policy Optimization (chief)
for i ∈ {1, ..., N} do
    for j ∈ {1, ..., M} do
        Wait until at least W − D gradients wrt. θ are available
        average gradients and update global θ
    end for
    for j ∈ {1, ..., B} do
        Wait until at least W − D gradients wrt. φ are available
        average gradients and update global φ
    end for
end for
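In other words, the chief simply waits until enough worker gradients have arrived, averages them, and updates the global parameters. A minimal Python sketch of one such update is given below; the queue-based interface and the plain SGD step are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def chief_step(grad_queue, params, learning_rate, num_workers, max_dropped):
    """One chief update: wait for gradients from at least W - D workers,
    average them, and apply a gradient step to the global parameters.

    `grad_queue` is assumed to be a blocking queue (e.g. queue.Queue) that
    workers fill with lists of numpy arrays shaped like `params`.
    """
    needed = num_workers - max_dropped          # wait for at least W - D gradients
    grads = [grad_queue.get() for _ in range(needed)]
    for i, p in enumerate(params):
        avg_grad = np.mean([g[i] for g in grads], axis=0)
        p -= learning_rate * avg_grad           # in-place step on the shared parameters
    return params
```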
# Algorithm 3 Distributed Proximal Policy Optimization (worker) | 1707.02286#43 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 44 | for i ∈ {1, ..., N} do
    for w ∈ {1, ..., T/K} do
        Run policy π_θ for K timesteps, collecting {s_t, a_t, r_t} for t ∈ {(i − 1)K, ..., iK − 1}
        Estimate return R_t = Σ_{t'=t}^{iK−1} γ^{t'−t} r_{t'} + γ^K V_φ(s_{iK})
        Estimate advantages Â_t = R_t − V_φ(s_t)
        Store partial trajectory information
    end for
    π_old ← π_θ
    for m ∈ {1, ..., M} do
        J_PPO(θ) = Σ_{t=1}^{T} (π_θ(a_t|s_t) / π_old(a_t|s_t)) Â_t − λ KL[π_old | π_θ] − ξ max(0, KL[π_old | π_θ] − 2 KL_target)²
        if KL[π_old | π_θ] > 4 KL_target then
            break and continue with next outer iteration i + 1
        end if
        Compute ∇_θ J_PPO
        send gradient wrt. θ to chief
        wait until gradient accepted or dropped; update parameters
    end for
    for b ∈ {1, ..., B} do
        L_BL(φ) = − Σ_{t=1}^{T} (R_t − V_φ(s_t))²
        Compute ∇_φ L_BL
        send gradient wrt. φ to chief
        wait until gradient accepted or dropped; update parameters
    end for
    if KL[π_old | π_θ] > β_high KL_target then
        λ ← α λ
    else if KL[π_old | π_θ] < β_low KL_target then
        λ ← λ / α
    end if
# end if end for | 1707.02286#44 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 45 | # end if end for
Normalization Following [20] we perform the following normalization steps:
1. We normalize observations (or states s_t) by subtracting the mean and dividing by the standard deviation using the statistics aggregated over the course of the entire experiment.
2. We scale the reward by a running estimate of its standard deviation, again aggregated over the course of the entire experiment.
3. We use per-batch normalization of the advantages.
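A minimal sketch of the running statistics used for steps 1 and 2 is given below; the class and method names are assumptions made for illustration.

```python
import numpy as np

class RunningNorm:
    """Running mean/std aggregated over the whole experiment (Welford's online algorithm)."""

    def __init__(self, shape=()):
        self.count = 0
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean = self.mean + delta / self.count
        self.m2 = self.m2 + delta * (x - self.mean)

    def std(self):
        var = self.m2 / max(self.count, 1)
        return np.sqrt(var) + 1e-8

    def normalize(self, x):
        return (x - self.mean) / self.std()

# At every environment step (sketch):
#   obs_stats.update(obs);    obs = obs_stats.normalize(obs)       # step 1
#   rew_stats.update(reward); reward = reward / rew_stats.std()    # step 2
# Advantages are normalized per batch (step 3):
#   adv = (adv - adv.mean()) / (adv.std() + 1e-8)
```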
Sharing of algorithm parameters across workers In the distributed setting we have found it to be important to share relevant statistics for data normalization across workers. Normalization is applied during data collection and statistics are updated locally after every environment step. Local changes to the statistics are applied to the global statistics after data collection when an iteration is complete (not shown in pseudo-code). The time-varying regularization parameter λ is also shared across workers, but its updates are determined from local statistics, namely the average KL computed locally by each worker, and applied separately by each worker with an adjusted α̂ = 1 + (α − 1)/K.
Additional trust region constraint We also adopt an additional penalty term that becomes active when the KL exceeds the desired change by a certain margin (the threshold is 2KLtarget in our case). In our distributed implementation this criterion is tested and applied on a per-worker basis.
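Taken together with the worker pseudocode above, the penalized objective, the early-stopping test, and the per-worker λ adaptation might look as follows in a hedged Python sketch; the ξ, α and β values are assumptions rather than published settings.

```python
import numpy as np

def ppo_penalized_loss(ratio, advantage, kl, lam, kl_target, xi=50.0):
    """Surrogate objective with a KL penalty weighted by lam plus the extra hinge
    penalty that only activates once KL exceeds 2 * kl_target.
    `ratio` (= pi_theta / pi_old) and `advantage` are per-timestep arrays;
    `kl` is the mean KL[pi_old || pi_theta]; xi is an assumed hinge weight."""
    hinge = max(0.0, kl - 2.0 * kl_target) ** 2
    return float(np.mean(ratio * advantage)) - lam * kl - xi * hinge

def early_stop(kl, kl_target):
    """Abort remaining policy sub-iterations once the KL has grown too large,
    mirroring the 4 * KL_target test in the worker pseudocode."""
    return kl > 4.0 * kl_target

def adapt_lambda(lam, kl, kl_target, alpha=1.5, beta_high=2.0, beta_low=0.5, K=1):
    """Per-worker lambda update with the softened step alpha_hat = 1 + (alpha - 1) / K,
    where K is the divisor used in the text above."""
    alpha_hat = 1.0 + (alpha - 1.0) / K
    if kl > beta_high * kl_target:
        return lam * alpha_hat
    if kl < beta_low * kl_target:
        return lam / alpha_hat
    return lam
```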
Stability is further improved by early stopping when changes lead to too large a change in the KL. | 1707.02286#45 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 46 | Stability is further improved by early stopping when changes lead to too large a change in the KL.
# A.2 Algorithm comparison
TRPO has been established as a robust algorithm that learns high-performing policies and requires little parameter tuning. Our primary concern was therefore whether DPPO can achieve results comparable to TRPO. Secondarily, we were interested in whether the algorithm scales to large numbers of workers and allows speeding up experiments where large numbers of data points are required to obtain reliable gradient estimates. We therefore compare to TRPO in a regime where a large number of samples is used to compute parameter updates (N = 100000). For simple tasks we expect TRPO to produce good results in this regime (for the benchmark tasks a smaller N would likely be sufficient). | 1707.02286#46 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 47 | For DPPO we perform a coarse search over learning rate for policy and baseline. All experiments in section 2.1 use the same learning rates (0.00005 and 0.0001, respectively). In each iteration we use batches of 64000 (walker), 128000 (humanoid), and 24000 (reacher) time steps. Data collection and gradient computation are distributed across varying numbers of workers. Due to early termination this number is sometimes smaller (when an episode terminates early, the remaining steps in the current unroll window of length K are ignored during gradient calculation). An alternative point of comparison would be to use a fixed overall number of time steps and vary the number of time steps per worker.
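For reference, the quoted settings can be collected in a small configuration mapping; the key names are assumptions and the numbers are simply restated from the paragraph above.

```python
# Values restated from the text above; grouping and names are illustrative only.
DPPO_SETTINGS = {
    "policy_learning_rate": 5e-5,
    "baseline_learning_rate": 1e-4,
    "batch_size_timesteps": {
        "planar_walker": 64_000,
        "humanoid": 128_000,
        "memory_reacher": 24_000,
    },
}
```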
Networks use tanh nonlinearities and parameterize the mean and standard deviation of a conditional Gaussian distribution over actions. Network sizes were as follows: Planar Walker: 300,200; Humanoid: 300,200,100; Memory Reacher: 200; and 100 LSTM units.
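A minimal sketch of such a policy network in plain NumPy is shown below, using the Planar Walker sizes (300, 200); the initialization scheme and the state-independent log-std are simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def init_gaussian_policy(obs_dim, act_dim, hidden=(300, 200), seed=0):
    """Tanh MLP that parameterizes the mean of a Gaussian over actions, plus a
    log-std vector. Hidden sizes follow the Planar Walker setting quoted above."""
    rng = np.random.default_rng(seed)
    sizes = (obs_dim, *hidden, act_dim)
    weights = [rng.normal(0, 1 / np.sqrt(m), size=(m, n))
               for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]
    log_std = np.zeros(act_dim)
    return weights, biases, log_std

def policy_forward(params, obs):
    """Return (mean, std) of the action distribution for a single observation."""
    weights, biases, log_std = params
    h = obs
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ w + b)            # tanh nonlinearities as stated above
    mean = h @ weights[-1] + biases[-1]   # linear output layer for the mean
    return mean, np.exp(log_std)
```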
For A3C with continuous actions we also perform a coarse search over relevant hyperparameters, especially the learning rate and entropy cost. Due to differences in the code base, network architectures were not exactly identical to those used for DPPO but used the same numbers of hidden units. | 1707.02286#47 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 48 | We note that a like-for-like comparison of the algorithms is difficult since they are implemented in different code bases, and especially for distributed algorithms performance in wall clock time is affected both by conceptual changes to the algorithm and by implementation choices. A more careful benchmarking of several recent high-throughput algorithms will be the subject of future work.
# B Additional experimental details
# B.1 Observations
For all courses, terrain height (and platform height where applicable) was provided as a heightfield where each "pixel" indicates the height of the terrain (platform) within a small region. This heightfield was then sampled at particular points relative to the position of the agent.
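As a concrete illustration of this sampling step, here is a minimal sketch; the function and argument names are assumptions, and the real observation code is not given in this appendix. The planar walker case is effectively one-dimensional, but the same lookup applies.

```python
import numpy as np

def sample_heightfield(heightfield, resolution, agent_xy, offsets):
    """Look up terrain height at points placed relative to the agent.

    heightfield: 2-D array of per-cell terrain heights ("pixels").
    resolution:  metres per cell.
    agent_xy:    (x, y) position of the agent in metres.
    offsets:     (N, 2) array of sampling-point offsets relative to the agent,
                 e.g. 50 points from -2 m to +8 m along x for the planar walker.
    """
    points = np.asarray(agent_xy) + np.asarray(offsets)
    idx = np.clip((points / resolution).astype(int), 0,
                  np.array(heightfield.shape) - 1)   # clamp to the terrain extent
    return heightfield[idx[:, 0], idx[:, 1]]
```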
Planar walker The exteroceptive features for the planar walker consist of sampling points of the terrain and, where applicable, platform height. There were 50 equally spaced points along the x-axis
starting 2m behind the agent and extending 8m ahead. Platform height was represented separately from terrain height with a separate set of sampling points. In addition, the exteroceptive features contained the height of the walker body above the ground (measured at its current location) as well as the difference between the agent's position and the next sampling grid center (the intention behind this last input is to resolve the aliasing arising from the piece-wise constant terrain representation with finite sampling). | 1707.02286#48 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 49 | Quadruped & Humanoid The Quadruped and Humanoid use the same set of exteroceptive features, effectively a two-dimensional version of what is used for the walker. The sampling points are placed on a variable-resolution grid and range from 1.2m behind the agent to 5.6m ahead of it along the x-axis as well as 4m to the left and to the right. To reduce dimensionality of the input data sampling density decreases with increasing distance from the position of the body. In addition to the height samples the exteroceptive features include the height of the body above the ground, and the x and y distance of the walker body to the next sampling grid center (to reduce aliasing; see above).
# B.2 Rewards | 1707.02286#49 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.02286 | 50 | # B.2 Rewards
Planar walker: r = 10.0·v_x + 0.5·n_z − |h_torso − 1.2| − 10.0·I[h_torso < 0.3] − 0.1·||u||². Here n_z is the projection of the z-axis of the torso coordinate frame onto the z-axis of the global coordinate frame (this value varies from 1.0 to -1.0 depending on whether the Planar Walker's torso is upright or upside down). h_torso is the height of the Planar Walker's torso above the feet. I[·] is the indicator function. v_x is the velocity along the x-axis.
Quadruped: r = v_x + 0.05·n_z − 0.01·||u||², where n_z is the projection of the z-axis of the torso coordinate frame onto the z-axis of the global coordinate frame (this value varies from 1.0 to -1.0 depending on whether the Quadruped is upright or upside down).
Humanoid: r = min(v_x, v_max) − 0.005·(v_y² + v_z²) − 0.05·y² − 0.02·||u||² + 0.02, where v_max is a cutoff for the velocity reward, which we usually set to 4 m/s.
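Read literally, the Planar Walker line above combines a forward-velocity term, an uprightness bonus, a torso-height target, a fall penalty, and a control cost. A hedged Python sketch is given below; the argument names are assumptions, and the inputs are assumed to come from the simulator state.

```python
import numpy as np

def planar_walker_reward(vx, nz, torso_height, controls):
    """Planar walker reward as reconstructed above: forward velocity, uprightness,
    a target torso height of 1.2 m, a large penalty once the torso drops below
    0.3 m, and a quadratic control cost."""
    u = np.asarray(controls)
    return (10.0 * vx
            + 0.5 * nz
            - abs(torso_height - 1.2)
            - 10.0 * float(torso_height < 0.3)
            - 0.1 * float(u @ u))
```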
| 1707.02286#50 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 | [
{
"id": "1610.05182"
},
{
"id": "1509.02971"
},
{
"id": "1507.04296"
},
{
"id": "1611.05397"
},
{
"id": "1610.00633"
},
{
"id": "1506.02438"
},
{
"id": "1504.00702"
}
] |
1707.01891 | 1 | # ABSTRACT
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO.
# INTRODUCTION | 1707.01891#1 | Trust-PCL: An Off-Policy Trust Region Method for Continuous Control | Trust region methods, such as TRPO, are often used to stabilize policy
optimization algorithms in reinforcement learning (RL). While current trust
region strategies are effective for continuous control, they typically require
a prohibitively large amount of on-policy interaction with the environment. To
address this problem, we propose an off-policy trust region method, Trust-PCL.
The algorithm is the result of observing that the optimal policy and state
values of a maximum reward objective with a relative-entropy regularizer
satisfy a set of multi-step pathwise consistencies along any path. Thus,
Trust-PCL is able to maintain optimization stability while exploiting
off-policy data to improve sample efficiency. When evaluated on a number of
continuous control tasks, Trust-PCL improves the solution quality and sample
efficiency of TRPO. | http://arxiv.org/pdf/1707.01891 | Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans | cs.AI | ICLR 2018 | null | cs.AI | 20170706 | 20180222 | [
{
"id": "1605.08695"
},
{
"id": "1707.06347"
},
{
"id": "1702.08165"
},
{
"id": "1704.06440"
},
{
"id": "1509.02971"
},
{
"id": "1707.02286"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
},
{
"id": "1706.00387"
}
] |
1707.01891 | 2 | # INTRODUCTION
The goal of model-free reinforcement learning (RL) is to optimize an agent's behavior policy through trial and error interaction with a black box environment. Value-based RL algorithms such as Q-learning (Watkins, 1989) and policy-based algorithms such as actor-critic (Konda & Tsitsiklis, 2000) have achieved well-known successes in environments with enumerable action spaces and predictable but possibly complex dynamics, e.g., as in Atari games (Mnih et al., 2013; Van Hasselt et al., 2016; Mnih et al., 2016). However, when applied to environments with more sophisticated action spaces and dynamics (e.g., continuous control and robotics), success has been far more limited.
In an attempt to improve the applicability of Q-learning to continuous control, Silver et al. (2014) and Lillicrap et al. (2015) developed an off-policy algorithm DDPG, leading to promising results on continuous control environments. That said, current off-policy methods including DDPG often improve data efficiency at the cost of optimization stability. The behaviour of DDPG is known to be highly dependent on hyperparameter selection and initialization (Metz et al., 2017); even when using optimal hyperparameters, individual training runs can display highly varying outcomes. | 1707.01891#2 | Trust-PCL: An Off-Policy Trust Region Method for Continuous Control | Trust region methods, such as TRPO, are often used to stabilize policy
optimization algorithms in reinforcement learning (RL). While current trust
region strategies are effective for continuous control, they typically require
a prohibitively large amount of on-policy interaction with the environment. To
address this problem, we propose an off-policy trust region method, Trust-PCL.
The algorithm is the result of observing that the optimal policy and state
values of a maximum reward objective with a relative-entropy regularizer
satisfy a set of multi-step pathwise consistencies along any path. Thus,
Trust-PCL is able to maintain optimization stability while exploiting
off-policy data to improve sample efficiency. When evaluated on a number of
continuous control tasks, Trust-PCL improves the solution quality and sample
efficiency of TRPO. | http://arxiv.org/pdf/1707.01891 | Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans | cs.AI | ICLR 2018 | null | cs.AI | 20170706 | 20180222 | [
{
"id": "1605.08695"
},
{
"id": "1707.06347"
},
{
"id": "1702.08165"
},
{
"id": "1704.06440"
},
{
"id": "1509.02971"
},
{
"id": "1707.02286"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
},
{
"id": "1706.00387"
}
] |
1707.01891 | 3 | On the other hand, in an attempt to improve the stability and convergence speed of policy-based RL methods, Kakade (2002) developed a natural policy gradient algorithm based on Amari (1998), which subsequently led to the development of trust region policy optimization (TRPO) (Schulman et al., 2015). TRPO has shown strong empirical performance on difficult continuous control tasks, often outperforming value-based methods like DDPG. However, a major drawback is that such methods are not able to exploit off-policy data and thus require a large amount of on-policy interaction with the environment, making them impractical for solving challenging real-world problems.
Efforts at combining the stability of trust region policy-based methods with the sample efficiency of value-based methods have focused on using off-policy data to better train a value estimate, which can be used as a control variate for variance reduction (Gu et al., 2017a;b).
In this paper, we investigate an alternative approach to improving the sample efficiency of trust region policy-based RL methods. We exploit the key fact that, under entropy regularization, the | 1707.01891#3 | Trust-PCL: An Off-Policy Trust Region Method for Continuous Control | Trust region methods, such as TRPO, are often used to stabilize policy
optimization algorithms in reinforcement learning (RL). While current trust
region strategies are effective for continuous control, they typically require
a prohibitively large amount of on-policy interaction with the environment. To
address this problem, we propose an off-policy trust region method, Trust-PCL.
The algorithm is the result of observing that the optimal policy and state
values of a maximum reward objective with a relative-entropy regularizer
satisfy a set of multi-step pathwise consistencies along any path. Thus,
Trust-PCL is able to maintain optimization stability while exploiting
off-policy data to improve sample efficiency. When evaluated on a number of
continuous control tasks, Trust-PCL improves the solution quality and sample
efficiency of TRPO. | http://arxiv.org/pdf/1707.01891 | Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans | cs.AI | ICLR 2018 | null | cs.AI | 20170706 | 20180222 | [
{
"id": "1605.08695"
},
{
"id": "1707.06347"
},
{
"id": "1702.08165"
},
{
"id": "1704.06440"
},
{
"id": "1509.02971"
},
{
"id": "1707.02286"
},
{
"id": "1611.01224"
},
{
"id": "1606.01540"
},
{
"id": "1706.00387"
}
] |